After Effects CS6 Camera to Blender?

I’m a video editor, and previously did a lot of character animation and modeling. I need to composite an animated character into a 7-second cinematic. I’m aware Blender has camera tracking capability, but I couldn’t seem to get even one good track, and I messed with it for hours, watched tutorials and everything. In After Effects CS6 it’s two clicks (literally), wait two minutes, and it’s camera tracked. I can also use a more advanced plugin to track harder footage, which is also fast and works great. I have found a way to get my After Effects camera into Cinema 4D, but it’s proving extremely difficult to work with.

If I could export/import my camera (and hopefully a solid/null to mark the floor) from After Effects into Blender, I could do some pretty awesome stuff, really fast.

I already found an incomplete script and tried messing around with other things, but I can’t find an effective solution. :frowning:

If anybody knows of a script, plugin, or some other solution to my problem please help me out. Thanks.

BA user Atom had an AE-to-Blender script for a while, but I don’t know if he’s updated it along with the version upgrades of Blender or AE.


I had some success exporting the camera information from After Effects to Collada with this script: http://clintons3d.com/plugins/aftereffects/index.html and then importing it into Blender. Unfortunately Collada doesn’t export all the camera information in a standard way; the sensor size is missing, for example. You can contact Reese at clintons3d.com, he is a great resource. I also developed a patch to correctly import the camera sensor size from a Collada file (you may have some manual editing to do); you can get it here: http://projects.blender.org/tracker/?func=detail&aid=32834&group_id=9&atid=127

My workflow also involved Mocha AE for the camera tracking (it was a complex tripod movement on a ’70s movie).

I ended up just doing it in Cinema 4D in the time I was waiting for this post to be approved.
I ran into Atom’s script multiple times and got an incorrect camera, a problem that Atom admitted he could not fix.

The other camera tracker I use is the older one from The Foundry; it’s a lot more advanced but also a whole lot less streamlined than the in-house AfterFX tracker.

VisualFox, I would try that, but I have no idea how to start. I have found a solution for now, though: Blender > OBJ sequence > Element 3D plugin. This will work very well in many cases, as Element renders way faster than Blender.

Also I was not aware Mocha was capable of camera tracking, I thought it was just advanced 2D tracking?

Thanks guys!

Glad you found a way that works for you. For Mocha you need Mocha AE or Mocha Pro, not the version bundled with After Effects.

Almost a year later… I’m curious if anyone has made any further progress on getting After Effects info into Blender. I still get better camera tracking through the tracker in AE CS6 - and prefer using it - but everything I’ve found through search seems to be centered on going from Blender to AE and not so much the other way. Does anyone here know if anything has advanced on this front, and where I might look for info? Thanks.


I tried again recently, but I just can’t figure out the math for exporting the camera.

My test workflow is this.

  • Make a scene in Blender and export the scene to After Effects using Bartek’s script.
  • Run Bartek’s script in After Effects to generate a comp.
  • Run my script in After Effects, on Bartek’s generated comp, to make a python script.
  • Run the python script in the same Blender scene to round trip back and compare to the original Blender camera.

I can generate a camera from After Effects and export it to Blender, but the rotation is off. The keyframes are intact and the camera moves close to Bartek’s original; it is just not pixel perfect. This is one of those tasks where I wish some math expert could just take a look and produce a formula that works. I am just guessing on axis mapping and scales.

It is the rotation I can’t figure out. Location seems fairly solid.
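For anyone picking this up later, the kind of axis remap being guessed at can be sketched in plain Python. Everything below is an assumption, not the script’s actual mapping: After Effects works in pixels with Y pointing down and Z pointing away from the viewer, while Blender is Z-up and right-handed, so one plausible guess is `(x, y, z)_AE → (x, −z, −y)_Blender` with an arbitrary pixel-to-unit scale.

```python
# Hypothetical AE -> Blender position remap (a guess, not a verified
# formula).  AE: pixels, Y-down, Z away from viewer.  Blender: Z-up,
# right-handed.  The scale factor is arbitrary.

def ae_to_blender_position(x, y, z, scale=0.01):
    """Map an After Effects position to a guessed Blender position."""
    return (x * scale, -z * scale, -y * scale)

# Using the first Mocha keyframe quoted later in this thread:
print(ae_to_blender_position(640.0, 360.0, -1439.46))
```

Rotation is the genuinely hard part (AE’s Z, X, Y Euler order versus Blender’s default XYZ), and a position remap like this says nothing about it, which matches what Atom is seeing.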

I did correct some Python errors due to Blender 2.69 API changes. I am attaching the Adobe After Effects JavaScript in case someone gets this message and can figure out the math.

Good luck

Attachments

ATOM_AE2Blender269_01242014.zip (18 KB)

Thanks Pal - I appreciate the advice. Quite a workaround - and a few flaws to correct as you say - but it’s more than half the battle right there! :wink: I’ve been using Blender since version 1.8 many many moons ago - and it seems we always had to go through these “extreme” procedures to get other software to talk with Blender… :wink: I can remember the first time I wore my fanny to a nub sitting down at the computer getting Blender and Terragen (terrain/world generator/renderer) to talk to each other.

I don’t know how many people are following this, but (as a complete noob) it seems that we’re working this in a very roundabout manner - although that may be down to Blender.

That said, AE will allow a user to copy/paste human-readable transform data for its camera, which may be easier to process than fiddling internally with JavaScript.

I don’t know enough about how Blender operates its animation system, but if we can keyframe transform data (and we know the basic camera data), wouldn’t that just translate directly into a bunch of keyframes?

The camera positional data (from a Mocha 3D camera solve) looks like this:

Transform    Position
    Frame    X pixels    Y pixels    Z pixels    
    0    640    360    -1439.46    
    1    639.985    360.021    -1439.48    
    2    639.99    360.027    -1439.47    

and the rotational data (strangely) like this in Z, X, Y order - this is the Z section for three frames:

Transform    Rotation
    Frame    degrees    
    0    0    
    1    0.0148358    
    2    0.0353341    

for each of the three axes.

That plus the focal length and sensor size (?) of the camera are presumably all you need to put into Blender.

I would do this myself, but I’m a bit (OK, completely) vague on how Blender’s camera animation works; but if it’s keyframeable, wouldn’t these values be adaptable?

I like the idea.
The data could be pasted into a text window document.
Then a Blender python script would parse the data and route it to an object.
Only one script to maintain too.
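The parsing half of that idea is straightforward to sketch. This is only a rough illustration, not an existing script: the sample text mirrors the Mocha position data quoted above, but the header detection, column order, and whitespace handling are all assumptions about what the pasted clipboard actually looks like.

```python
# Sketch: parse pasted AE/Mocha position keyframe rows into
# {frame: (x, y, z)}.  Assumes each keyframe row is a frame number
# followed by three numeric columns; header rows are skipped because
# they don't start with an integer.

def parse_position_keyframes(text):
    keys = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 4 and parts[0].isdigit():
            try:
                x, y, z = (float(v) for v in parts[1:4])
            except ValueError:
                continue  # non-numeric row, e.g. a header
            keys[int(parts[0])] = (x, y, z)
    return keys

sample = """Transform\tPosition
\tFrame\tX pixels\tY pixels\tZ pixels
\t0\t640\t360\t-1439.46
\t1\t639.985\t360.021\t-1439.48
\t2\t639.99\t360.027\t-1439.47
"""
print(parse_position_keyframes(sample)[0])  # -> (640.0, 360.0, -1439.46)
```

Inside Blender, each entry could then drive the camera with `camera.location = ...` followed by `camera.keyframe_insert("location", frame=...)`, after whatever axis remap turns out to be correct.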

I wonder what shows up if you copy and paste a bunch of different layer types to the clipboard? Is the clipboard robust enough to handle say 70 Layers worth of keyframes?

The JavaScript does do some extra processing, like converting footage to an actual material-mapped plane, text to font generation, nulls to empties, and lights.

I had to go with the script and it helped me a lot. To fix the rotation problem of the camera, I simply parented a null to the camera, which was centered in the view with Cmd+Home. I also had the formatting problem with the Python script, but by simply opening it in Adobe Edge Code and then copying and pasting it into Blender, it worked fine. And by setting up a Track To constraint between the camera and the centered null, the rotation problem was resolved! :smiley: Thank you a million times for a great script!