MOCAP to animate Blender Virtual Camera?

What I would like to do is mocap the camera I am shooting my actors on (they are in front of a green screen), then import that mocap data into Blender and parent it to the virtual camera so that it moves exactly how the real-world camera did. I know you can get the same effect via camera tracking, and we are doing that now, but I am interested in trying the approach I described. It would cut down on setup time (placing markers) and post-production time (keying or masking the markers out).
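For what it's worth, once the mocap data is in Blender as an object, the parenting step itself seems straightforward. A rough sketch of what I have in mind (the object names are just placeholders for whatever the mocap import actually produces):

```python
import bpy

cam = bpy.data.objects["Camera"]        # the virtual camera
mocap = bpy.data.objects["mocap_rig"]   # placeholder: whatever object the mocap import creates

# Parent the camera to the mocap object so it inherits its motion,
# keeping the camera's current world position at the moment of parenting.
cam.parent = mocap
cam.matrix_parent_inverse = mocap.matrix_world.inverted()
```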

So here is my question: does anyone know of a DIY/cheap mocap setup that could be put together on a small budget and works well with Blender? Something that uses markers placed on the camera, rather than something like Kinect that just reads an outline and replicates it.

I really appreciate your help!

You mean a motion control rig for your camera? I'm guessing your budget doesn't extend to hiring something like a Milo motion control rig! There are very few cheap options, and most low-cost solutions don't provide a way to export the camera track data to your CG software, so you would still need to track your footage as normal. They do have the advantage of giving you multiple passes with the same camera movement, though, so you can simply match your green screen with your background plate without worrying about your actual camera clips being mismatched. Not ideal, but better than no motion control!

Ideally you would want a camera tracker that exports its data as F-Curves. I have seen such a thing, but I can't remember what it was called, sorry (not useful, I know!). However, I don't know if you can import F-Curves into Blender directly; you may need a small script to do it, something along the lines of the sketch below.

Thank you for taking the time to read this totally (unintentionally) unhelpful post. :confused:
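Just to illustrate the kind of script I mean: assuming the tracker can dump one line per frame of location and rotation to a CSV file (the file path, column order and units below are all made-up assumptions), keyframing the camera from it in Blender creates the F-Curves automatically:

```python
import csv
import bpy

cam = bpy.data.objects["Camera"]
cam.rotation_mode = 'XYZ'

# Assumed format: frame, x, y, z, rx, ry, rz
# (location in Blender units, Euler angles in radians)
with open("/tmp/camera_track.csv") as f:
    for row in csv.reader(f):
        frame = int(row[0])
        cam.location = (float(row[1]), float(row[2]), float(row[3]))
        cam.rotation_euler = (float(row[4]), float(row[5]), float(row[6]))
        # keyframe_insert() creates location/rotation F-Curves as it goes
        cam.keyframe_insert(data_path="location", frame=frame)
        cam.keyframe_insert(data_path="rotation_euler", frame=frame)
```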

PTAM? https://ewokrampage.wordpress.com/

“PTAM (short for Parallel Tracking and Mapping) is a piece of computer vision software which can track the 3D position of a moving camera in real time. It is a SLAM system that does not need any prior information about the world (like markers or known natural feature targets) but instead works out the structure of the world as it goes. PTAM is useful primarily for augmented reality, although it has been applied to other tasks (robot guidance) as well. The system was first demonstrated at the ISMAR 2007 augmented reality conference, and the source code of the PC version has since been made available for general non-commercial use. This blog has news and updates relating to the PTAM source code, which is available at www.robots.ox.ac.uk/~gk/PTAM.”
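If PTAM (or a similar SLAM tracker) works out, the remaining step is getting its poses onto Blender's camera. One wrinkle worth noting: computer-vision trackers usually report a camera that looks down +Z with +Y pointing down, while a Blender camera looks down its local -Z with +Y up, so each pose needs a flip before it is applied. A rough sketch for a single frame, assuming a hypothetical world-from-camera pose (the rotation, translation and object name are placeholders):

```python
import bpy
from mathutils import Matrix

# Placeholder pose for one frame, in the usual computer-vision convention
# (world-from-camera, camera looking down +Z, +Y pointing down).
R = Matrix.Identity(3)
t = (0.0, 0.0, 2.0)

# Blender cameras look down local -Z with +Y up, so flip the Y and Z camera axes.
cv_to_blender = Matrix(((1, 0, 0, 0),
                        (0, -1, 0, 0),
                        (0, 0, -1, 0),
                        (0, 0, 0, 1)))

cam = bpy.data.objects["Camera"]
cam.matrix_world = Matrix.Translation(t) @ R.to_4x4() @ cv_to_blender
cam.keyframe_insert(data_path="location", frame=1)
cam.keyframe_insert(data_path="rotation_euler", frame=1)
```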