Mocap GSoC 2011 - what happened to it?

As I was searching for training materials on mocap, retargeting, etc. for Blender 2.69 / 2.7x, I remembered this: http://wiki.blender.org/index.php/User:Benjycook/GSOC/Proposal

What happened to that project?

I am looking into using iPi Soft to do mocap and then use the BVH files it generates with Blender for my game characters, but it seems there are not a whole lot of people using mocap with Blender.

It’s a catch-22 at the moment: no one uses Blender’s mocap tools because they’re awful, which is perceived as a lack of interest in new mocap tools for Blender. A full retargeting workflow that could automatically figure out what to do with different imported capture files would be a big thing for animation, especially with software like iPi that makes mocap so easy and cheap (I have a small mocap setup with six PS Eye cameras that cost me less than $100 to set up). As it stands, though, it’s pretty much impossible to use that data cleanly in Blender.

Well, iPi Soft for 6 PS Eyes costs ~$1500 :frowning:

For 4 PS Eyes it costs ~$500, which is reasonable. It produces clean BVH files, since the app does all the noise removal and whatnot. It also tracks hands and head using PS Moves. So Blender really just needs retargeting.

I wonder how I would do retargeting manually, if it’s possible at all.

I’m actually surprised this hasn’t been brought up, considering the hardware now available that can do mocap very cheaply, and its importance for a wide range of CG applications. I know there was an add-on released a while back called bloop that used a Kinect for motion tracking and voice-commanded animation recording, but from what I heard it wasn’t very accurate, and I’m going to guess it isn’t bug-free with the latest version of Blender.

Blender doesn’t need actual capturing/analysis tools. What it needs is a robust way to take the BVH files produced by the leading systems on the market and transfer the animations to your character in the most efficient manner.

Although I think that would be a good addition, I’d disagree that it should be the only thing. The point of Blender is to be a complete computer graphics toolkit, not simply one step in the pipeline or a program that depends on others. That’s the reason it has compositing, motion tracking and sculpting tools in the first place. Especially considering how expensive and user-unfriendly the other solutions on the market are, I think this is something that should be considered for Blender, if someone were ever willing to develop it.

Come on, we are going to spiral into what many other threads have spiralled into. Blender cannot possibly be all-in-one. There are several specialized tools that do the job many times better than Blender.

Blender also has a ton of existing tools that are, so to speak, in an alpha state. Adding yet another specialized tool that won’t be as good as what’s already out there is a waste of development time.

$500 is only expensive if you aren’t going to use it to make money. Sure, investing in software you will only use for fun every now and again is expensive, but that’s not what they design such software for.

If you’re spending $500 on PSEyes, you’re shopping wrong O_O

But it would be a useful first thing, no?

http://ipisoft.com/software/basic-edition/
He meant the price of the software.

> For 4 PS Eyes it costs ~$500, which is reasonable.

This is what I was referring to.

Yes, and let me spell it out differently, so perhaps it’s easier to understand.

iPi Soft with support for four PS Eye cameras costs ~$500.

The reason it was not adopted was probably that it didn’t work all that well. I tested the GSoC mocap code: sometimes it worked great, but most of the time a lot less so.

The mocap add-on mostly succeeds at retargeting simple movements, though even that takes a lot of trial and error. I was able to test it with a mocap session one of our animators captured; I specifically asked him to give me a hard session to work on. It was a capoeira dance he recorded with no stationary position: the skeleton kept walking around the space. I was able to retarget most of it, but whenever he would spin on his back doing a shoulder spin-out, the targeting would start to fly off in Blender. I haven’t used MotionBuilder, but I remember it has a handy option where you anchor the root of the skeleton to the origin of the scene no matter where the object is in mocap space.
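
As far as I know there’s no built-in switch for that in Blender, but you could approximate it with a few lines of Python. A minimal sketch, assuming the imported BVH armature object is named "MocapRig" and its root bone is "Hips" (both names are placeholders that depend on your capture file); it flattens the root’s horizontal location curves so the skeleton stays put:

```python
# Rough approximation of MotionBuilder's "anchor root to origin" option.
# "MocapRig" and "Hips" are placeholder names - use the ones from your file.
import bpy

arm = bpy.data.objects["MocapRig"]
action = arm.animation_data.action
root_path = 'pose.bones["Hips"].location'

for fcu in action.fcurves:
    # With a default import, indices 0/1 are the horizontal X/Y channels
    # in Blender's Z-up world; adjust if your import used other axes.
    if fcu.data_path == root_path and fcu.array_index in (0, 1):
        for kp in fcu.keyframe_points:
            kp.co.y = 0.0            # co.x is the frame, co.y is the value
            kp.handle_left.y = 0.0
            kp.handle_right.y = 0.0
```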

Anyway, I reached the conclusion that even if I fixed those sliding issues, I would still need a way to retarget to my custom rig, because unconnected bones cause problems.

I personally think the mocap pipeline is the #1 weak spot in Blender’s pipeline integration; there is currently no efficient way to work with MotionBuilder or BVH. I do think Blender needs to include this as a tool, something like the “Movie Clip Editor” but made specifically for motion capture.

I searched and found this one with Blender support. Check it out if you like.

http://www.motionshadow.com/help/tutorial/blender.html

It would be, don’t get me wrong :smiley:

Still kinda expensive :stuck_out_tongue:

Interesting, but that would be much more expensive than iPi Soft o.O And their add-on for Blender doesn’t have retargeting anyway.

Btw, isn’t retargeting simply copying the rotations of most bones, plus rot/loc for the root bone?

Yes, it’s just copying the rotations of the reference bones to the target bones, depending on the global-space coordinates of the BVH data.
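
To illustrate the simple case, here is a rough sketch of that idea in Blender Python: Copy Rotation constraints from each BVH bone to the matching rig bone, plus Copy Location on the root. The object names and the bone map are assumptions you’d replace with your own:

```python
# Naive constraint-based retargeting: copy each BVH bone's rotation onto the
# matching rig bone, plus location on the root. "MocapRig", "GameRig" and the
# bone_map entries are placeholders for your own setup.
import bpy

source = bpy.data.objects["MocapRig"]   # imported BVH armature
target = bpy.data.objects["GameRig"]    # your character rig

bone_map = {                 # BVH name -> rig name (assumed 1:1 hierarchy)
    "Hips": "root",
    "Spine": "spine",
    "LeftUpLeg": "thigh.L",
    "RightUpLeg": "thigh.R",
    # ...extend for the rest of the skeleton
}

for src_name, tgt_name in bone_map.items():
    con = target.pose.bones[tgt_name].constraints.new('COPY_ROTATION')
    con.target = source
    con.subtarget = src_name

# The root also needs location, so the character travels with the capture.
loc = target.pose.bones[bone_map["Hips"]].constraints.new('COPY_LOCATION')
loc.target = source
loc.subtarget = "Hips"
```

Once it looks right, you’d bake the result down to keyframes (the Bake Action operator, with visual keying) and remove the constraints. In practice this naive version only behaves when the two rigs share rest poses and bone rolls, and that is exactly the hard part a proper retargeting tool would handle.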

However, there are complicated issues: multiple skeletons, anchoring the root of a non-stationary skeleton, orientation mismatches between BVH files from different sources, stitching separate mocap takes into a single track, etc.
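
For the orientation mismatch specifically, a common quick fix is to rotate the imported armature object and apply the rotation before retargeting, since many BVH files are Y-up while Blender is Z-up. A sketch using the 2.7x API, again with a placeholder object name:

```python
# Quick fix when the BVH comes in lying down or facing the wrong way.
# "MocapRig" is a placeholder; the 90-degree X rotation assumes a Y-up BVH
# imported into Blender's Z-up world.
import bpy
import math

arm = bpy.data.objects["MocapRig"]
arm.select = True                              # 2.7x selection API
bpy.context.scene.objects.active = arm
arm.rotation_euler = (math.radians(90.0), 0.0, 0.0)
bpy.ops.object.transform_apply(rotation=True)  # bake the fix into the rest pose
```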

I haven’t used that mocap system, so I don’t know if it has retargeting.

The problem is that whole mocap systems are built around the biped skeleton system. MotionBuilder complements both Maya and Max, and they both have Biped. So when you create a pipeline, you just use a biped rig and MotionBuilder. If you need something special, you can retarget a custom rig to a biped skeleton inside Max/Maya.

Mocap is essential to game development, and slowly but steadily movies are adopting realtime mocap sessions for VFX and acting. Blender needs a quick and sturdy solution to this issue, or I’m afraid it’s going to lag behind…

Short movie made using this pipeline > http://vimeo.com/kevinmargo/constructteaser

Making of > http://youtu.be/nnaz8q6FLCk

@yii7: Well, that’s nice and all, but $6,000 for the OptiTrack cam + plugin, then MotionBuilder on top of that + all the markers/cameras… and the monster PC they used… That’s what I call out of budget for indies :slight_smile:

The way I think it’s done for the root is that you simply copy rot/loc from the BVH root to your rig’s root. What kind of issue would that cause? You can flip your root to match the BVH root. Plus, if you are making your own mocaps, you already know the structure of the BVH rig; it will always be the same from session to session.

iPi can only track up to 2 people at once, and that requires the ~$1500 version of iPi Soft. For crowds, you could just mocap each actor separately and then place them all into one scene. Let’s face it: if you are aiming for Hollywood-grade mocap with 2+ characters per scene at once, Blender and iPi Soft are not the most suitable tools for the job to begin with :slight_smile: