Re-Face! v1.2 - Facial Mocap Retargeting Tools Addon

Hello all! I’ve just released an addon I’ve been working on for a while now. It’s called Re-Face!, and it’s available on the Blender Market.

New Features in v1.2
-Facial rig generator for humanoid characters
-Shape key / wrinkle map driver creation tools (Blender Internal and Cycles materials supported)
-Bug fixes / UI clean up / error reporting

Features in v1.1
-Facial stabilization tools - this is the biggest update to the system
-F-curve smoothing operator to remove jitter
-Lots of little option additions for various tools
-Bug fixes / UI clean up

What It Is
Re-Face! is a set of tools for retargeting facial motion capture data to a character’s facial rig, plus some graph editor utilities for quickly modifying and cleaning up mocap data.

Decent facial performance capture results are actually very easy to obtain with Blender’s motion tracking tools, which have been around for a while now. But in all my own tests and research, I’d only ever seen proof-of-concept tests demonstrating that facial performance capture is indeed possible with Blender. Until now, nothing has existed that would actually make it easy to use facial mocap in a CG animation made with Blender.

That’s why I created Re-Face!

What It Is Not
Re-Face! will not generate new motion capture for you from your footage. It will also not generate a facial rig for your character (though I am actually working on a related new secret project now ;)).

What it will do, however, is make it incredibly easy to use facial motion capture data in your next CG project!

Great Cool Stuff !!! :yes:

Is there the capacity to bring in mocap from Brekel Pro Face or some other non-Blender mocap software and retarget it to a face rig in Blender?

Do you have any plans to make it possible to bring iPi/Brekel mocap into Blender and retarget it to a Blender rig?

EDIT:

Is there head tracking too?

I confess I haven’t used Brekel Pro Face, so I can’t say with absolute certainty, but if the mocap data is much like Blender’s native motion capture data (Empties with animation data on them), then I can’t imagine Re-Face! would have trouble with it. Do you use Brekel Pro Face? If you have some mocap data imported from it into Blender, I can certainly check the compatibility.

I’ll have to get some data exported from other applications, in order to work on automatic import of that data for retargeting. If anyone from the community is able and willing to donate that, I’ll be happy to implement that functionality. :slight_smile:

Currently, there is no head tracking. Do you mean in terms of accounting for head movement? I’ve added this to my to-do list. For setups that use a stationary camera rather than a head-mounted one, a little movement of the head can throw off the accuracy of the performance capture. I intend to account for this motion in the next release, provided markers are used to track the head movement.

Or were you referring to something else?

No, not yet. Maybe I never will, now that your add-on exists :slight_smile: I was looking into getting Brekel and a Kinect to create facial animation for my characters quickly and efficiently, but I hadn’t found a retargeting tool until today.

Brekel exports anims to FBX and BVH, and there is a trial version with several exported anims on his website.

I am almost 100% positive that if you get in touch with the developers at iPi Soft and Brekel, they’ll help out with assets.

Yeah, either to compensate for slight head movement (as I wouldn’t want to affix my head to the wall :stuck_out_tongue: ) or to actually capture head movement as part of the performance and track/retarget that animation to the head bone of the rig. All with a stationary cam (I assume there is no easy way to have a wearable face mocap camera without manufacturing a harness of some sort).

Cool, thanks for that. I’ll look into the exported animation data. I’ve seen Brekel Pro Face before, but I don’t have a Kinect, so I never gave the application much attention.

I’ve also been looking into helmets with camera mounts. I’ll let you know if I find anything cost-effective :slight_smile:

Very cool!

Just purchased your add-on! :slight_smile:

Do you plan on making a video tutorial explaining how to perform the mocap and how to retarget it? (I haven’t worked with camera tracking in Blender.)

Track Match Blend 2 should tell you everything you need to know about camera tracking in Blender and then some.

+1 for a video tutorial about retargeting though. I have an idea about how it might work (bone A + translation data * rotation data = bone B?) but would be interested in seeing a proper workflow instead of guessing.

Awesome, thanks! :slight_smile:

I plan to make an in-depth tutorial very soon.

In the meantime, to get you started, here are some basics and tips. Import a clip into Blender’s Movie Clip Editor, and CTRL+click to add new “tracks” on each facial marker. Select all the tracks and start tracking the footage (the play button in the “Track” panel of the “Track” tab; the shortcut is CTRL+T). Depending on the settings of each track (its size, sensitivity settings, etc.), some of them may have trouble tracking and get disabled at some point during the track. If a track gets disabled, simply select it, go to the frame where it became disabled, and move it to enable it and “help” it find the marker again. Then continue tracking from where it left off. If the footage is blurry or low-resolution, some tracks may need more attention than others (e.g., the corners of the mouth, eyelids).

Under the “Solve” tab, enable “Tripod” since the camera is stationary. Solve the camera motion.
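
If you ever want to batch this over several takes, those tracking and solving steps can also be driven from a script. Here’s a rough sketch - it assumes the clip is already open in a Movie Clip Editor with tracks placed on the markers, the clip name is just a placeholder, and the clip operators need a context override to run from the Text Editor:

```python
import bpy

clip = bpy.data.movieclips["face_take.mov"]       # placeholder clip name
clip.tracking.settings.use_tripod_solver = True    # the "Tripod" option in the Solve tab

# The clip operators expect a Movie Clip Editor context, so borrow one.
for area in bpy.context.screen.areas:
    if area.type == 'CLIP_EDITOR':
        region = next(r for r in area.regions if r.type == 'WINDOW')
        override = {'window': bpy.context.window,
                    'screen': bpy.context.screen,
                    'area': area,
                    'region': region}
        bpy.ops.clip.select_all(override, action='SELECT')
        # Track every selected marker forward through the footage (same as CTRL+T),
        # then solve the stationary-camera motion.
        bpy.ops.clip.track_markers(override, backwards=False, sequence=True)
        bpy.ops.clip.solve_camera(override)
        break
```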

This is where the devil is in the details. Split the view to add a 3D Viewport, and add a camera if there isn’t one. Press ALT+R to zero out its rotation, then rotate it 90 degrees on the X axis so it’s pointing right down the Y axis.

In the clip editor, click “Set As Background”, then “Link Empty to Track”. This adds the video clip as the background image for the camera you created, then creates empties for each marker you tracked.
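
For what it’s worth, that viewport setup can be scripted too. Here’s a minimal sketch of the same idea - the camera location is arbitrary, and the last two operator calls simply mirror the “Set As Background” and “Link Empty to Track” buttons (they need the same clip-editor context override as in the tracking sketch above):

```python
import bpy
from math import radians

# Add a camera and aim it straight down the +Y axis (front view).
bpy.ops.object.camera_add(location=(0.0, -3.0, 1.6))   # placeholder position
cam = bpy.context.object
cam.rotation_euler = (radians(90.0), 0.0, 0.0)          # ALT+R, then 90 degrees on X
bpy.context.scene.camera = cam

for area in bpy.context.screen.areas:
    if area.type == 'CLIP_EDITOR':
        region = next(r for r in area.regions if r.type == 'WINDOW')
        override = {'window': bpy.context.window,
                    'screen': bpy.context.screen,
                    'area': area,
                    'region': region}
        bpy.ops.clip.set_viewport_background(override)   # "Set As Background"
        bpy.ops.clip.track_to_empty(override)            # "Link Empty to Track"
        break
```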

I’m actually looking at some facial mocap BVH files exported from other applications, to start work on compensating for head motion.

I’m curious how the result looks from the side; from my own experience, everything works well from the front view, but when it comes to other camera angles the facial expressions are no longer convincing.

I’m very excited about this, but I need to see more example animations that demonstrate the quality.

Now here is an interesting point, and a possible expansion opportunity. If we could source from multiple clips taken at different angles, average the results, and translate that averaged result to the rig, then we could have some very good quality performance capture available.

That’s actually on the to-do list. :smiley: I have my own ideas about how to accomplish that, but I’m certainly open to suggestions.

@mookie3d - I’ve thought quite a bit about that. When you track location only, at such a short distance between the camera and the actor, what’s usually missing is depth information on the Y axis - the markers generally only move on the X and Z axes, with virtually no movement at all on the Y. What I’ve done is to simply create some very basic corrective shape keys and drive them with the relevant bones. That setup was really minimal - it took me about an hour to create the shape keys, and maybe 45 minutes to an hour to set up the drivers.

For instance, the lips move forward when the mouth corner bones move closer to each other on the X axis; the corners of the mouth pull backward and the upper cheek area (zygomaticus?) flexes / pushes forward when the mouth corner bones move in the opposite direction on the X axis. It’d be even more compelling if normal maps were being driven the same way (I haven’t painted any textures for this head; it’s a MakeHuman head with the default textures) - one for the brow furrowing, one for the corners of the eyes creasing, one for the bridge of the nose folding, etc.
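
For anyone curious what one of those drivers looks like under the hood, here’s a simplified sketch of the general idea. The object, shape key, and bone names are placeholders, and the numbers are made up; the real setup just has more of these:

```python
import bpy

# Placeholder names: a "Face" mesh with a corrective "lips_forward" shape key,
# and a "FaceRig" armature with left/right mouth-corner bones.
face = bpy.data.objects["Face"]
rig = bpy.data.objects["FaceRig"]
key = face.data.shape_keys.key_blocks["lips_forward"]

# Drive the shape key value from the distance between the two corner bones.
fcurve = key.driver_add("value")
driver = fcurve.driver
driver.type = 'SCRIPTED'

var = driver.variables.new()
var.name = "corner_dist"
var.type = 'LOC_DIFF'                      # distance between two targets
var.targets[0].id = rig
var.targets[0].bone_target = "mouth_corner.L"
var.targets[1].id = rig
var.targets[1].bone_target = "mouth_corner.R"

# As the corners move toward each other, push the lips forward.
# 0.06 is an assumed rest distance; 20.0 is just a gain to taste.
driver.expression = "(0.06 - corner_dist) * 20.0"
```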

Impressive new video!

I can see what mookie was saying about depth, though it’s only noticeable on the lips as they enunciate words that don’t have much emphasis. I never realized how important depth was for retargeting phonemes. Regardless, it’s still the best result I’ve seen for Blender! I imagine people will be pretty satisfied with it as it is (I know I would be :D).

The way I always average the result of multiple animations is with “Copy Transforms” constraints that sample the different animations at a fractional influence (the total of all the constraint influences together has to equal 1.00). I’m not sure if my method is even remotely relevant to the tracking system, but I thought I should throw it out there; I hope it stirs some ideas :slight_smile:
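
In script form, what I mean is something like this (purely a sketch with made-up names; and since constraints evaluate top to bottom, the influences may need tuning to get a true average):

```python
import bpy

# An "AvgRig" armature averages two source performances, "TakeA_Rig" and
# "TakeB_Rig", via stacked Copy Transforms constraints.
avg = bpy.data.objects["AvgRig"]
sources = [("TakeA_Rig", 0.5), ("TakeB_Rig", 0.5)]   # influences sum to 1.0

for pbone in avg.pose.bones:
    for src_name, influence in sources:
        con = pbone.constraints.new('COPY_TRANSFORMS')
        con.target = bpy.data.objects[src_name]
        con.subtarget = pbone.name        # assumes matching bone names
        con.influence = influence
```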

I have a question about solving jaw movement. Since we’re capturing facial motion with markers on the video, it’s a very ‘dermal’ interpretation - how does this motion transfer to the jaw bone? A real jaw pivots at a hinge to create the up/down motion, but with markers you’re only getting translation data (moving a chin bone?), which would pose a problem for trying to capture anatomical facial motion.

Is this something that can be resolved, or is this a non-issue?

This looks pretty sweet. Does the add-on come with a “generic” facial rig? If not, are there any tutorials available? All of my characters have shape keys already, so I think this would be an excellent add-on to drive those shape keys for a much better animation.

It would be nice for the add-on to work with Brekel’s output. That way, if this case becomes an issue, you could always fall back on Brekel’s mocap data.

I have not tested this with mocap yet, but my usual jaw rig has the long jaw bone tracked to a target on the surface of the chin. That way I never directly rotate it at the hinge; I just put the chin where I want it and the jaw follows behind. This should be compatible with that solution.
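
Roughly, the setup I mean looks like this in script form (names are placeholders, and I’m using a Damped Track constraint for the “tracked to” part):

```python
import bpy

# A long "jaw" bone aims at a "chin_target" bone that sits on the chin surface,
# so translating the chin target rotates the jaw around its hinge automatically.
rig = bpy.data.objects["FaceRig"]
jaw = rig.pose.bones["jaw"]

con = jaw.constraints.new('DAMPED_TRACK')
con.target = rig
con.subtarget = "chin_target"
con.track_axis = 'TRACK_Y'   # bones point along their local +Y axis
```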

Thank you! I wondered how other people were doing it, I’ll do my rig like that then :slight_smile:

If you’re using predefined shapekeys driven by performance capture, you risk losing the realism of the performance.
Here is an example of how this frequently turns out:

It’s not terrible, but it looks like the facial expressions were simply keyframed by hand. I know for a fact that they did performance capture (I watched the behind-the-scenes); like many, they set up shape keys for every expression and had the performance capture drive the timing of them.

To achieve a more convincing effect, you need the performance capture to drive the individual points on the face itself. Think of it as a “lossless translation.”
Example:

This add-on creates a result that’s closer to Avatar, as long as we get out of its way and let it retarget the positions from the video for us. In some cases corrections would need to be made by us, but beyond that the final result will look more believable.

OK, gotcha… so I will want to actually rig my face to the face rig and weight paint accordingly, then use this add-on to conveniently transfer the data from the tracked video to the face rig.