Mucking about with Faceshift motion capture

First off, I should note that this represents what is basically my first attempt at animation.
For those unaware - Faceshift is/was a depth-based motion capture suite that used sensors like the Carmine, Kinect, or the Intel RealSense F200 devkit. It was bought out by Apple about halfway through last year. Safe to say the only version available now is a torrent that stops letting you save or export after one too many uses. It does still let you export .FBX a couple of times per session - annoying but workable.
This was also my first attempt at using Maya - Blender has been my staple since I started working in 3D about 2.5 years ago as a hard-surface modeler.

This was my first attempt - the RealSense was mounted in an awkwardly high position, and honestly the profile of blendshapes it built for me wasn't that great. Making this took about 15 minutes in total, not including render time (about 3 hours).

My second attempt included a move to the center of my main monitor and some extra USB juice from a powered hub, which helped, as the RealSense is super picky about USB power - even an X99 Sabertooth motherboard apparently doesn't quite deliver a good enough supply. This time around the profile was much more consistent. I'm still getting a little eyelid weirdness, but I think it was much more responsive. I also edited some motion graph curves this time, though I'm still learning how that all fits together.

Next up: acquiring a SnorriCam rig along with a Brekel- and Kinect-based body capture system for the spare room.
I'd like to one day produce a Star Trek animated webseries that I've been writing for about a year now - this is all in pursuit of that goal.

Any comments or advice would be welcome.