Quantum Human for Maya

I just saw this and it blew my mind… o_O

Assuming that money is not a problem, and the BF can hire the developers needed… Do you think it’s possible to develop such a system for Blender as it is today…?

Is there any other system similar to this? I haven’t seen anything like it before.

I’d like to know what you think about it :slight_smile: http://www.quantum-human.com/

Happy New Year to everybody :smiley:

I think that this is using a lot of Maya infrastructure to get the end results.
Blender is a much younger application, with far fewer underlying functions. You can get there by tackling one problem at a time.

For example, a better animation system with animation layers and retargeting, better at dealing with mocap data. This is being tackled by the depsgraph refactor.
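For what it’s worth, a crude form of layering can already be faked today with NLA strips. A minimal sketch in bpy, assuming an active object that already has a base action assigned (the track name is just illustrative):

```python
import bpy

obj = bpy.context.active_object
ad = obj.animation_data  # assumed to exist, with an action assigned

# Push the current action down onto its own NLA track (the "base layer").
base_track = ad.nla_tracks.new()
base_track.name = "BaseLayer"
base_track.strips.new(ad.action.name,
                      int(ad.action.frame_range[0]),
                      ad.action)
ad.action = None  # free the action slot for the next "layer"

# A new action keyed now can be pushed onto a second track and blended
# over the base via the strip's blend_type / influence settings.
```

It’s nowhere near a real layering workflow, which is exactly why the animation/depsgraph work matters.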

I don’t think this is just a string of Maya operations run in tandem; this is some REALLY advanced stuff. Just from the first demo video I can see mesh partitioning, BVH analysis, curvature analysis, muscle structure estimation, and something way more complex than plain heat weighting for the automatic weights. And lots of stuff I can’t even figure out without actually playing with it.

As always, code is code, and it COULD be done in Blender. Whether enough is currently exposed by the API to make it not a total nightmare is another matter, and I’m not sure how well Python would handle this, but I’d love to see someone give it a go!
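To give a feel for what mesh analysis through the API might look like, here’s a minimal sketch (assuming a 2.7x-era bpy/bmesh and an active mesh object) of a crude per-vertex curvature estimate - nothing like what QH must be doing, but proof the raw data is reachable from Python:

```python
import bpy
import bmesh

obj = bpy.context.active_object
bm = bmesh.new()
bm.from_mesh(obj.data)
bm.normal_update()

# Crude discrete curvature estimate: the mean angle between a vertex
# normal and the normals of its edge-connected neighbours.
curvature = {}
for v in bm.verts:
    if not v.link_edges:
        curvature[v.index] = 0.0
        continue
    total = sum(v.normal.angle(e.other_vert(v).normal, 0.0)
                for e in v.link_edges)
    curvature[v.index] = total / len(v.link_edges)

bm.free()
print("max curvature estimate:", max(curvature.values()))
```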

Back to picking this tool apart…

Assuming the right developer is willing to code it and money is not a problem - yes.

However, I seriously doubt that this Quantum Human plugin works nearly as well as they make it seem. Most likely it will only work for an extremely limited range of bodies, and even then you would probably have to adjust it to make it fit your character.
The biggest problem with automation plugins like this is that if even a single step miscalculates (say a knee ends up a little higher than it should), the entire process screws up and you will have to spend hours upon hours trying to fix it.

Maybe I am overly skeptical, but I don’t trust systems like this that claim to have the ultimate solution. Maybe I will be proven wrong (I really hope so).

@NinthJake, actually, watching the sneak peek video again I noticed an error in the placement of the knee controls…


Anyway it looks pretty cool, and even if it’s not perfect it seems like it can speed up some parts of the process.
The only way to know for sure how good it is, is to actually try it out…

There is, as m9105826 stated earlier, a lot more to that add-on/plugin suite than just one Maya operation after the other. Sure, you need an API that allows you to create the polygons, rig, and connect them… but the incredibly hard bit is knowing where to create them. Analysing an arbitrary mesh and laying out a new topology that flows as well as it did in the example? Not a Maya function (and far from easy in the first place). Automatically breaking down an arbitrary mesh to determine where the arms, legs, torso, etc. are? Again, not something Maya provides through its SDK, and quite difficult to code yourself. Motion-capture retargeting between various armature inputs and a given rig? Once again, not just a call into Maya’s API, and something we still haven’t got a robust version of (despite a GSoC project into exactly that).

Quantum Human is a large project and the result of a lot of coding, not just gluing various elements of Maya together. Quite honestly, despite the minor flaws I can see in the demonstration videos (some topology artifacts, knee placement, bad accessory morphs, etc), I think QH has done a bloody awesome job… especially if (as I suspect) one can correct the minor issues by stepping in manually at various points along the pipeline. Mark an area on the input mesh as needing topology flow this way rather than that (looks like they can be drawn on), move the knees up, tweak the accessory morph / placement, etc. Still would save a butt-load of time going from sculpt to animated output.

@BTolputt you’re probably right. I don’t know the technical side of it, and really don’t understand much about coding at all; maybe that’s why it’s so easy for me to just say it has errors in the videos. Again, as I said in the first post, it really blew my mind… There’s just a LOT of stuff going on in there.

And, as @NinthJake said, I also believe it looks a little bit too awesome; it’s hard to believe all that is possible with just one click of a button, and that’s why I also wonder if there’s any other system like this out there. Maybe there aren’t any because of the massive effort it represents… I don’t know. I really wish I knew something about coding, to understand how something like this can actually be done.

Anyway, it would be great to have something like that in Blender (one can always dream, right?)

Very impressive, but as others have voiced their concern, I too doubt how universally usable this is, as they always demonstrate such things in ideal conditions. Keep in mind that complex systems introduce overhead, hide functionality (black boxes in .dlls), and in most cases may be total overkill that prevents efficient character-specific rig functionality. You would get lost in complexity or be limited in what you can/can’t do. In many cases, adding/tweaking functionality may even break the rig altogether (e.g. CAT). That may not be the case here, but plugins in general do not work well with network rendering (due to plugin dependencies, extra license requirements, and many mismatches) - it can be very painful. Additionally, keep in mind that Maya is (mostly) single-threaded, and IF you were to PURELY define such a solution with Maya’s built-in nodes and expressions, you might end up with a clunky/heavy/uncomfortable rig to work with (they likely have some performance toggle, though). In a nutshell: while it may work extremely well for certain production studios (working with an army of 3D-scanned characters), I think for most people it’s simply the wrong thing to drool about.

The plugin looks to be quite “front heavy”, with the heavy lifting done in the processing step and the end result being a “standard” rig for Maya with “standard” animation. Essentially, what this tool is doing is automating the retopology, rigging, and mocap-retargeting steps of going from sculpt to animated figure. The most render-time-expensive part is the muscle animation, and that is a standard Maya feature.

Whilst I can’t be 100% sure without the latest versions of Maya & QH, I’m pretty confident that, once the sculpt is retopologised and rigged, the scene will not rely on any QH dll files to run.

It would be nice to have a similar system in Blender. Bits and pieces are already there; it all just needs to be polished and tied into an artist-friendly UI.

Instead of auto-retopo, a base mesh of various resolutions could be fitted (shrink-wrapped) over the sculpted (or otherwise constructed) mesh, as in the sketch below. Rigify (or a version of Rigify modified to fit game-engine requirements) could be used for auto-rigging.
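A rough sketch of the shrink-wrap half of that idea in bpy (2.7x API; the object names “BaseMesh” and “Sculpt” are just assumptions):

```python
import bpy

base = bpy.data.objects["BaseMesh"]   # clean-topology template mesh
sculpt = bpy.data.objects["Sculpt"]   # high-res sculpted target

# Wrap the base mesh onto the sculpt's surface.
mod = base.modifiers.new(name="FitToSculpt", type='SHRINKWRAP')
mod.target = sculpt
mod.wrap_method = 'PROJECT'           # project along the base's normals
mod.use_negative_direction = True     # search inward as well as outward

# Apply the modifier so the fitted shape becomes real geometry.
bpy.context.scene.objects.active = base
bpy.ops.object.modifier_apply(apply_as='DATA', modifier=mod.name)

# With the Rigify add-on enabled, a human meta-rig can then be added,
# fitted to the new base mesh, and used to generate the final rig:
# bpy.ops.object.armature_human_metarig_add()
```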

I mostly see such a system as a huge help in the game-development world. Maybe in the production of some YouTube / TV series. Maybe to make characters for crowds in full-scale movie / animation production.

Blender needs a standard built-in muscle system first. There’s one on the Blender Market, but it’s not user-friendly yet, as it doesn’t handle the most repetitive, time-consuming tasks in muscle-system creation. Hopefully it will get there eventually, though.

There are no muscles in video games, and I doubt muscles are used in crowds / backplates.

Many games have cutscenes that are sometimes in-game material, sometimes pre-rendered videos, and can feature higher-poly characters than normal gameplay; those may take advantage of a muscle system (depending, of course, on what the cutscene is supposed to be about).

I don’t think this could be helpful just for video games. Even if it’s not perfect, it’s a really huge time saver for the most repetitive and tedious tasks.

Let’s say you have to create a character like Gollum for a low-budget movie… Using a system like this, you would save a lot of time that could be spent polishing other aspects of the character: modeling and texturing, refining the auto-rig and its weights, better animation, or refining motion-capture data… I can imagine a lot of people using it as part of the process instead of as a final solution.

Seems to me like a really helpful system, no matter what it is used for.

I think the Blender devs would be better off creating generalized (though cutting-edge) rigging tools that work for all rig types, not just human models.

It would provide benefit to a lot more cases out there, since a lot of productions do not just put humans in their films.

I agree. Besides, Rigify can work for both bipeds and quadrupeds; it’s a great tool for rigging!

100% Agreed.

// Regarding the game-use comment: games need ultra-light rigs, especially indie and mobile games - the stuff that concerns the Blender community foremost. Every bone counts, and as such, complex rig systems are rarely used. Furthermore, in Maya’s case you usually make a second rig on top of the complex rig and export that. I can hardly imagine setting that up in Blender (all the hierarchy mess).

In 3ds Max you simply have a super-universal CAT rig system (for any character). You can get away with, for example, 40-50 objects acting as bones and do a one-press export that works well in-game. No complications - super universal, brilliant system. Only problem: it’s glitchy, as most things ADSK are. Taking this into account and relating it to Blender, I am not sure if it’s better to have a RIG GENERATOR (like Rigify) or an actual custom-coded (closed) system. Perhaps something nodal instead, when the time is ripe.

My point of view is that no matter how cool Quantum Human looks, I feel the Blender community, with all its creatures and characters, would rather benefit from a universal, easy-to-use tool than from a super-advanced, human-specific one.

Given the use of the human form in most animations people get hired for, I don’t actually agree that Blender would be better off with a less-perfected but more universal tool than a more-perfected, human-specific one (assuming equal amounts of work). If I had to choose between them in regards to BF development, despite my personal art being 50/50 between humans and non-human characters, I’d still go with the human-centric version. The effort of developing something similar that handles arbitrary forms (anthros with four arms, insects with thoraxes & antennae, hell, even four legs & a tail!) makes the job exponentially harder.

In other words - if your goal is to have the BF (or some other Blender developer) work on a similar tool that is more universal in its applicability - the result is either going to be less useful (by a significant margin) or it’s going to take forever… and we all know what happens to most long projects in Blender (hint: starts with “a” & ends in “bandoned” :wink: ).

Honestly, if you want something like this somewhere in the next few years then, like QH itself, it’s going to be a third party that develops it, and (if it’s anything close to the functionality/utility of QH) it’s going to be SOLD as an add-on. The likelihood of that happening whilst GPL is a requirement for all add-on code is remote. Best bet - break it down into the steps QH takes (auto-retopo, mesh analysis & auto-rig, auto-muscle system, etc.) and implement those one at a time (either officially or as add-ons).

In general, the Blender devs first need to work towards getting Blender’s rigging toolset close to what Maya has now.

For example, I could see it being very useful for animators to be able to generate a muscle deformation system based on vertex weights and/or bone envelopes. Not only that, but have the system also deform geometry where collisions are involved to prevent overlap.
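The basic building block for that already exists in vanilla Blender: a stretchy bone between two attachment points that bulges as it shortens. A minimal sketch, assuming an armature object named “Armature” with pose bones “muscle” (the belly) and “insertion” (the far attachment) already in place:

```python
import bpy

arm = bpy.data.objects["Armature"]
bpy.context.scene.objects.active = arm
bpy.ops.object.mode_set(mode='POSE')

belly = arm.pose.bones["muscle"]
con = belly.constraints.new(type='STRETCH_TO')
con.target = arm
con.subtarget = "insertion"
# The constraint's default volume-preservation setting makes the bone
# fatten as it shortens, so geometry weighted to it bulges like a
# muscle. The collision/overlap handling is the part Blender lacks.
```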

Even before we get to that, there’s only so much that can be done until the depsgraph refactor is completed and in master. You really can’t have something akin to Maya or XSI with the current system. Luckily, this first step is being worked on, with commits toward it coming in almost daily.

Also not to be overlooked: this system lets you use the custom control-rig feature in HIK, which was being demonstrated for a lot of the features. For retargeting mocap data and other uses, the HIK system is very sophisticated and robust - probably the best there is on the market for that (the implementation in MotionBuilder in particular).

It would be nice to see the mocap tools in Blender developed further. I think they only went so far; last time I investigated them, they were a great addition but just needed to be taken further. I do believe they currently work with Rigify, though. At least, that’s what Benjy Cook said in his tutorials, though I have not gotten that far into it yet.

http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Animation/Motion_Capture_Tools

You can google Benjy Cook and Mocap Tools for the vids.