Morph Driven Displacement?

Ok, so in my research and Blender learning I’ve come across the following videos and links. I’m fairly certain they’re all related and that, if I can break down the parts, they could lead to a way to have morphing displacement for models in Blender, particularly facial wrinkles for human models driven by shape keys/drivers.

http://forums.newtek.com/showthread.php?137719-Human-Progress

I know that Chris Jones’s clip uses Lightwave, but after reading his notes on the project, I’m pretty sure that something similar can be done in Blender, right? I mean, I don’t think you can map out tension in Blender (at least as far as I know), BUT isn’t it possible to set up a displacement modifier that is controlled by shape keys/drivers, like they did in the Willyam Bradberry video? If this is possible, then I could take three base mesh heads, sculpt two like Chris Jones did, use Blender to bake displacement maps for the two sets of wrinkles, and set them up so that each of the facial rig controls also comes with a set of displacement wrinkles. Then, say, I hook up the forehead wrinkles so that when I raise a brow bone to its highest point on the z-axis, the full displacement takes effect, and at the neutral point, the displacement is not visible at all.
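The brow-bone idea boils down to a simple remap, which a driver on the Displace modifier’s Strength could evaluate. Here’s a minimal sketch in plain Python (not actual bpy code; the 0.05 “full raise” height is a made-up example value, use your rig’s real range):

```python
# Map a brow bone's local Z location to a 0-1 displacement strength.
# 0.0 at the neutral pose, 1.0 at the highest brow position, clamped
# so over- or under-shooting the bone never breaks the effect.
def wrinkle_strength(bone_z, full_raise=0.05):
    return max(0.0, min(1.0, bone_z / full_raise))
```

In Blender the same expression (`max(0, min(1, var / 0.05))`) could sit directly in a driver’s scripted expression, with `var` reading the bone’s Z location.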

This is possible, right?

Wait, is “Stress Mapping” in Blender similar to the tension that’s mentioned in the head rig?

This is possible in Blender; I think you just need a nudge in the right direction. Think it through… and I might be wrong, as I just skimmed through the material…

Chris Jones sculpted in the wrinkles in Sculptris. Then in Blender, he baked the displacement maps, giving him a grayscale image that displaces the mesh, and then used Lightwave to produce the face test videos. In the Willyam Bradberry video, instead of sculpting and baking the displacement map, he just paints it in, giving him a grayscale image that displaces the mesh. At the end of the day, they both did the same thing. Actually, I don’t remember seeing it in Willyam’s video, but you should be able to paint displacement maps in Blender and see the results as you paint. It’s almost like sculpting, but you can only displace a mesh so much this way, which is perfect for wrinkles, as facial wrinkles in the skin are usually small.

I think Chris’s node setup is the same thing Willyam does when he sets up the driver; at the end of the day it does the same thing.

Try it out…

Randy

The real magic of Chris’s method is the tension map. Depending on how much the mesh gets pulled or pushed away from its original point positions, it will dynamically create a vertex weight map that reflects that stretching and compressing. So the more the face gets stretched, the more positive the value, and the more it gets compressed, the more negative the value. Those weight map values are then used to fade the different parts of the image in and out. So you can have two displacement maps, one for the face in its stretched state and one for its compressed, wrinkled state. Negative values in the tension map will reveal the compressed map, while positive values will reveal the stretched map. The nice thing is that it doesn’t require the pile of drivers and setup it would normally take to do this on an individual area and map basis.
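The tension-map idea above can be sketched in a few lines of plain Python (this is just the math, not any real Blender or Lightwave API; the function names are illustrative):

```python
# Per-edge tension: compare the current edge length to its rest length.
# Stretched edges give positive values, compressed edges negative ones.
def edge_tension(rest_len, current_len):
    return (current_len - rest_len) / rest_len

# Use the signed tension to fade between the two displacement maps:
# positive tension reveals the stretched map, negative the compressed map.
def blend_displacement(tension, stretched_map_val, compressed_map_val):
    t = max(-1.0, min(1.0, tension))  # clamp to a sane range
    if t >= 0:
        return t * stretched_map_val
    return -t * compressed_map_val
```

At the neutral pose the tension is zero, so neither map shows, which matches the behaviour described above.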

Unfortunately, though, Blender’s stress map is only available in Blender’s internal renderer; Cycles’ Texture Coordinate node is missing the stress option. Also, render-time displacement mapping in Cycles is yet another area that needs to be completed, so your only option is to use the Displace modifier.

I haven’t delved into this, so I have no idea how to go about setting it up with BI, but I’ll look into it. I would suggest starting small at first. Try an arm with wrinkles at the joint when it gets compressed, and then another displacement map for when it’s in its extended/stretched state.

I’m glad I was thinking correctly and it is apparently related.

I’ve been mulling this all over for a couple days now.

You’re exactly right in baking displacement maps from sculpts. Not too hard to do (normal and bump maps as well).

I also understand how the displacement modifier works in Willyam Bradberry’s video. And I was thinking that you could easily set up some facial control bones for shape keys, then set up different bone movements to fade the displacement in and out. For example:

Brow bone up (positive movement on the z-axis) = forehead scrunching
Brow bone down (negative movement on the z-axis) or towards the center of the face (movement on the x-axis) = furrowed brow
Mouth corners up (positive z-axis) = laugh lines around mouth
Mouth corners down (negative z-axis) = frown lines around mouth

Is it possible to set up two different displacement maps for the same shape key like that? So that movement in the positive fades in one map and movement in the negative fades in another? Or movement on the x-axis of a bone does one thing while movement on the z-axis does another. I feel like that could be set up in drivers. In the closeup video of Chris’s eyes, I’m fairly certain he uses a combination of the x and z axes to overlap two maps for quite a nice result.
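The two-axis idea could be expressed as two independent driver expressions on one bone. A minimal sketch in plain Python (the axis ranges are invented example values, and the map names are hypothetical):

```python
# One bone, two axes, two maps: Z drives the "scrunch" wrinkle map,
# X drives the "furrow" map. Each weight is clamped to 0-1, and the two
# can overlap, like the eye close-up described above.
def two_axis_weights(x, z, x_range=0.03, z_range=0.05):
    clamp01 = lambda v: max(0.0, min(1.0, v))
    furrow = clamp01(-x / x_range)   # moving toward the face centre (negative X)
    scrunch = clamp01(z / z_range)   # raising the bone (positive Z)
    return furrow, scrunch
```

Each returned weight would feed its own Displace modifier’s Strength via a driver, so one bone controls two maps at once.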

So, say that’s all possible, does that mean having to make a new map for each of the wrinkle sets? That would mean at least four separate displacement textures (2 eye wrinkles and 2 mouth wrinkles), unless of course they could be broken into different vertex groups or somehow mapped out on one single texture file.

Another thought: displacement maps can work alongside bump/normal maps as well, correct? Fairly certain I’ve seen an Andrew Price tutorial with a cliff where he uses both. So, I could still use a good looking skin texture bump or normal map on top of the displacement map, right?

Like Indy mentioned, unfortunately Blender’s not exactly up on stress mapping, which would be perfect for something like this. Then you could have just one displacement texture in place and it would only show through when the quads on the face were stretched out or squished in - lining up with when the wrinkles would actually appear in real life anyhow. Maybe it would be worth throwing it out in the coding and add-on development forums to see if anyone would be up for it. I’m not a coder and I’ve only been working with Blender for about two months, so I just don’t quite have the experience under my belt to go too in depth just yet. But the concepts come first, I suppose, and the technical skills will follow.

Like you said though, will just have to tinker about and see if I can get some of it to work.

… just my 2 cents’ worth, but… you could use several displacement textures and fade between them using drivers for different extreme poses… so, yes, you can use more than one displacement map, powered by the driver bones. Another thought would be to use animated alpha maps to limit the displacement to the locale of the area where you need it. You could even use dynamic paint to automatically “animate” these alpha maps: simply have a non-rendered “brush” object that comes into contact with the places you want the “scrunching” to occur.

just a few ideas.
It wouldn’t be ideal, and would mean a lot of work setting up the drivers, rig and maps, but it’s possible… I would guess.

and yes… displacement and bump-mapping will work fine together…

“you could use several displacement textures and fade between them using drivers for different extreme poses… so, yes you can use more than one displacement map, powered by the driver bones” I thought so. You mean that one bone can control more than one displacement map at the same time, right? It would be great to cut the number of displacement maps down to two if possible, to save on effort and the number of things that need to be attached.

Using an alpha map is an interesting idea, particularly as I don’t think there’s a way to use a stress map to help designate what will and won’t show up. I’ve never heard of non-rendered brush objects - seems like an interesting idea.

I’ve also seen “animated normal maps” somewhere, which are pretty interesting. It would be nice not to have to sculpt every single frame, but for things like animated normal maps and animated alpha maps, can you somehow set them up as a series of frames that can be scrubbed through back and forth depending on shape key/slider/driver movements? I know you can use movies or image sequences for textures in Cycles nodes, but is there a way to hook them up to movement?
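If driving an image sequence off a slider turns out to be possible, the mapping itself is simple. A speculative sketch in plain Python (not a real Blender API, just the expression a driver on the sequence’s frame offset would need):

```python
# Scrub an N-frame image sequence with a 0-1 driver value instead of
# the timeline: pick the nearest frame index for the current value.
def frame_for_value(value, num_frames):
    v = max(0.0, min(1.0, value))  # clamp the control value
    return min(num_frames - 1, int(round(v * (num_frames - 1))))
```

So a shape-key slider at 0 shows the first frame, at 1 the last, and in between it scrubs back and forth with the control.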

It is all sounding a bit complicated. Would be great to figure out something doable and a bit streamlined. But, once it’s set up, it would be worth it for a realistic model for posing or animation.

I suppose you could just set up shape keys for wrinkles and just animate them that way, but it would require a lot of extra geometry that my hardware can’t handle. Using a normal, bump or displacement map is just more practical for tiny details like that.

Eh, and maybe it’s not possible from a practical point of view. It’s always worth throwing it out there, though. This would be a pretty great tool for Blender animations.

It wouldn’t have to be very complicated, tbh… here’s (briefly) what I would do:

Create the regular normal or bump maps for the detail on the character’s face, and also create one or two “scrunchy” maps… (wrinkled eyebrows, mouth and nose could all be on one displacement map or normal map).

Now assign both of these displacement/normal maps to your material, so as to “hide” the scrunched map(s) behind the regular normal map…

Now, dynamic paint: make the whole face object into a canvas. Then you need a few “brush objects”… maybe 2 spheres for each eye, 4 spheres for the mouth, and a few around the nose area. I would attach these objects to a few extra bones on the face rig, just for easy positioning and use.

You can then form an alpha map using the canvas. We’ll make this alpha map affect the top layer (the regular facial details) of the normal/displacement maps, making that normal map “see-through” in the places you want the scrunching to occur. This should then show the scrunched normal map underneath. So, for example, when we want to scrunch up an eye, we move the bone holding the relevant brush “into” the face, basically

“painting alpha onto the top normal map, making the scrunched effect visible only in the places the brush object collides”
Of course, it would mean baking the canvas to see what the scrunched effect will look like, and you’d have to ensure the brushes are hidden or on another layer, and also make them non-rendered (so the character doesn’t have a collection of spheres hanging off their face).
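The layering step above is just an alpha blend between the two maps. A minimal sketch of the math in plain Python (per texel; the alpha would come from the dynamic paint canvas):

```python
# Blend the regular detail map over the "scrunched" map using the
# painted alpha: alpha 0 shows only the regular map, alpha 1 lets
# the scrunched map show through completely.
def layer_maps(regular_val, scrunched_val, alpha):
    a = max(0.0, min(1.0, alpha))
    return (1.0 - a) * regular_val + a * scrunched_val
```

In the material this would be the equivalent of a Mix node with the canvas output plugged into its factor.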

Anyway, there are a few different ways of doing this, because of course Blender’s material system is very complex and customizable…

So I thought it best not to go into crazy detail.

But don’t get me wrong: this would look nothing like as good as the videos you posted. Still, it’s a step in a direction (not necessarily the correct direction, but it’s a direction).

(You may choose to use regular Blender drivers… this is just personal preference; it would be easy enough to control the brush objects via drivers based on extreme bone poses.)

sorry for spelling / grammar / layout as i was posting this on my phone while “watching” xfactor -_-

Thanks for explaining this Indy_logic!!! I’ll be taking a closer look at stress maps…

Randy

Would really be nice to see this in a Blender game.