Procedural Skin Texture

Hi there,

I'm playing around with writing a script to generate high-quality (i.e. photorealistic) procedural skin textures, suitable for use as bump and diffuse maps for Cycles materials.

The core of the script is going to be generating a Voronoi texture from a set of procedurally defined seeds, controlled by RGB maps that can be painted onto the model; for example, wrinkle density can be controlled by pixel intensity (pale values mean spaced-out wrinkles, dark values mean densely clustered wrinkles). Crucially, I think it should also be possible to use something like a normal map to record the vectors of stress lines in the skin, so that around the mouth and nose the Voronoi cells stretch to create directed wrinkling - given such an input, I know how to do that, although I suspect making the map will be highly counterintuitive (a bit like painting a normal map by hand). In principle it should also be possible to procedurally generate freckles, moles, scar tissue and other skin blemishes.
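To make the density-map idea a bit more concrete, here's a rough sketch of the seed-sampling step (the file name is a placeholder, and I'm leaning on SciPy's cKDTree for the nearest-seed lookup rather than doing it in pure NumPy):

import numpy as np
from PIL import Image
from scipy.spatial import cKDTree

# "density.png" stands in for a grayscale map painted over the UV layout.
density = np.asarray(Image.open("density.png").convert("L"), dtype=np.float32) / 255.0
h, w = density.shape

# Rejection sampling: dark pixels (low values) accept more candidate seeds,
# giving densely clustered wrinkles; pale pixels give spaced-out ones.
rng = np.random.default_rng(0)
n = 20000
xs = rng.integers(0, w, n)
ys = rng.integers(0, h, n)
keep = rng.random(n) < (1.0 - density[ys, xs])
seeds = np.stack([xs[keep], ys[keep]], axis=1).astype(np.float32)

# Distance to (and index of) the nearest seed for every pixel = the raw
# Voronoi pattern; the cell index is what I'd colour per-cell later.
yy, xx = np.mgrid[0:h, 0:w]
pix = np.stack([xx.ravel(), yy.ravel()], axis=1).astype(np.float32)
dist, cell = cKDTree(seeds).query(pix)
dist = dist.reshape(h, w)

Image.fromarray((255 * dist / dist.max()).astype(np.uint8)).save("cells.png")

The stress-line stretching would then amount to warping the pix coordinates with the painted vector map before the lookup.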

While my intention is to use this particularly with Blender, it's probably going to be a standalone script/program (I'm using Pillow and NumPy extensively, but haven't found a need for bpy yet).

My question to you guys is mostly: is there free software out there already that does this kind of thing? To a lesser extent: do you think there's any point to this? I'm likely to pursue it as a coding exercise anyway, but it'd be nice if you think it's useful; mostly I'm concerned about reinventing the wheel if free, open-source software already does this.

If you have any questions about the approach, I can elaborate a bit. :slight_smile:

What you are saying sounds great.

Have a few questions…

What about the other maps, apart from diffuse and bump? Will they be generated too? SSS maps play a huge role in final render quality.

Regarding normal maps, you can generate them from the sculpts. If you are aiming for a photorealistic character, chances are that sculpting will be done. So you can get the wrinkles and folds from it and play with the intensity.

Will there be a way to plug in a painted map on top of the procedural stuff?

It sounds great.
You are using Pillow and NumPy to create the functionality; bpy, on the other hand, comes into play for integration.
bpy would be used to create the interface within Blender - panels and operators - and they stay "independent", since you just call the functionality from them.

CG-CNU - I’m not entirely sure what you mean; I’m still learning how to make a really good SSS setup, but my understanding is that if you’re using SSS you don’t need a diffuse texture (or at least, the parts of your SSS with a low radius stand in for your diffuse). I’ve basically said “diffuse texture” to stand for the RGB colour information for the surface of the shader, which would be exactly the same as an input for the color input of the SSS node. At any rate, the script will generate cells that are shaped like the skin wrinkles, and then procedurally work out an appropriate image to use for a texture, specularity, bump etc. - so if there is a need to create maps to control - say - the RGB radius, or the scale setting on the SSS node, that could in principle be worked out in the code.

Also, bear in mind that the idea here is to produce the fine skin texture on a flat image, not the large-scale skin structures that really make SSS work - there's no way, for example, for the script to know that one part of the image is soft tissue that needs lots of scatter, or skin stretched firmly over bone that needs less scatter. That broad level of detail is probably better painted onto the model.

I'm conceiving of this as an alternative to Z-brush style alpha mapping on a super-high resolution sculpt. This image might give you a rough idea of what I'm thinking of: https://drive.google.com/file/d/0B9N5q1GrW8qwYjJpTGNRdkE4em8/view?usp=sharing - the bottom part of the image is filtered to exaggerate the skin texture, showing how the shapes of the skin texture distort depending on the stress lines in the skin (the purple lines). I'm trying to emulate that in a procedural texture, without having to paint the super-fine pore and wrinkle detail onto an extremely high-poly mesh. (That, or just introduce high-frequency detail on a moderately detailed mesh.) I kind of want to minimise the need for photo-sourced alpha maps and skin textures, although I understand that that might end up being wishful thinking… :slight_smile:

Again, at the moment I'm thinking of building it as a standalone script that will take a set of image files to control the density and orientation of wrinkles and blemishes - these could have been painted onto the model in vertex paint mode, or created externally (Photoshop, GIMP) by painting over a UV map - and output another set of image files: high-resolution textures of the wrinkles and pores, which can then be plugged in wherever you need them - as color for a diffuse/SSS node, as a texture bump map, as a displacement map for a modifier, as a mask to control specularity, or whatever.
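As a skeleton, I'm imagining the pipeline looking something like this (all the file names are hypothetical, and a dummy pattern stands in for the Voronoi step sketched above):

import numpy as np
from PIL import Image

def load_map(path):
    # Painted control map -> floats in [0, 1]
    return np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0

def save_map(arr, path):
    # Float array in [0, 1] -> 8-bit grayscale texture
    Image.fromarray((np.clip(arr, 0.0, 1.0) * 255).astype(np.uint8)).save(path)

density = load_map("wrinkle_density.png")
pattern = np.abs(np.sin(np.linspace(0.0, 40.0, density.size))).reshape(density.shape)

save_map(1.0 - pattern * density, "skin_bump.png")    # wrinkles carve into the bump map
save_map(0.3 + 0.4 * pattern, "skin_spec_mask.png")   # crude specularity variation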

If you think this is pointless, I'd be glad to hear that too! Even if nobody has a use for this, I'm doing it as much as a coding exercise as anything else; it'd just be extra nice to think that someone might get some use out of it.

Mackraken - As I mentioned, the workflow I have in mind at the moment involves taking flat image files as an input, and returning flat image files as an output for you to plug into your materials as you see fit. If there is a robust way to integrate that into Blender (lifting maps straight from the model and automatically plugging them into a material), I don't think I'm at a level to do it yet, which is why I was thinking of doing it as a standalone app for the foreseeable future.

Yes there are many ways to integrate new image filters in blender. The most basic one is the creation of an operator that modifies an existing image. You can also create a new kind of node that filters the input image. See Blender text editor’s templates for examples on how to create custom nodes (Blender -> Text Editor -> Templates -> Python -> Custom Nodes).
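For instance, a bare-bones operator of that kind might look like this (the operator and image names are just placeholders):

import bpy

class IMAGE_OT_invert_example(bpy.types.Operator):
    # Minimal example of an operator that modifies an existing image in place.
    bl_idname = "image.invert_example"
    bl_label = "Invert Image (Example)"

    def execute(self, context):
        img = bpy.data.images['image1']
        # This inverts every channel, including alpha - a real filter would
        # probably skip every fourth value on an RGBA image.
        img.pixels = [1.0 - p for p in img.pixels]
        return {'FINISHED'}

bpy.utils.register_class(IMAGE_OT_invert_example)
# Run it with: bpy.ops.image.invert_example()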

You can access an image's pixel information with bpy like this:

import bpy

img = bpy.data.images['image1']
img.pixels = [0.0] * (img.size[0] * img.size[1] * img.channels)  # flat list of floats

pixels is a flat array of floats whose length is width * height * channels (RGB or RGBA). The pixels aren't grouped into their own arrays, so it may be good to create a wrapper class that reads per-pixel color information.
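That wrapper could be as simple as this sketch (reshaping the flat list into a NumPy array, with 'image1' again a placeholder):

import bpy
import numpy as np

class PixelView:
    # Expose an image's flat pixel list as a (height, width, channels)
    # NumPy array for convenient per-pixel access.
    def __init__(self, img):
        self.img = img
        w, h = img.size
        self.arr = np.array(img.pixels[:], dtype=np.float32).reshape(h, w, img.channels)

    def __getitem__(self, xy):
        x, y = xy
        return self.arr[y, x]  # one pixel's RGB(A) values

    def flush(self):
        # Writing the whole list back in one assignment is far faster
        # than setting img.pixels one index at a time.
        self.img.pixels = self.arr.ravel().tolist()

view = PixelView(bpy.data.images['image1'])
view.arr[..., :3] *= 0.5  # e.g. darken the RGB channels
view.flush()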

Ah! Nice. I didn’t know that. :slight_smile:

I'm reluctant to fully integrate with Blender, because it's not very good at giving live updates while a script runs, and the method I'm using for generating Voronoi seeds is going to be pretty intensive (I'm anticipating tens of minutes for a good-quality texture) - that would leave the user with a non-responsive program for twenty minutes. If I code a standalone program, I can have it give realtime progress updates. I'm vaguely aware that you can sort of do that in Blender, but that it's regarded as a bit hacky and frowned upon.
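By realtime progress updates I just mean something like this in the main loop (row counts invented):

import sys

rows = 2048
for y in range(rows):
    # ... compute one row of the texture here ...
    if y % 64 == 0:
        sys.stdout.write(f"\rrow {y}/{rows} ({100 * y // rows}%)")
        sys.stdout.flush()
print("\rdone.              ")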

On the flipside, I've also been kind of worried about how the algorithm will cope with UV seams - if the algorithm places a Voronoi seed right next to a UV seam, it's going to be a headache to get the texture into the right position on the other side of the seam. If I can access the mesh data within Blender, that may well help to get the texture to render correctly.

It also occurs to me that, while I've been sticking with 2D images because I'm a hobbyist scriptwriter and going to higher dimensions is likely to fry my brain completely, it might be more effective to create a 3D object from vertex data and generate the texture directly on the 3D surface (as an object in NumPy rather than the native Blender format) - although I don't know if I can get a grip on non-Euclidean geometry… :stuck_out_tongue:

Another thing that just occurred to me is, if I do go down the standalone route, that it might be possible to use the Unity engine for some of this, especially (once the heavy lifting of building a Voronoi diagram is complete) for producing a real-time preview of the textures. Given, as discussed above, that the idea is to build a base image from Voronoi cells and then adjust and distort that to create bump, spec, diffuse etc., realtime feedback of what those maps are going to look like (à la CrazyBump or similar) would be really useful, especially if it uses a render engine that's at least similar to what I'm used to in Blender. Although saying that, I have precisely zero experience with Unity (it's on my to-do list), so that might not be practical.

Sounds great. It's the only way, I think, to create realistic human skin textures - there's always some baked-in lighting in photo-sourced skin textures.

Are you still working on this?

Take a look at this paper; it might give you some inspiration: http://gl.ict.usc.edu/Research/microgeometry/Measurement_Based_Synthesis_of_Facial_Microgeometry.pdf

It could be possible to synthesize both the mesostructure and the microstructure with Voronoi plus different noise functions. See how the patterns change across different areas of the face, and also how stress and shear change the directionality of the geometry.
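As a toy illustration of that directionality point (all the numbers are invented): compressing the coordinate frame along a stress direction before the nearest-seed lookup stretches the Voronoi cells along it, and a noise layer can stand in for the microstructure.

import numpy as np
from scipy.spatial import cKDTree

h = w = 512
rng = np.random.default_rng(1)
seeds = rng.uniform(0, w, (800, 2)).astype(np.float32)

theta = np.deg2rad(30.0)                 # stress-line direction
c, s = np.cos(theta), np.sin(theta)
squash = 0.4                             # < 1 compresses along the stress lines

# Rotate pixel and seed coordinates into the stress frame, then compress;
# distances along the stress direction count less, so cells stretch that way.
yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
u = (xx * c + yy * s) * squash
v = -xx * s + yy * c
su = (seeds[:, 0] * c + seeds[:, 1] * s) * squash
sv = -seeds[:, 0] * s + seeds[:, 1] * c

pix = np.stack([u.ravel(), v.ravel()], axis=1)
meso, _ = cKDTree(np.stack([su, sv], axis=1)).query(pix)
meso = meso.reshape(h, w)

micro = rng.random((h, w)) * 0.1         # crude stand-in for a real noise function
pattern = meso / meso.max() + micro      # mesostructure + microstructure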