Theory behind creating normalmap/heightmap from image texture

Hi everybody,

I would like to know, if possible, what the logic is behind creating a normal map/height map from an image, the way programs like CrazyBump and InsaneBump do.

Thanks a lot.

Jymmy097

Do you mean how you would do it in Blender, or why you would use one? The reason for using a normal map is to add the illusion of fine detail without adding lots of extra geometry to your mesh. The way to make one in Blender involves making a UV-mapped mesh, then making a denser, more detailed copy, perhaps using sculpting or a displacement modifier, and then 'baking' the normals of the high-poly mesh onto the UV coordinates of the low-poly one, in the form of an image texture. If you search Google for 'blender normals baking', you will find plenty of documentation. You can also bake displacement maps this way. The reason for using height maps is usually to generate convincing terrain, which usually needs to be fairly high-poly, without the need to model or sculpt it.

Modron - I read his question more as trying to understand the algorithm or process behind taking an image and generating a normal map and other types of map from it. I assume he may be trying to understand how to do this himself outside Blender, or whether there might be a way of doing it in Blender itself (using OSL / node groups, maybe). Forgive me if I have got the wrong end of the stick.

Using Blender Internal, you can convert B&W height maps to tangent-space normal maps.

Thanks for your reply.
I already know how to bake normal maps inside Blender, but I would like to know the logic behind taking an image and generating a height map from it, as CrazyBump and similar software do. I've looked for the algorithm, but I have found nothing. I was curious about it because I would like to write a program to convert an image to a normal map by going through a height map, and I know it's not enough to convert the image to greyscale and then use that as the height map. I heard something about Sobel and advanced maths I cannot understand deeply, because I'm in the 4th year of Liceo (the school before university). I can apply myself to understand it, but I have not found any useful, exhaustive references. If you have some, please can you link them here?

Thanks!

Jymmy097

what’s the logic behind taking an image and generating the heightmap

Shadow information present in the image approximates the height information you are looking for. Using specially positioned light sources, you could take several images, combine them, and get closer to a proper height map.
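To make the multi-light idea concrete, here is a minimal NumPy sketch of classic Lambertian photometric stereo. This is my own illustration of the technique, not what any particular tool actually does: with three or more photos of the same surface under known light directions L, per-pixel intensities satisfy I = L · n, so the normals fall out of a least-squares solve. The light directions and pixel values below are made up.

```python
import numpy as np

# Hypothetical light directions (unit-ish vectors), one per photo
L = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])

# Toy "photos": intensities of four pixels in each of the three images,
# stacked into shape (num_lights, num_pixels)
I = np.array([[0.9, 0.8, 0.7, 0.9],
              [0.5, 0.9, 0.4, 0.6],
              [0.6, 0.4, 0.9, 0.5]])

# Solve I = L @ n for the per-pixel normals (least squares via pseudoinverse)
n = np.linalg.pinv(L) @ I
n /= np.linalg.norm(n, axis=0)  # normalize each normal to unit length
print(n.shape)                  # (3, 4): x, y, z for each of the 4 pixels
```

With real photographs you would flatten each image to a row of I; the recovered normals can then be integrated into a height map.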

that’s not enough to convert the image to greyscale and then use it as heightmap
, however, it still makes for some nice 'eye candy' nevertheless.

Logic? Make the audience believe it's real. Whatever the means.

Disclaimer: I'm not pretending to know High Matters or to be the Final Instance.

Thanks, but what if I want to convert just one image, something like CrazyBump does? Is there some algorithm I can use, or do I just have to make the image greyscale and clamp the heights?

Jymmy097

Looks like it, except what do you mean by "clamp the heights"? I think the whole available greyscale range will be translated into what the normal map can handle…

There is a possible workaround (CrazyBump does something more clever; I have no idea how).
For example, take a brick wall image. Using Photoshop, GIMP, or the Blender compositor, the idea is:

  1. Convert the image to B&W.
  2. Copy it onto a second layer.
  3. Invert the B&W copy.
  4. Set the mix mode to Color Dodge.
    You will probably see an all-white image.
  5. Gaussian-blur the inverted top layer.
    See what I mean?

It is an old trick for a pencil-sketch effect.
Apply the result as a bump map on a UV-unwrapped object, then bake a tangent normal map in BI, with the 'Selected to Active' and multires bake options unchecked.
This is the best way to get correct normal map coordinates (XYZ).
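For anyone who wants to try the layer trick above outside an image editor, here is a rough NumPy sketch of the same steps (grayscale, invert, blur, color dodge). A 3x3 box filter stands in for the Gaussian blur, and the brick image is faked with random data:

```python
import numpy as np

# Fake 8x8 RGB "brick" image with values in [0, 1]
rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))

gray = img.mean(axis=2)   # step 1: convert to B&W
inverted = 1.0 - gray     # steps 2-3: copy and invert

# Step 5: crude 3x3 box blur standing in for a Gaussian blur
padded = np.pad(inverted, 1, mode='edge')
blurred = sum(padded[dy:dy + 8, dx:dx + 8]
              for dy in range(3) for dx in range(3)) / 9.0

# Step 4: color dodge of base over blurred-inverted layer:
# base / (1 - blend); without the blur this would be nearly all white
dodge = np.clip(gray / np.maximum(1.0 - blurred, 1e-6), 0.0, 1.0)
print(dodge.shape)  # (8, 8) pseudo height/shading map
```

Swapping the box blur for a real Gaussian (e.g. scipy.ndimage.gaussian_filter) gets you closer to what the image editors do.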

http://www.gamedev.net/topic/475213-generate-normal-map-from-heightmap-algorithm/ may be what you are looking for.
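The approach discussed in threads like that one boils down to taking gradients of the height map (a Sobel filter is one common way to estimate them) and turning each gradient into a tangent-space normal. Here is a hedged NumPy sketch using central differences; the `strength` parameter is a made-up tunable, not part of any standard:

```python
import numpy as np

def height_to_normal(height, strength=2.0):
    """Convert a grayscale height map (floats) to an RGB tangent normal map."""
    # np.gradient on a 2D array returns (d/drow, d/dcol) = (dy, dx)
    dy, dx = np.gradient(height.astype(np.float64))
    # Per-pixel normal is normalize(-dx, -dy, 1/strength):
    # steeper slopes tilt the normal further from straight up
    nz = np.ones_like(dy) / strength
    n = np.stack([-dx, -dy, nz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Remap components from [-1, 1] to [0, 255] for an RGB image
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)

# Toy height map: a ramp rising left to right
ramp = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
nm = height_to_normal(ramp)
print(nm.shape)  # (8, 8, 3)
```

A Sobel kernel would just replace `np.gradient` with a slightly smoothed gradient estimate; the rest of the pipeline is the same.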