Are there different types of normal maps?

I noticed today that Blender’s (Cycles’?) normal pass is darker than what you typically see.

Is this because of bit depth or what? Are there two standards that I’m seeing?
I read something in the documentation about there being 128 and 256 bit types, but I’m not sure if that is what I’m seeing here or if it’s something else.

To illustrate what I mean, this is the color range of the “normals” matcap (OpenGL render):

And here is the output from the Normal pass:

Besides being darker and more saturated, I also noticed that it shades down to the triangles inside each quad.
I suppose this is desirable as it results in more detail?
Mostly I’m wondering about the color though.

If someone can explain a little about these differences I would love to learn about it.
Thanks in advance.

It looks like the output from the normal pass is being compressed into the sRGB color space; it should be viewed as a linear image. In every engine that I know of, the base value for a tangent space normal map is 128, 128, 255 (RGB). The swizzle differs in some cases (meaning the direction/curvature that red and green represent), but the base value should always be the value you see in the first image.
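If it helps to see that in code, here’s a minimal sketch of the usual encoding (plain NumPy, nothing engine-specific; the helper name is my own): each component of the unit normal is remapped from [-1, 1] to [0, 255], so the flat normal (0, 0, 1) lands exactly on 128, 128, 255.

```python
import numpy as np

def encode_normal(n):
    """Map a unit normal vector to an 8-bit RGB triple."""
    n = np.asarray(n, dtype=np.float64)
    n = n / np.linalg.norm(n)          # ensure unit length
    rgb = (n * 0.5 + 0.5) * 255.0      # [-1, 1] -> [0, 255]
    return np.round(rgb).astype(np.uint8)

# The "flat" tangent-space normal points straight out of the surface:
print(encode_normal([0.0, 0.0, 1.0]))  # -> [128 128 255]
```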

The map, although expressed as “an image file” because that’s very convenient, is actually a rectangular grid of numbers. (Usually, the three or four numbers that make up the “n-tuple” for each point have their own meaning, e.g. axes.) The values are mapped, in some agreed-upon but non-standard way, onto the domain of values that can be stored in “an image file” of this-or-that filetype.

If you view the map “as an image,” the image viewer (of course) has no idea that it isn’t a gamma-corrected image, so it dutifully tries to display it as one. It treats the values as colors and tries to apply gamma. Because the data is linear, “gamma” actually has no relevance here: you’ll see differences in shading, and those differences will vary according to how “bright” a particular spot happens to be. You might see completely different coloration (caused by different interpretations of R vs. G vs. B). But all of these things are merely curious artifacts that have nothing to do with the data.
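To put rough numbers on that, here are the standard sRGB transfer functions applied to the “flat” value 0.5 (i.e. 128/255). A viewer that wrongly linearizes already-linear data pushes it down toward 0.21 (darker, which matches what the original poster is seeing); encoding the other way pushes it up toward 0.74. A small sketch, assuming plain NumPy:

```python
import numpy as np

def srgb_to_linear(c):
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

flat = 0.5                    # the 128/255 "flat" value
print(srgb_to_linear(flat))   # ~0.214: darker, if wrongly linearized
print(linear_to_srgb(flat))   # ~0.735: brighter, if wrongly encoded
```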

Renderers, even Cycles vs. BI, don’t promise to interpret a given “file of n-tuples” in exactly the same way. They have their own idiosyncrasies in what they produce, and they expect to consume what they produce. You might have to change the range and/or the scale of the individual values, or even transpose them, to make apples work with oranges, but it can be done.
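For example, the most common apples-to-oranges fix-ups are flipping the green channel between the Y+ (“OpenGL”) and Y- (“DirectX”) tangent-space conventions, or reordering channels outright. A sketch of both, assuming `img` is an HxWx3 float array in [0, 1] (function names are just illustrative):

```python
import numpy as np

def flip_green(img):
    """Convert between the Y+ and Y- tangent-space conventions."""
    out = img.copy()
    out[..., 1] = 1.0 - out[..., 1]   # invert the green channel only
    return out

def swizzle(img, order=(0, 1, 2)):
    """Reorder channels; e.g. order=(1, 0, 2) swaps red and green."""
    return img[..., list(order)]
```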

Thank you for that detailed explanation!
I am surprised that each render engine interprets normal maps in its own way (if I am understanding you correctly).
I would expect this to cause problems when exchanging models and textures between different software. But I guess most artists generate their normal maps in the same render engine they make their final renders in, so it’s a non-issue for them. Or, if it ever is an issue, they adjust the image files to get similar color values. Although, if the hue is the most important value and gamma variations only cause subtle changes, then the variations may not be so noticeable.

You might see completely different coloration (caused by different interpretations of R vs. G vs. B).

For any curious readers, note that Blender Internal’s normal pass uses a completely different RGB schema:
(attached image)

From what I’ve seen, the first image looks correct as far as normal map values go; the second seems to have had some step added or missed during conversion. What is not correct in both: they were baked from objects that weren’t smoothed first.
Besides tangent space normal maps, there are also object space normal maps. If you see black, green, and red dominating, that’s probably what you’re looking at. Object space normal maps are rarely used because they don’t allow the camera to travel around the object; they only work correctly from one point of view.

A simple experiment can prove that the first image has correct normal map RGB values: import it into Gimp and run the normal map filter’s convert/normalize on one layer, then subtract that from another, unconverted layer. The result will be a black image. I might be wrong here… don’t take it for granted.
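For anyone who’d rather run that check outside Gimp, here’s roughly the same experiment as a NumPy sketch (my own helper; it assumes the map has been loaded as an HxWx3 uint8 array): decode to vectors, renormalize, re-encode, and look at the difference.

```python
import numpy as np

def renormalize_diff(img):
    """Per-pixel difference between a normal map and its renormalized self."""
    n = img.astype(np.float64) / 255.0 * 2.0 - 1.0   # RGB -> [-1, 1] vectors
    length = np.linalg.norm(n, axis=-1, keepdims=True)
    n = n / np.maximum(length, 1e-8)                 # force unit length
    back = np.round((n * 0.5 + 0.5) * 255.0).astype(np.int16)
    return np.abs(back - img.astype(np.int16))       # ~0 everywhere if valid
```

A (near-)black result means the map already held unit-length normals, just like the layer-subtract trick above.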

Object space normal maps are rarely used because they don’t allow the camera to travel around the object; they only work correctly from one point of view.

Sorry eppo, not correct.
Object space normal maps look great, are easier to handle and to bake, and are maybe more accurate.
However, you cannot change the coordinates of the faces (the normals of the faces).
Meaning… you can’t deform or animate such objects.
A simple example: bake an object space normal map of some sculpting.
Rotate the camera around it and it looks great.
Rotate the object (in Object Mode) and it still looks just fine.
Now apply the rotation. Here we are. Boom.
Object space normal maps may be the best for static assets.

Yes, object space normal maps are for static use only. http://oldwiki.polycount.com/NormalMap
What I’m confused about, then, is that rotation is relative, so moving the point of view (the camera) should be the same as moving (rotating) faces with an object space normal map baked in a different position, which would be wrong for that new position (while the object could still look great :)). I haven’t actually experimented with them, so…

If it is based on world coordinates rather than camera/view coordinates, then moving the camera shouldn’t make a difference.
But how do we know when we are dealing with an object space normal map vs a world space one? That’s where I’m confused again.
And does this mean a character whose mesh deforms when animated cannot have a normal map texture? That doesn’t make sense.
I thought the normal map data was calculated on top of the face normal data of the mesh. More clarification wanted here… michalis? Anyone?

And does this mean a character whose mesh deforms when animated cannot have a normal map texture? That doesn’t make sense.
I thought the normal map data was calculated on top of the face normal data of the mesh. More clarification wanted here… michalis? Anyone?

For animated, deformed meshes, use tangent space normal maps.
Object space normal maps work perfectly for static meshes only. (I mean non-deformed meshes; moving the camera or rotating the whole mesh will work perfectly. As I mentioned before, please don’t apply the rotation - Ctrl+A and such, please don’t LOL)
The x-y-z coordinates of an object space n-map refer to the x-y-z of the object.
The x-y-z coordinates of a tangent space map work like this: x and y are the 2D space of the texture as seen in the UV editor, and z is the depth. So they are independent of any deformation of the unwrapped object’s faces.
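Here’s a little sketch of why that works, in case it helps (plain NumPy, names are mine): the stored tangent-space normal is pushed through a per-point TBN basis that is rebuilt from the deformed surface every frame, so the map itself never has to change.

```python
import numpy as np

def tangent_to_world(map_normal, tangent, bitangent, surface_normal):
    """Transform a tangent-space normal into world space via the TBN matrix."""
    tbn = np.column_stack([tangent, bitangent, surface_normal])
    return tbn @ np.asarray(map_normal, dtype=np.float64)

# The flat value (0, 0, 1) always comes back as the current surface normal,
# whatever orientation the deformed face happens to have:
t, b, n = np.eye(3)
print(tangent_to_world([0.0, 0.0, 1.0], t, b, n))  # -> [0. 0. 1.]
```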

Object space normal maps tend to look better and to be of higher quality. They’re also easier to bake. Here’s a scenario:
a high poly multires sculpt.
For a tangent space normal map you have to decide what the base level will be and bake from that level up. If, for example, you bake for the base mesh but decide to keep 2-3 more levels up as the base (applying the modifier), the result will look unnatural. (Similarly for the bake-to-active method, where you have to create/decide how dense the base mesh you bake into will be.)
When dealing with object space normal maps, all that’s needed is to go to the highest level and bake the n-map (obj space). This map will fit any subd level, because that is how object space works.
Let’s be practical: for an object space normal map (on a multires mesh) you don’t need the bake-to-active or from-multires methods. Just go to the highest level and bake.
Tangent maps are the industry standard though, for many good reasons. Baking such maps at good quality is a small art.
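For completeness, here’s roughly what the two bakes look like from Blender’s Python console. Hedged: this is the Cycles bake operator as I remember it from the 2.7x series, and it assumes Cycles is the active engine and an image has already been assigned for baking; check your version’s API before relying on it.

```python
import bpy

# Object space bake: go to the highest multires level first, as described above.
bpy.ops.object.bake(type='NORMAL', normal_space='OBJECT')

# Tangent space bake, the usual choice for deforming meshes
# (baking from a high poly mesh to the active low poly one):
bpy.ops.object.bake(type='NORMAL', normal_space='TANGENT',
                    use_selected_to_active=True, cage_extrusion=0.1)
```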