What is the encoding of an object-space normal map?

I’m trying to get my engine to use object space normal maps instead of tangent space normal maps (to improve speed on mobile devices). The problem I’m having is that I’m not sure how the normal information is encoded in the RGB channels of the object space normal map.

For tangent space this wasn’t an issue since the bitangents/tangents were in the mesh.

I know the basic transformation of RGB * 2 - 1, but I think my axes don’t map correctly. So perhaps I just need to know which axes are encoded.
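For reference, here is a minimal sketch of that decode, assuming the common unsigned 8-bit encoding; the function name and the axis mapping are placeholders, since the mapping is exactly what I'm asking about:

```python
def decode_object_space_normal(r, g, b):
    """Decode an 8-bit RGB texel into a unit normal in [-1, 1]^3.

    Assumes the common unsigned encoding n = c * 2 - 1. Which engine
    axis each channel maps to is the open question in this thread,
    so treat (x, y, z) here as map-space placeholders.
    """
    x = r / 255.0 * 2.0 - 1.0
    y = g / 255.0 * 2.0 - 1.0
    z = b / 255.0 * 2.0 - 1.0
    # Renormalize to absorb 8-bit quantization error.
    length = (x * x + y * y + z * z) ** 0.5
    return (x / length, y / length, z / length)

# A texel of (128, 128, 255) decodes to roughly "straight up" in map space:
print(decode_object_space_normal(128, 128, 255))
```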

*** Moderation ***
Moved from Game Engine Support and Discussion
Reason: not BGE specific

By trial and error I think I figured out the axis encoding. To map to my left-handed Y-up system I interpret (R,G,B) as (X,-Y,-Z).

You have to take the face normal and the face tangent vector into account to convert the tangent map to object or world coordinates.
Which one depends on whether your normal vectors are expressed in object space or in world space.

Basically the tangent map is just the information on how to rotate the normal of the point being rendered. The Red channel will rotate the normal in the tangent direction (0 is to the left, 1 is to the right), and the Green channel will rotate the normal in the direction perpendicular to the tangent vector (the bitangent).
Normally the tangent vector will correspond to the X axis of your UV map (the U axis).
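As a rough sketch of that idea: the usual way to take a tangent-space normal into object space is to combine it with the interpolated tangent, bitangent, and normal (the TBN basis). All names here are illustrative:

```python
def tangent_to_object(n_ts, tangent, bitangent, normal):
    """Transform a decoded tangent-space normal into object space
    using the per-vertex TBN basis.

    tangent   ~ the U direction of the UV map (red channel axis)
    bitangent ~ the V direction (green channel axis)
    normal    ~ the interpolated surface normal (blue channel axis)
    """
    tx, ty, tz = n_ts
    return tuple(
        tx * tangent[i] + ty * bitangent[i] + tz * normal[i]
        for i in range(3)
    )

# A "flat" texel (0, 0, 1) just returns the surface normal unchanged:
print(tangent_to_object((0.0, 0.0, 1.0),
                        (1.0, 0.0, 0.0),
                        (0.0, 1.0, 0.0),
                        (0.0, 0.0, 1.0)))
```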

I believe you are describing tangent-space normal maps. I had those working before. I didn’t have to worry about the encoding because it related directly to the tangents provided on the vertices. I’m using “object space” normals now, where the resulting map has the normals encoded relative to the object, not the faces. I’m just unsure of what constant axes it is using in this situation.

Is there a good tutorial for this in Cycles, showing some tests and vectors, to better understand this?

happy cl

I haven’t seen one… :frowning:
Just programming stuff, mostly in the hieroglyphic mathematical language that artists tend to run away from. :slight_smile:

I can try to make some illustrative shortcuts into the matter when I have more time… maybe on the weekend… but I can’t promise anything.

I think I saw some external tutorial, but not for Cycles,
so it would be nice to see something for Cycles, if possible with an example.

Like, I don’t remember: is it possible to bake either type of normal map,
or only use the two types of normal maps?

thanks

Cycles, internally, only uses normals in world coordinates (due to the way the BVH works). But it has the Bump node, which translates heightmaps to derivatives and applies them to the normals… and it has the Normal Map node, which transforms tangent-space normals to world vectors…

There could be more built-in options for these transformations… but at least the tools are available for us to make them. :slight_smile:

I was wrong, I don’t know the axes yet. I got a second texture and its axes appear to be different from the first – and the artist is using the same export options.

is it possible for you to post the texture, and the model? And can you also describe the specifications of your game engine?

These are the object space textures: https://imgur.com/ufiVCMv,3nfj8tk

This is one of the models, but I only have the FBX export: Target Combo - 3D model by mortoray - Sketchfab

My game engine is custom, so I am writing the lighting myself.

Is there not a specification of how object-space normals are generated/exported? In which axes? I suspect that in my second case the model has simply had an axis inverted, but the texture has not.

I can’t see the model (my computer is too old to work with sketchfab or any webgl website), but by the texture, it looks like it’s coded correctly (with unsigned values).

I can’t figure out exactly what’s happening, as I’m a little blind from this side, but some points must be taken into consideration:
-Are you converting the RGB to the [-1,1] scale? (this means (ColorByte/127.5)-1)
-Are you interpolating the normals of your faces?
-Are you applying the object transformation matrix to the normals?
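As a sketch of that last point: with object-space normals the per-fragment work reduces to decoding the texel and rotating it by the model matrix (only the 3x3 rotation part shown; all names are illustrative):

```python
def object_normal_to_world(rgb, model_rot):
    """Decode an 8-bit object-space normal texel and rotate it into
    world space with the 3x3 rotation part of the model matrix.

    model_rot is a row-major 3x3 matrix; note that for non-uniform
    scaling you would use the inverse-transpose instead.
    """
    n = [c / 255.0 * 2.0 - 1.0 for c in rgb]
    return tuple(
        sum(model_rot[row][col] * n[col] for col in range(3))
        for row in range(3)
    )

# An identity model matrix leaves the decoded normal unchanged:
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(object_normal_to_world((255, 128, 128), identity))
```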

I have one texture working, which would imply that the rest of the chain, including the application of the transformation matrix, is working. I don’t need to apply the face normals, since those are already encoded in the object-space normal map.

The problem I have is that the second texture appears to be exported differently. Instead of doing:

RGB => X, -Z, -Y

I had to do:

RGB => X, -Y, Z

This is why I’m asking about what Blender is actually doing. It’s possible that a model has different axes than the map does, or something else. All I need is a consistent system to work from.
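For what it's worth, the two decodings above can be written as tiny swizzle functions, reading "RGB => X, -Z, -Y" as R feeding X, G feeding -Z, and B feeding -Y (which of these a given export uses is exactly the open question):

```python
# Two candidate mappings from a decoded (r, g, b) in [-1, 1] to my
# left-handed Y-up engine axes. The first worked for texture one,
# the second for texture two.

def swizzle_texture_one(r, g, b):
    # RGB => X, -Z, -Y: so x = r, z = -g, y = -b
    return (r, -b, -g)

def swizzle_texture_two(r, g, b):
    # RGB => X, -Y, Z: so x = r, y = -g, z = b
    return (r, -g, b)
```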

I’ve never used the FBX exporter, but I see now that it has options for changing the forward axis and the up axis… maybe the artist messed with these while exporting. (I don’t know if that influences the texture outcome :confused:)