highpoly to lowpoly baking for cycles renders, tests and discussions

so i managed my first representation of a highpoly dynatopo render as a lowpoly render in cycles, via normal maps… the idea of this thread is to start from there and test more possibilities to get the best possible results… so i begin with my first result:


right is the original highpoly dynatopo (1.5 million polys) that carries a vertex paint which is used to drive the shader.
left is the lowpoly (32k) with uv, normal map and a texture that contains the vertex paint of the highpoly dynatopo and thus can create the same shading effect.

i have shown this image in my sketchbook and michalis said and asked:

… we better discuss it a bit further.
So, you did bake the normal maps, where? In zbrush? If so what compatibility method did you use?
How did you unwrap it? What does the unwrapping look like?

for this i exported the dynatopo to zbrush, did the lowpoly with zremesher, and the unwrap in zbrush too. the unwrap is one big piece, with the seam at the backside. i guided the placement of the seam with blue color… i imported the lowpoly mesh into blender. in zbrush i made a morph target for the lowpoly, then subdivided it to have more polys than the original, and projected it. then i went down to the lowest level, switched the morph target back and calculated the normal map, with flip “g”. then i exported the normal map (i think with flip v) and put it into the shader in the normal sockets… for the vertex color, i baked the highpoly vertex color onto a texture of the lowpoly mesh using blender internal with a white material on the highpoly and the baking option “to texture” with “vertex paint” checked…
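for reference, roughly how that vertex color bake can be set up in python — only a sketch, the object and image names are placeholders and the property names may differ a bit between blender versions:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'BLENDER_RENDER'      # blender internal does the baking

high = bpy.data.objects["dynatopo_high"]    # placeholder names
low = bpy.data.objects["lowpoly_32k"]

# white material on the highpoly with "Vertex Color Paint" enabled,
# so the vertex paint drives the color during the bake
mat = bpy.data.materials.new("bake_white")
mat.diffuse_color = (1.0, 1.0, 1.0)
mat.use_vertex_color_paint = True
if high.data.materials:
    high.data.materials[0] = mat
else:
    high.data.materials.append(mat)

# the image the bake lands in; it must be assigned to the lowpoly uv faces
img = bpy.data.images.new("vertexpaint_bake", 2048, 2048)
for uv_face in low.data.uv_textures.active.data:
    uv_face.image = img

# bake "textures", selected (highpoly) to active (lowpoly)
scene.render.bake_type = 'TEXTURE'
scene.render.use_bake_selected_to_active = True
scene.render.bake_margin = 4

high.select = True
low.select = True
scene.objects.active = low
bpy.ops.object.bake_image()

img.filepath_raw = "//vertexpaint_bake.png"
img.file_format = 'PNG'
img.save()
```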

(michalis)

So, on the left, the low poly + normal maps.

  1. I think it is a little overdone, maybe a little less strength on the normal node?
  2. Cycles does not like low poly meshes! It is the terminator issue, you see. These artifacts between light and shadow. A well known issue on many pathtracers. So, adding a subsurf (for rendering needs only) always helps.
    Subsurf then, one or better two subd levels. Fine for previewing. But, 2 subd levels, an opportunity to use them for real geometry.
    Two options,
  3. you choose to export the third subd level from zbrush (it is not called zero level there). Of course, you choose to bake normal maps from this level (like the bake-from-multires technique in blender). If you render from the first subd level, a not very natural effect will happen.
  4. Bake a displacement map from the first subd level. If this is going to be applied on a 2x subsurf, that is the third subd level in zbrush. Go there and bake the appropriate normal map (from this level). Combine displacement (modifier only) plus normal maps (renderer).
  1. yes i increased the strength of the normal map, as at 1 it was too weak. i overdid it a tad, i agree.
  2. yes, the light/shadow artifacts are what is annoying, it would be nice to remove them. yes, i had tried with the subsurf, but then what is the point of doing all the work? i mean when i apply a level 2 subsurf on the 32k mesh, i have about half a million polygons, and it renders essentially no faster than the original dynatopo … displacement i have not tried yet, but same question: displacement needs subsurf, level 2 might not be enough, and then i would end up over 1.5 million polygons… (a rough python sketch of such a render-only subsurf follows below)
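for clarity, this is roughly what i mean by a render-only subsurf, in python — just a sketch, the object name is a placeholder and the property names may vary between blender versions:

```python
import bpy

obj = bpy.data.objects["lowpoly_32k"]        # placeholder name

# subsurf counted only at render time, so the viewport stays light
mod = obj.modifiers.new("render_subsurf", type='SUBSURF')
mod.levels = 0            # viewport stays at the base 32k mesh
mod.render_levels = 2     # about half a million polys, but only for the render
mod.use_subsurf_uv = False    # michalis: keep "subdivide UVs" off
```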

ok, let's start the discussion, and the tests!

so i tried it: i added a level 2 subsurf in blender, calculated a displacement map in zbrush, added a displace modifier in blender with this map, and rendered


we see, the quality has improved compared to the one with the normal map only, but it is still not the same as the original, the crispness is missing (also i did not know which value to use in the displace modifier, 1 was way too much, so i tried a few values until i got the “best”, but is it correct? how can i know that?)… and, the render time for this was 2.01.24 while with exactly the same settings and image size the dynatopo original renders at only 1.36.01… so speed is not gained, on the contrary…
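this is roughly how i set up the modifiers, as a python sketch — the file and object names are placeholders, and i really do not know the “correct” strength, i only dialed it by eye:

```python
import bpy

obj = bpy.data.objects["lowpoly_32k"]             # placeholder name

# the displacement map from zbrush, wrapped in an image texture
img = bpy.data.images.load("//displacement.exr")  # placeholder path
tex = bpy.data.textures.new("disp_tex", type='IMAGE')
tex.image = img

sub = obj.modifiers.new("subsurf", type='SUBSURF')
sub.levels = 2            # viewport too, otherwise i cannot judge the displacement
sub.render_levels = 2

disp = obj.modifiers.new("displace", type='DISPLACE')
disp.texture = tex
disp.texture_coords = 'UV'
disp.mid_level = 0.5      # for mid-gray maps; 32 bit zbrush maps may be centered on 0 instead
disp.strength = 0.1       # found by trial and error, 1 was way too much
```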

“1 was way too much, so i tried a few values until i got the ‘best’, but is it correct? how can i know that?”

I totally agree ! I have problems setting up the displacement modifier from zbrush too… really interested if someone has a solution :slight_smile:

cool tests! didn’t expect that displacement would be so time consuming :S

hey wildduck, so we are two that seek solutions… i will report when my searches get more insight…

Doris, you started a thread on this. Great, I’ll follow. But not today, sorry.
One thing for sure, you won’t have the same original sculpting using all these maps. But you’ll have something a little approximated. We’ll find a way, together, all the blender users, to get some miraculous results.

About displacement, what’s the point of having a similarly dense multires converted mesh in blender:
Well, it is an approximate way to import and reconstruct a multires mesh in blender. A multires mesh can be handled very well for animation. You don’t need to see all the details in the viewport, but they will be rendered in BI or cycles.

Any other, more impressive, more advanced way to import a multires zbrush mesh in blender? Maintaining multires?
Imagine this: a button under the multires modifier that says… “rebuild subdivisions”. It exists in zbrush already and it is really great. he he, I’ve been asking for this for a long time now, but who’s listening?
There are more, like “vector displacement”. Well… not in blender.

Your recently posted result is more than OK. Very well done.
You know, this button “subdivide UVs” under multires or subsurf? Test it. There is exactly the same button under zbrush multires system. (Suv). Off by default there. Why? there must be a reason.

BTW, low poly assets with the help of normal maps can work really well when you also support them with an AO map.
It is very well known that the AO of a render engine will not work with normal maps. Every decent app that is capable of exporting to a game engine knows it. So, they also export an AO map. Most of them mix it with the diffuse map (on demand).

no hurry michalis, good points already for studying the question further! i will try to see what the suv button does, and i will try an ao map, it is good to understand how each can help… oh yes, i too wish so much that blender had this magic button “rebuild subdivisions” … it would make a lot of things easier…

Suv for SubdivideUVs, obviously. Suv is off there, and that is OK. But when you subdivide in blender, try Subdivide UVs off too.
If we look more carefully at your last posted renders, there is a jitter on the lines. Maybe because of the SubdUVs?
Yes, you overdid it with the strength of the normal maps and/or displacement maps.
Maybe, adding an AO map will make it easier for you.
Where to bake a decent AO map, good question. Difficult in zbrush, better in blender but still not there.
Waiting for cycles to be able to bake maps. Waiting, and waiting.
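For what it’s worth, the blender internal AO bake, roughly in python — only a sketch, the selection and image setup are the same as in the vertex paint sketch earlier, and the property names may differ per version:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'BLENDER_RENDER'

# assumes the lowpoly is active, the dyntopo highpoly is also selected,
# and a target image is already assigned to the lowpoly uv faces
scene.world.light_settings.use_ambient_occlusion = True
scene.world.light_settings.samples = 16

scene.render.bake_type = 'AO'
scene.render.use_bake_selected_to_active = True   # capture the highpoly occlusion
scene.render.use_bake_normalize = True
scene.render.bake_margin = 4

bpy.ops.object.bake_image()
```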

Waiting for cycles to be able to bake maps. Waiting, and waiting

I don’t understand why this wouldn’t be of the highest priority in cycles. Next would be the ability for cycles to somehow only use GPU memory for what is actually in the camera’s view, allowing larger scenes to be rendered on the GPU, and a shadow catcher… we now return you to the Doris thread :slight_smile:

i tried one more, and it is improved, i believe. i like that i got some of his “presence” back… what do you think ?


how i did this one:

  1. using zbrush zremesher, i made a basemesh of 1800 polygons. made the uv in zbrush by guiding the seam to the back. flipped the v coordinate of the uv map in zbrush, and exported the mesh to blender. then flipped the v coordinate of the uv map in zbrush again. this means the mesh in blender and the mesh in zbrush now have the same orientation of the uv map.
  2. in zbrush, subdivide 2 times to get a 30k mesh. project this onto the original dynatopo sculpt. store a morph target. subdivide 3 more times, and project the mesh again. now go down to level 3 (that is the one where i stored the morph target), and switch the morph target.
  3. in blender, import the 1800 basemesh, add a subsurf at level 2, add a shrinkwrap and project onto the original dynatopo. apply everything. now the mesh is 30k and the same as in zbrush.
  4. in zbrush, create the normal map from level 3 (which is now the same mesh as in blender) with flip “g” activated and “adaptive” on. export the map.
    in the same way make a displacement map, 32 bit, saved as exr.
  5. in blender, bake the vertex colors of the dynatopo into a texture of the 30k mesh, and bake the ao of the dynatopo into a texture of the 30k mesh. now put all these maps into the shader; the displacement map is plugged into the displacement input of the material output node. i also rendered without this, and it is slightly better with it than without.

so, it’s now a 30k mesh with 4 maps to recreate the original: normal map, displacement map inside the shader, ao map, and the vertex colors from the dynatopo as a map
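the shader setup, as a rough python sketch — the image and material names are placeholders, and the exact node names can differ between blender versions:

```python
import bpy

mat = bpy.data.materials["lowpoly_mat"]          # placeholder name
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

out = nodes.new('ShaderNodeOutputMaterial')
diffuse = nodes.new('ShaderNodeBsdfDiffuse')

# baked vertex colors and baked ao, multiplied together for the diffuse color
col_tex = nodes.new('ShaderNodeTexImage')
col_tex.image = bpy.data.images["vertexpaint_bake"]
ao_tex = nodes.new('ShaderNodeTexImage')
ao_tex.image = bpy.data.images["ao_bake"]
mix = nodes.new('ShaderNodeMixRGB')
mix.blend_type = 'MULTIPLY'
mix.inputs['Fac'].default_value = 1.0

# normal map as non-color data into the normal socket
nrm_tex = nodes.new('ShaderNodeTexImage')
nrm_tex.image = bpy.data.images["normal_map"]
nrm_tex.color_space = 'NONE'
nrm_map = nodes.new('ShaderNodeNormalMap')
nrm_map.space = 'TANGENT'

links.new(col_tex.outputs['Color'], mix.inputs['Color1'])
links.new(ao_tex.outputs['Color'], mix.inputs['Color2'])
links.new(mix.outputs['Color'], diffuse.inputs['Color'])
links.new(nrm_tex.outputs['Color'], nrm_map.inputs['Color'])
links.new(nrm_map.outputs['Normal'], diffuse.inputs['Normal'])
links.new(diffuse.outputs['BSDF'], out.inputs['Surface'])
# (i also plugged the displacement map into the output's displacement socket,
#  but see michalis's note further down about that socket)
```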

the squiggly lines come from the retopo not being that good; it does not follow the geometry in these areas. i believe a very good retopo by hand would improve even that…

It looks good. Many artifacts coming from poor quality retopology though. You need a denser retopo than 1800 for this one.

the displacement map is plugged into the displacement input of the material output node

Why?
Doris, and all other friends. Please, do me a favor, forget this displacement socket output in cycles.
It is there, waiting for more development, it is experimental. Avoid it until the day you read, “Did you see the new awesomeness in cycles real displacement”?
Till now, it works as bumps only. (If you enable the experimental feature set in the cycles properties, it will displace, or bump + displace, or bump, and it will subdivide the mesh to do so, but it works with procedural textures only, not with UV maps.) After the pixar subdivisions we may see interesting implementations soon.

Why are you using it as bump on the last one?
A displacement map should be used for real displacement only (via the modifier).
You don’t need it because you used shrinkwrap. So, you have the real displaced mesh already.

Let’s see what I think after reading your workflow.

You make a retopo in zbrush.
You export this retopo into blender. (UV unwrapping may take place in zbrush or in blender; better in blender IMO.)
You add the multires modifier, subdivide as much as needed. Shrinkwrap it onto the original dyntopo.
Switch blender to the Blender Internal render engine. Enter the baking system.
Choose the Bake from multires method (sketched in python further below, after the list).
Now let’s think about it.

  1. You already have the full dyntopo sculpting, adopted as a multires mesh.
  2. If you asked for dirt / Vertexpaint on the original dyntopo, you can bake it on a multires UV map.
    So, you can have a low poly on viewport and send the hi res data (multires modifier) to the render output only.
    Not bad at all. (In this method you are using zbrush for retopology only.)
    Optionally:
    You do the same in zbrush: projections, subdivisions etc.
    You bake maps there.
    Export in blender.
    But, it seems that you didn’t quite like the displacement modifier method to convert it to multires (or subsurf).
    So, in zbrush, go to level three, bake the normal map, export this level three mesh directly into blender (+ the N_map or any other maps)
    Do not forget to disable “subd UVs” on multires or subsurf modifier panels.
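In python terms, roughly — a sketch only, the object name is a placeholder and the properties may be named a little differently in your build:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'BLENDER_RENDER'

low = bpy.data.objects["retopo_mesh"]      # placeholder name
# assumes the multires modifier is already subdivided and shrinkwrapped

img = bpy.data.images.new("multires_normal", 2048, 2048)
for uv_face in low.data.uv_textures.active.data:
    uv_face.image = img

scene.render.bake_type = 'NORMALS'
scene.render.use_bake_multires = True      # "Bake from Multires"
scene.render.bake_margin = 4

low.select = True
scene.objects.active = low
bpy.ops.object.bake_image()
```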

I forgot to ask:
What compatibility method are you using for baking/exporting normal maps, from zbrush to blender?

ok, now you are the first to clearly state what the displacement socket does now, and what it does not… ok, then you are right, it is not good to use it the way i did… i will remember that…

yes, i do not like the displacement method. the reasons:

  1. i do not gain a lighter mesh,
  2. the assignment of the texture to drive this modifier is really awkward
  3. finding the correct strength setting for this modifier is a horror: you cannot see the result when you use low subdivisions, and when you use high subdivisions it takes time, and i am not patient enough to “dial” a number in. that value should somehow be calculated when the displacement map is created and delivered to the user…

i already described my compatibility method for the normal maps i baked in zbrush, so in short again:

since zbrush exports meshes with the uv flipped vertically, i first flip the uv vertically inside zbrush and then export the mesh. after that i flip the uv vertically in zbrush again. this means the exported mesh and the mesh inside zbrush have the same uvs. now i calculate the normal map with “flip g” pressed and “adaptive” on. this normal map then works correctly in blender.
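and if a map comes in with the green channel the wrong way around, this little script flips it inside blender — just a sketch, the path is a placeholder and it is slow on big images:

```python
import bpy

img = bpy.data.images.load("//normal_map.png")   # placeholder path
px = list(img.pixels)                            # flat rgba list
for i in range(1, len(px), 4):                   # every green component
    px[i] = 1.0 - px[i]
img.pixels = px

img.filepath_raw = "//normal_map_flipped_g.png"
img.file_format = 'PNG'
img.save()
```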

my goal here was a little different. if i just want the dynatopo mesh as a multires mesh, so that i can have a light mesh in the viewport, i would just retopo the dynatopo, apply a multires with enough levels and shrinkwrap. this way i have visually the same object, and do not need any normal maps. i have done this often…

however, now my goal was to get the high poly mesh onto a low poly (here 30k) and resemble the dynatopo as well as possible…

and, yes, i do agree, the retopo on this one is not good enough, zbrush cannot do a good retopo with a small polycount. a hand retopo would be better… but, i think my experiment showed what i wanted, did it not?

maybe you misunderstood how i used the subsurf modifier and shrinkwrap? … i used them only to get the 1800 basemesh to a 30k mesh with 2 subdivisions (i could have exported that from zbrush) … in the viewport it looks like this, the render you see above is created by the normal, ao etc. maps…


so the render does not show a highpoly multires, but rather a lowpoly that resembles the dynatopo… i always thought this is the point of the game… or am i missing something? as said, when i want a light mesh in the viewport i just shrinkwrap a multires with enough levels onto the dynatopo so i end up at about the same polycount… so, i thought the normal maps were exactly for the purpose i tried to bring to a good result here??

Sorry I didn’t notice the invert g of rgb. You are correct.

if i just want the dynatopo mesh as a multires mesh, so that i can have a light mesh in the viewport, i would just retopo the dynatopo, apply a multires with enough levels and shrinkwrap. this way i have visually the same object, and do not need any normal maps. i have done this often…

OK, but not quite.
A good reason to convert it to a high density multires is to bake quality normal and AO maps (from multires). You can choose the level of subdivisions where you actually want to calculate these maps. Only a few blender users are zbrush users too; we never forget that.
On the other hand, having a multires at the second level and adding normal maps is a non-destructive way to set up a scene.
For instance, a museum scene. Close to camera, a full multires mesh, a few more on the background, multires at 1-2 levels, normal maps On. Such methods will become more complicated in the near future, you’ll see.

However, a really low poly + normal (and AO) maps may be your goal.
Cycles is not friendly to such solutions, Doris. It was never meant to be.
Why don’t you give Blender Internal a try? It is a renderer closer to the game-engine style.

There are free versions of some excellent game engines out there, why don’t you try one? Real time rendering, whole environments…
Cycles won’t make you happy on low poly assets.
Let’s not forget how they work in the movie business. Ultra high quality displacement maps are in use. We’re talking about 8k or 16k maps. Quality render engines don’t have difficulties rendering millions of polys. Cycles is just fine on this. Let’s forget the GPU for a moment; we’re talking about multi CPUs, render farms.

For your practical workflow, I would go for a 2-3 subdivision export from zbrush, plus normal maps. Fair enough, simple and clean.
I already mentioned it: the Ambient Occlusion of a renderer does not follow normal maps, right? This is an issue, and this is why normal maps are always combined with an AO map. It produces more impressive effects.

no, i am not in games, i just wanted to learn this, and yes, a museum scene is something i had in mind for my application of the method, like you described, with the low multires in the background and a normal map… my test above would be fine for such a display, in the background of a render, do you agree?

yes, i do agree on the ao map, my experiment has confirmed it is needed together with the normal map, just as you said.

and again yes, i see that cycles is happier with more polygons, i aimed too low here, more polygons and better topology would have resolved all issues…

ok, sorry, i often forget that not all blender users have zbrush. right, then we need the highres multires to bake the maps; blender can do this, i just chose zbrush for convenience … good, this clarifies the workflow. i am glad i have learned a thing or two :slight_smile: thank you for your patience, michalis.

If the use of AO maps is among your intentions, you had better follow the blender-only way.
If you manage to bake interesting AO maps in zbrush, please, tell me. I’m all ears. LOL

There is another option for excellent AO maps and that is xNormal. If you have the OpenRL runtime installed then you have two options for xNormal AO. Both give different results. Once you have these you can load them into a photo editor as layers along with Blender’s AO map and then composite them together. This method gives pretty excellent control and diversity over how the maps turn out.
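If you would rather skip the photo editor round-trip, the AO layers can also be multiplied together with a few lines of Python inside Blender — a sketch only, the file names are placeholders, the maps must have the same resolution, and it is slow on big images:

```python
import bpy

a = bpy.data.images.load("//ao_xnormal.png")   # placeholder paths
b = bpy.data.images.load("//ao_blender.png")   # must match resolution

combined = [x * y for x, y in zip(a.pixels, b.pixels)]   # simple multiply composite

result = bpy.data.images.new("ao_combined", a.size[0], a.size[1])
result.pixels = combined
result.filepath_raw = "//ao_combined.png"
result.file_format = 'PNG'
result.save()
```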

Oh, I forgot: so far as I know, this only applies to Windows, so you will likely have to set up Boot Camp.

Although VirtualBox can run xNormal fine, OpenRL does not seem to work on Lion using AMD in VB. Maybe there is hope for Mavericks since we won’t be stuck with ancient GPU drivers any more. Curse that bloody pact that AMD and Apple made regarding the locked drivers. nVidia didn’t play ball with this silly deal and their users can update drivers all they like. It’s too bad BAO2 isn’t still around, we could turn this into a conspiracy theory very easily.

EDIT: What I say below about tangent maps only holds true when baking a model from multi-res onto its own UV coords. Using the selected-to-active feature, this limitation can be overcome since the ray-tracing can stencil the changes in surface direction from a high-poly onto a tangent map. However, this leads to difficulties with getting all parts of both meshes lined up perfectly, and the normal maps generated this way will likely have aliasing and some artifacts. But if you really get good at it, maybe these two points will clear up.

Doris, I would suggest that you use 3D normal maps (world/object space), it is the only way to go. Tangent maps are super weak and are popular with game engines because they are convenient and can be generated using 2D photo editors. Also, there are many difficulties associated with trying to animate models with 3D normal maps, but you do not have to deal with this issue since you are not animating your models. Definitely use the world or object space map baker settings. You will capture every last bit of detail, especially if you start with a texture that has the 32bit check-box checked. The real big issue with tangent maps is that they don’t capture changes in surface direction. This defeats the purpose since you are trying to work with a low poly base mesh and still capture the detail of a high poly mesh.
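In Cycles this just means switching the space on the Normal Map node, something like this — a sketch, the material and image names are placeholders:

```python
import bpy

mat = bpy.data.materials["my_material"]                  # placeholder name
nodes, links = mat.node_tree.nodes, mat.node_tree.links

tex = nodes.new('ShaderNodeTexImage')
tex.image = bpy.data.images["object_space_normals"]      # placeholder name
tex.color_space = 'NONE'                                 # non-color data

nmap = nodes.new('ShaderNodeNormalMap')
nmap.space = 'OBJECT'                                    # or 'WORLD' / 'TANGENT'

links.new(tex.outputs['Color'], nmap.inputs['Color'])
links.new(nmap.outputs['Normal'],
          nodes["Diffuse BSDF"].inputs['Normal'])        # assumes the default diffuse node exists
```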

You don’t have to use the ‘Bake from multi-res’ option for this type of normal map. The algorithm for baking world/object space maps is different, and the baker captures the highest multi-res level automatically, so you do not need ‘Bake from multi-res’ checked. You do need ‘Bake from multi-res’ for other types of bakes, but not for this one.

Also, there is a quirk with baking models that are roughly the default size; this issue also affects xNormal. It’s not always a problem for my particular workflow, but for you and your retopo business you are probably using the “selected to active” feature. If this is what you are doing then you should scale your model up in size by a factor of 10 in object mode. Do this for both models. You should do this because small models break the raytracing math somewhat. Larger sizes eliminate this issue; just be aware beforehand that this will also affect procedural textures that are applied to your model. What I’ve been doing when I experiment with “selected to active” is I scale up both models, then when the world space map is baked, I hit undo a bunch of times until both models are back to normal size, and then I bake the procedurals. This works better than scaling the models back down because it can be a bit tricky to get the object origin and pivot options to behave properly; if you don’t get them right, then when you try to reverse the scaling by 0.1 both models may not be quite synced up as well. Half a dozen undos will save you from concerning yourself with this.
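Roughly what I mean, as a Python sketch — the object names are placeholders, it assumes a target image is already assigned to the low poly UV faces, and the property names may differ in your build:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'BLENDER_RENDER'

high = bpy.data.objects["highpoly"]    # placeholder names
low = bpy.data.objects["lowpoly"]

# scale both meshes up so the ray casting behaves
for ob in (high, low):
    ob.scale = [s * 10.0 for s in ob.scale]
scene.update()

scene.render.bake_type = 'NORMALS'
scene.render.bake_normal_space = 'WORLD'    # or 'OBJECT'; no 'Bake from Multires' needed
scene.render.use_bake_selected_to_active = True
scene.render.bake_margin = 4

high.select = True
low.select = True
scene.objects.active = low
bpy.ops.object.bake_image()

# afterwards scale back down (or just undo a few times, as described above)
for ob in (high, low):
    ob.scale = [s / 10.0 for s in ob.scale]
```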
Here is the best “proof” that I can think up for right now. I say “proof” because I’m not really a big fan of saying that something is impossible simply because I do not know how to do it, but so far as I know, nobody is able to capture the same level of detail with tangents that world/object captures with ease.

EDIT: The person below made me realize that I did not show a wireframe mesh so that you could see how simple the underlying mesh really is. Pay particular attention to the specular highlights across edges; tangent maps cannot do this for simple meshes, they require more complex geometry to be used, while world/object space maps do not. As I mention below, this is not rendered in Blender, so there is not much else I can post for this except the shader math. However, this is not a coders forum, so I won’t.

Please ignore the jibber-jabber about different tangent space methods. I was doing different algorithm tests but I should have left that part out. Pointless extra information.



Attachments


The world space mapping is repeated here for the close comparison.



As you can see, the tangent method has a very flat appearance and the edges capture no specular highlights at all. The diffuse and specular calculations are identical for all these scenes.

I have no idea what kind of mistake you made with your trees without seeing the low and high poly geometry, the maps and the scene, but object space and tangent space normal maps are pretty much equivalent when used in a synced pipeline (such as Blender’s). Please don’t go around spreading misinformation. The only difference is that you can’t deform objects that have object space normal maps (unless you also deform the normals in the shader, which few packages support).

@Piotr_Adamowicz: in the future, could you please refrain from showing up on forums to spread abuse? There is more than enough of this already; we do not need to add to this problem.

If you wanted to add something constructive to this conversation then you might have explained the math behind normal and tangent space shaders, or perhaps posted screen captures to show how the same results can be obtained in Blender by using either type of normal mapping. That would have given us something to think about and discuss.

I’ve added a wireframe render above to show what the underlying geometry looks like, since this was not rendered in Blender, there is not a scene to post other than what you already see.

Here is a short description to elaborate on the point that I was attempting to make earlier. I apologize if this sounds condescending to the professionals who read this, but my intention is to help clear this issue up for people who are not at all experienced with these matters.

Now the big issue that remains to be dealt with here is the problem with some changes that have recently been made to both Cycles and Blender’s internal renderer. From the gossip I’ve been reading on the patch tracker and IRC, there have recently been some changes made to the baker which I believe were reverted back to the previous state. Also, from what I understand, Cycles normal mapping support was changed to use a different axis configuration, and that was also reverted because of the confusion it caused. Just a heads up, since some builds may behave differently for the next little while. BI uses Z to represent up and down, and Y to represent near and far. For Cycles, the normal map axes are normal.xyz; Blender Internal uses normal.xzy.

Anyways, to deal with this we can go over how to align the Y and Z axes properly using Photoshop or the GIMP, to correct these problems when they rear their confusing faces. Photoshop makes it easy since it’s had an action recorder for at least a decade now: once you get the settings right you don’t have to do anything in the future except push the little play button to align everything, then save it and load the image back into Blender.


To convert normals from BI to Cycles format, the Y and Z axes have to be swapped. The image below shows the Photoshop settings.
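If you prefer to stay inside Blender, the same Y/Z swap can be done on the pixels directly — a sketch only, the file name is a placeholder and this is slow on large maps:

```python
import bpy

img = bpy.data.images.load("//normal_bi.png")    # placeholder path
px = list(img.pixels)                            # flat rgba list
for i in range(0, len(px), 4):
    px[i + 1], px[i + 2] = px[i + 2], px[i + 1]  # swap the Y (green) and Z (blue) channels
img.pixels = px

img.filepath_raw = "//normal_cycles.png"
img.file_format = 'PNG'
img.save()
```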


Here are the results for three cycles renders of the box with only 102 faces. I will post the settings next since there are only three images allowed per post.