Blender PBR viewport Branch v0.2

Does this work well with Substance Painter? I would like to transfer textures from Painter to Blender and have them look as close as possible to what I see in Painter.

Yes, as long as your node groups are set up right. You can even use Painter’s environments.

NICE! Can’t wait until this stuff is added into the trunk!

Awesome! But it’s performing really slowly.

That’s what worries me:

the original creator says it’s too “hacky” to get into trunk…

and it kinda shows.
Unfortunately, this too will be another great feature that won’t make it into trunk, right next to adaptive sampling.

On the bright side, the Blender Foundation says that in 2.8 they are coming out with their own PBR viewport, so…
yeah, for now let’s use this.

How do you know this won’t become the base for the BF’s PBR engine in 2.8?

Mike Erwin has stated in the other thread that he has an interest in this project (and he’s the one doing much of the OpenGL/modernization work right now).

Even if they don’t use this base, they can see how it works and make a better version with no tricks.

I must point out that the only ugly thing I do is sampling the EnvMap inside the envmap texture node.

Once I, or someone else, figure out how to create cubemaps from the world node tree, I’ll sample that instead. Some recoding may be needed to make it acceptable for trunk, but the base is relatively clean and well integrated compared to the previous version 0.1.

On some assets it’s really slow, more than 30 seconds between each settings change.

I would assume it is not optimized, and I am curious whether Blender’s slow viewport code also adds to this.

I think a lot of the delay might be due to a lack of optimization in the shader compilation process, not the actual rendering (many shaders, along with complex ones, can mean a lot of code that has to be generated for the GPU).

To alleviate this, it might be worth looking into adopting more or less the same constant-folding optimizations that reduced the compile times of some Cycles materials from many seconds to a fraction of a second. There might be other ways as well, but that’s the main idea that comes to mind.
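As a rough illustration of what folding saves at the generated-GLSL level (a made-up node setup, not actual Blender-generated code): the mix factor and multiplier below are known at code-generation time, so the whole chain can collapse before it ever reaches the driver.

```glsl
#version 330 core
in vec2 uv;
out vec4 frag_color;

// Unfolded: the generated code still carries every node, even though the
// mix factor and the multiplier are constants known at generation time.
vec3 eval_unfolded(vec3 tex_color)
{
    vec3 mixed = mix(vec3(0.5), tex_color, 1.0); // factor 1.0 -> always tex_color
    return mixed * 1.0;                          // multiply by 1.0 -> no-op
}

// Folded: the same result after collapsing the constant operations, so far
// less source reaches the driver's GLSL compiler.
vec3 eval_folded(vec3 tex_color)
{
    return tex_color;
}

void main()
{
    vec3 c = vec3(uv, 0.5);
    frag_color = vec4(eval_folded(c), 1.0);
}
```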

I would guess an optimized über shader would be more helpful for viewport PBR.

Could using a more recent version of OpenGL improve viewport performance?

That’s right: the main slowdown comes from the shader being recompiled each time a parameter changes, and my branch adds loads of code to compile.

Unreal Engine suffers from the same problem. But changing a color, for instance, should not recompile the shader IMO. Modifying a color ramp actually does not recompile the shader; it uses a texture update to stay dynamic. So something like updating uniforms instead of recompiling everything would be nice. Only recompile on node linking.

In the meantime, you can use color ramps everywhere. :smiley:
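A minimal GLSL sketch of the three update paths being talked about here (names and values are made up, not the branch’s actual generated code):

```glsl
#version 330 core

// (a) Baked-in constant: the compiler can optimize around it, but changing
//     the value means regenerating and recompiling the whole shader.
const vec3 base_color_const = vec3(0.8, 0.2, 0.1);

// (b) Uniform: updated from the host (e.g. glUniform3f) with no recompile,
//     at the cost of some compiler optimizations.
uniform vec3 base_color_uniform;

// (c) Color ramp as a 1D texture: editing the ramp only re-uploads texels,
//     the shader source itself never changes.
uniform sampler1D ramp_tex;

in float fac;
out vec4 frag_color;

void main()
{
    vec3 ramp = texture(ramp_tex, clamp(fac, 0.0, 1.0)).rgb;
    frag_color = vec4(base_color_const * base_color_uniform * ramp, 1.0);
}
```

Host-side, (b) only needs a glUniform3f call and (c) only a glTexSubImage1D upload, while (a) means regenerating and recompiling the program.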

For me it’s slow when I use lamps, change the intensity, etc.

The same applies to lamps. The color of a lamp should not trigger a shader recompile, but other parameters are not plugged in the same way.
Each lamp in your SCENE (not only on visible layers) is plugged into the shader.

In Blender Internal the shaders come with all inputs linked dynamically. I need to do the same with Cycles.

Changing a lamp node tree will indeed recompile every shader in the scene.
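Purely as a hypothetical sketch (not the branch’s real generated code) of why scene-wide lamps hurt: data baked into the generated source forces a recompile whenever it changes, while data passed as uniforms can be refreshed every draw.

```glsl
#version 330 core
#define SCENE_LAMP_COUNT 3  // baked in at code-generation time: adding a lamp means a new shader

// Lamp colors as uniforms: the host can update these per draw, no recompile.
uniform vec3 lamp_color[SCENE_LAMP_COUNT];

// A parameter baked into the source: changing it requires regenerating the shader.
const float lamp_energy[SCENE_LAMP_COUNT] = float[](1.0, 0.5, 2.0);

in vec3 world_normal;
out vec4 frag_color;

void main()
{
    vec3 accum = vec3(0.0);
    for (int i = 0; i < SCENE_LAMP_COUNT; ++i)
        accum += lamp_color[i] * lamp_energy[i];
    frag_color = vec4(accum * max(normalize(world_normal).z, 0.0), 1.0);
}
```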

Edit:

So my recommendations for fast scene editing are:

  • Use as few shaders as possible.
  • Use color ramps to adjust values/colors instead of default inputs.
  • Use lamp node trees only if necessary.
  • Use only one EnvMap.

Is there a link for Linux?

Cycles shaders aren’t compiled (unless you are talking about OSL), they are interpreted; that’s why updates are fast. The constant folding is supposed to reduce the work the interpreter has to do every time the shader is evaluated. The GLSL compiler of your graphics driver already does constant folding. And herein lies one of the problems with the slow compilation: you are at the mercy of your GLSL compiler. From a developer’s perspective, it’s a black box. Vulkan is going to give developers a bit more control over compilation by letting them supply SPIR-V instead of plain-text code.

For good performance, you want most of your shader parameters to be constant, but that also means you need to recompile the shader whenever you change a parameter.

For fast updates, you want to use uniforms (variables which you can set every time you draw the mesh), which limits the number of possible optimizations. With more complex shaders, you may also run into the limit on the number of uniforms you can use (different GPUs have different limits).

As you can see, those are conflicting goals.

More complex shaders are going to take longer to compile, or may fail to compile at all. It’s not unusual for shader compilation to take several seconds. Even if code generation were optimized for faster updates, changing the structure of the graph would still need a recompile. Keep your shaders simple.
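To put a rough number on the uniform-limit point (the parameter name and array size below are made up; real limits vary per GPU and are queried on the host with glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, …)):

```glsl
#version 330 core

// Pushing every per-node parameter through uniforms keeps it editable without
// a recompile, but eats into the driver's uniform budget: GL 3.3 only
// guarantees 1024 fragment uniform components (256 vec4s), and a big node
// tree can get uncomfortably close to that.
uniform vec4 node_params[200];  // already 800 of those 1024 guaranteed components

out vec4 frag_color;

void main()
{
    vec3 accum = vec3(0.0);
    for (int i = 0; i < 200; ++i)
        accum += node_params[i].rgb * node_params[i].a;
    frag_color = vec4(accum / 200.0, 1.0);
}
```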

I was waiting for a proper GLSL render engine for Blender. Some effects these days can be done in near real time as OpenGL/DirectX shaders on the GPU; I never saw the need to use a raytracing engine to render everything when some things can be rendered maybe even 10x faster this way.

So we will be waiting for shaders to compile once rather than waiting for each frame to render. :smiley:

I like “once” compared to “every”…