Real time cubemap reflection (environment map)

It’s nice that such things are actually possible in the BGE, but to tell the truth, most of these more advanced shaders are the type that you have to start the game to actually see.

To have a truly modern workflow with advanced shading techniques requires that the custom GLSL be visible in the viewport while working on the level so as to minimize the amount of trial and error required and to uphold the WYSIWYG paradigm.

Many other game engines, like Godot, let you see scripted shaders without having to run the game, but the more advanced we make shading in the BGE, the less of the final result we actually see beforehand, and we're back to the old days before visually-based game development (tweak shader code > run > tweak shader code > run). It's not a fast process, and it can take the fun out of game design.

Hey HG1 I love it, and have my normal map in the file and have it hooked in, but I am doing something wrong.

care to take a look at it?

I want to use this on my protagonist potentially,

(with a much lower res cube map)

also, how would you use a second thread to do this?

at 16 × 16 resolution, refreshing every 5 frames, it uses 5 ms of logic time,

I need it on another thread, or optimized somehow.

Attachments

RealtimeCubemapNodeVBPR.blend (706 KB)

There are several ways to get the normalmap working.

  1. Adding the normalmap to the vector input of each of the six textures.
  2. Generating a black-and-white shading and overlaying it with the cubemap texture (RealtimeCubemapNodeVBPR 2.blend).
  3. Mixing a shaded material with the cubemap texture (RealtimeCubemapNode V1.2.blend).

also, how would you use a second thread to do this?

There are some examples in this forum that show how Python threading works in Blender.
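The threading pattern those examples use can be sketched with the standard library alone. Note that `do_expensive_work` is a hypothetical stand-in for the per-face cubemap refresh (the real hookup to the videotexture module lives in the blend files), so the sketch runs anywhere:

```python
import threading
import queue

def do_expensive_work(face_index):
    # Placeholder for rendering one cubemap face in the BGE.
    return face_index * face_index

def worker(jobs, results):
    # Pull face indices off the job queue until a None sentinel arrives.
    while True:
        face = jobs.get()
        if face is None:
            break
        results.put((face, do_expensive_work(face)))
        jobs.task_done()

jobs = queue.Queue()
results = queue.Queue()
t = threading.Thread(target=worker, args=(jobs, results), daemon=True)
t.start()

for face in range(6):   # six cubemap faces
    jobs.put(face)
jobs.join()             # block until every queued face is processed
jobs.put(None)          # tell the worker to shut down
t.join()

out = dict(results.get() for _ in range(6))
print(out)              # one result per face
```

The logic brick would only enqueue work and poll `results`, so the 5 ms cost moves off the main logic tick; the catch, as discussed further down the thread, is keeping the texture and the game render in sync.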

Attachments

RealtimeCubemapNode V1.2.blend (740 KB)
RealtimeCubemapNodeVBPR 2.blend (707 KB)

I wonder how you could distort the image along the normal?

So the curves on the surface stretch the reflection?

PS. Great work!

I am not 100% sure what you mean.
But if you mean the distortion on planar objects that is a known problem if you use a cubemap. If you use flat shading the vertex normals are pointing 90° from the edges which is stretching the image to much. If you use Smooth shading the normals are pointing 45° which is distorting the image.
You can solve the main distortion by adding a bevels on the edges of the cube. You can use the bevel modifier and set the with to 0.0001.

The normalmap version is amazing, but in V1.2 I only get a shadeless, colored cube and sphere, no reflections :(

On my PC V1.2 is working, so it must depend on your PC. Try deleting one of the objects (the cube or the sphere), or move it to another layer.
Did you see the colors on the sphere and the UV grid on the cube?

No I actually want to distort the cubemap, using the normals,

like imagine a mirror with bumps and dents in it,

so the cubemap is warped by the curves baked into the normal map.

OK. I think I know what you mean.

Here is the changed file.

Attachments

RealtimeCubemapNode V1.3.blend (724 KB)

HG1 you are amazing!

it looks amazing!

New version V1.4.

Changes:
Reduced the seam in the cubemap. For cubemap resolutions lower than 128 pixels, the mapping node min/max values need to be changed in the “Cubemap Normals” group node.
Changed the render code. It now uses only one camera, so the same render script can be used for multiple cubemaps.
Cleaned up the nodes a little bit.

Attachments

RealtimeCubemapNode V1.4.blend (839 KB)

The sphere is blank, shadeless pink for me, but the cube looks good…

I am sorry I have forgotten to pack the textures.
I have changed the file. Please try it again.

Ok, so now I have a modern GPU with CUDA. Does anyone know how to thread something like this?

I have been asking around and I don’t know how…

Basically the shader is very fast. The problem is that we need to render out six textures with the videotexture module.
You can try running the videotexture refresh in its own thread.

You don’t “thread” on GPUs; they automatically divide the per-fragment work across their cores.
“CUDA” just exposes GPU cores for compute work, usually used for things that don’t fit the normal graphics pipeline, like ray tracing and massive physics calculations.

Edit: HG1, video textures are rendered on the CPU? :confused:

@BluePrintRandom. As Jackii described, CUDA can’t speed up this image processing. Simplified, the main difference is that when you write a shader you can only process an image; you can’t get a value back from the shader (except the frame buffer RGBA). With CUDA you can run more general calculations on the graphics card and get the result back, so it can be used for calculating physics, particles or armatures.

I haven’t looked deeply into the videotexture module, but as far as I know the image rendering itself is done on the GPU (a normal frame render). However, a lot of other things are done on the CPU (setting the viewport, fog, the projection, view and modelview matrices, calculating visible meshes, render buckets, rendering fonts). So I think threading can speed up the rendering a little, but I am not 100% sure because I have never tested it. Threading can cause rendering artefacts, because the texture rendering and the game rendering will be out of sync unless you sync them manually (semaphore).
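The manual sync mentioned above can be sketched with a plain `threading.Lock` (a binary semaphore): the updater never publishes a half-written frame, and the reader never sees a mix of two frames. The list standing in for the texture buffer is a hypothetical stand-in so the example runs outside the BGE:

```python
import threading

texture = [0] * 6                 # stand-in buffer: one entry per cubemap face
texture_lock = threading.Lock()

def refresh_cubemap(frame):
    # Update all six faces atomically, as one consistent frame.
    with texture_lock:
        for face in range(6):
            texture[face] = frame

def read_cubemap():
    # The "game render" takes the same lock, so it sees either the old
    # frame or the new one, never a mix (no tearing artefacts).
    with texture_lock:
        return list(texture)

updater = threading.Thread(
    target=lambda: [refresh_cubemap(f) for f in range(1, 100)])
updater.start()
snapshot = read_cubemap()         # read while the updater is running
updater.join()

# Every face in the snapshot comes from the same frame.
assert len(set(snapshot)) == 1
print(read_cubemap())
```

Without the lock the snapshot could contain faces from two different frames, which is exactly the out-of-sync artefact described above.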

Maybe in the far future. For the moment we are focusing on bug fixing.

HG1, could you try to lessen the seams a bit? At very low render resolutions the seams are quite strong!

As I remarked in post #32, you have to change the mapping node min/max values in the “Cubemap Normals” group node.