It’s nice that such things are actually possible in the BGE, but to tell the truth, most of these more advanced shaders are the type that you have to start the game to actually see.
To have a truly modern workflow with advanced shading techniques requires that the custom GLSL be visible in the viewport while working on the level so as to minimize the amount of trial and error required and to uphold the WYSIWYG paradigm.
Many other game engines, like Godot, let you see scripted shaders without having to run the game. In the BGE, though, the more advanced we make the shading, the less of the final result we see beforehand, and we're back to the old days before visually-based game development: tweak shader code, run, tweak shader code, run. It's not a fast process, and it can take the fun out of game design.
I am not 100% sure what you mean.
But if you mean the distortion on planar objects, that is a known problem when you use a cubemap. With flat shading, the vertex normals point 90° from the edges, which stretches the image too much. With smooth shading, the normals point at 45°, which distorts the image.
You can solve the main distortion by adding a bevel to the edges of the cube. You can use the Bevel modifier and set the width to 0.0001.
On my PC, V1.2 is working, so it must depend on your PC. Try deleting one of the objects (cube or sphere), or move it to another layer.
Did you see the colors on the sphere and the UV grid on the cube?
Changes:
- Reduced the seam in the cubemap. For cubemap resolutions lower than 128 pixels, the Mapping node min/max values need to be changed in the “Cubemap Normals” group node.
- Changed the render code. It now uses only one camera, so the same render script can be used for multiple cubemaps.
- Cleaned up the nodes a little bit.
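The “one camera for all cubemaps” change above boils down to reorienting a single camera once per face. A minimal sketch of that idea in plain Python (the face names and the (forward, up) pairs are illustrative; axis conventions differ between engines, and `render_face` stands in for whatever actually triggers the videotexture render):

```python
# The six cubemap faces, each with an illustrative (forward, up) direction
# for the shared camera. These axis conventions are an assumption, not the
# exact ones the script uses.
CUBE_FACES = {
    "+X": ((1, 0, 0), (0, -1, 0)),
    "-X": ((-1, 0, 0), (0, -1, 0)),
    "+Y": ((0, 1, 0), (0, 0, 1)),
    "-Y": ((0, -1, 0), (0, 0, -1)),
    "+Z": ((0, 0, 1), (0, -1, 0)),
    "-Z": ((0, 0, -1), (0, -1, 0)),
}

def render_cubemap(render_face, faces=CUBE_FACES):
    """Render all six faces with one camera: reorient, render, repeat.

    render_face(name, forward, up) is a placeholder for the real per-face
    render call in the script.
    """
    for name, (forward, up) in faces.items():
        render_face(name, forward, up)
```

Because the per-face work is just “point the camera, render”, the same loop serves any number of cubemaps; only the camera position changes between them.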
Basically the shader is very fast. The problem is that we need to render out six textures with the videotexture module.
You can try to run the videotexture rendering in its own thread.
You don’t “thread” on GPUs; they automatically divide the per-fragment work across separate cores.
“CUDA” is just GPU cores used for compute shaders, usually for non-GPU-friendly work like ray tracing and massive physics calculations.
@BluePrintRandom: As Jackii described, you can’t speed up the image processing with CUDA. Simplified, the main difference is that when you write a shader, you can only process an image; you can’t get a value back from the shader (except the framebuffer RGBA). With CUDA you can do more general calculations on the graphics card and get the results back, so it is possible to use it for calculating physics, particles, or armatures.
I haven’t looked deeply into the videotexture module, but as far as I know the image rendering itself is done on the GPU (normal frame render). A lot of other things are done on the CPU, though: setting the viewport, fog, the projection/view/modelview matrices, calculating visible meshes, render buckets, rendering fonts. So I think threading could speed up the rendering a little bit, but I am not 100% sure because I have never tested it. Threading can also cause rendering artifacts, because the texture rendering and the game rendering will be out of sync unless you sync them manually (e.g. with a semaphore).
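To make the sync point concrete, here is a minimal sketch of the worker-thread idea with a lock guarding the shared texture. `refresh_cubemap` and `draw_frame` are placeholders for the real videotexture refresh and the main game render, not actual BGE calls:

```python
import threading

# Lock shared between the texture thread and the main render loop, so the
# texture refresh and the frame draw never run at the same time.
texture_lock = threading.Lock()

def texture_worker(refresh_cubemap, stop_event):
    """Keep refreshing the cubemap texture until told to stop."""
    while not stop_event.is_set():
        with texture_lock:          # serialize with the main render
            refresh_cubemap()

def main_render_step(draw_frame):
    """One step of the main loop: don't draw while the texture updates."""
    with texture_lock:
        draw_frame()
```

This avoids the out-of-sync artifacts at the cost of the two sides waiting on each other; whether that still gains anything over single-threaded rendering would have to be measured.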