Hey,
So recently I’ve been messing around with shader nodes, and I’ve been wondering about getting a depth texture that excludes certain scene objects. This can be done by hiding the objects, updating the render-to-texture, and showing them again. That works great for rendering the scene colours normally; however, I can’t do the same when rendering depth.
I tried render-to-texture with the zbuff = True attribute, but the output is clamped between 0 and 1, meaning I can’t use it for objects more than 1 blender unit away! I also tried the depth attribute, but it isn’t texture-friendly and shows up as a garbled texture.
My question: how do I store the array from the depth attribute into a texture, divided by 50 or something, so that the texture shows objects up to 50 blender units away? Or, preferably, is there a type of texture without a clamp?
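To sketch the “divide by 50” idea in plain Python: this assumes depth comes back as a flat list of eye-space distances in blender units, which the raw depth buffer does NOT give you directly (its values are non-linear, as discussed below).

```python
# Sketch only: scale distances so anything up to MAX_DIST blender
# units fits into the 0-1 range a texture can hold.
# Assumes 'depth' is a flat list of eye-space distances in blender
# units (an assumption -- raw depth-buffer values are non-linear).
MAX_DIST = 50.0

def to_texture_range(depth):
    return [min(d / MAX_DIST, 1.0) for d in depth]
```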
This would be useful for making water shaders. Or maybe look into martinsh’s water shader? It uses something to calculate depth, I’m not sure what, but this might be it!
As far as I’m aware the depth texture holds normalised values, so it will be constrained between 0 and 1. Furthermore, it’s non-linear, so without linearising it, it doesn’t have any direct relation to blender units. Here are some links on it:
When you linearise the depth value, it’s usually expressed relative to the camera’s near and far clipping distances, so 0.5 becomes half the view distance.
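The usual linearisation can be sketched like this (near/far are the camera clipping distances; the result is normalised so 1.0 corresponds to the far plane):

```python
def linearize_depth(z, near, far):
    """Convert a non-linear depth-buffer value z in [0, 1] to a
    linear depth in [0, 1], where 1.0 is the far clipping plane."""
    return (2.0 * near) / (far + near - z * (far - near))
```

This also shows just how non-linear the raw buffer is: with near = 0.1 and far = 100, a raw value of 0.5 linearises to only about 0.004 of the view distance.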
Generally speaking, there aren’t many BGE GLSL tutorials out there, you’re better off just searching for general GLSL articles as the code translates over to GLSL in blender fairly easily.
I can linearize depth “arrays” in GLSL; the problem is that it doesn’t come through as an array when passed as a texture via render-to-texture. The only alternative is the depth attribute, which apparently returns an array, but I’m not sure how to linearize that and make it texture-friendly within Python.
I’m not entirely sure what you’re up to (I’m really tired). I’ve never passed things back to Python from GLSL. Have you got a .blend I could have a tinker with? I’ve got Friday afternoon off so I can have a play then.
The KeyError exception is raised when the game property doesn’t exist. Catching it is a way of doing initialisation in the same function as the update code (though that isn’t the only reason to catch such exceptions).
It wasn’t quite initialised correctly, as I rushed to finish it before going to sleep. Even so, it’s very slow even when working properly, because it all happens in Python. If you used a C module, you could improve the speed further.
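Before reaching for a C module, the per-pixel loop could also be vectorised with NumPy, if it’s available in your BGE Python install — a sketch under that assumption, using the same linearisation as above:

```python
import numpy as np

def depth_to_rgb(depth_list, near, far):
    """Linearise a flat list of raw depth-buffer values and expand
    each one into three identical byte channels (R, G, B)."""
    z = np.asarray(depth_list, dtype=np.float64)
    # Linearise: 1.0 corresponds to the far clipping plane
    linear = (2.0 * near) / (far + near - z * (far - near))
    grey = (linear * 255.0).astype(np.uint8)
    # Repeat each grey value three times: [R, G, B, R, G, B, ...]
    return np.repeat(grey, 3).tolist()
```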
Here’s a C++ extension module (using Boost Python) to make something that’s almost performant.
Unfortunately, bgl.Buffer doesn’t support the memoryview protocol, so we have to convert it to and from a PyList, which slows things down the most.
// depth_to_linear_rgb.cpp : Defines the exported functions for the DLL application.
//
#include "stdafx.h"
#include <boost/python.hpp>

char const* greet()
{
    return "hello, world";
}

// Linearise a flat list of raw depth-buffer values and expand each
// one into three identical integer channels (R, G, B).
PyObject* convert(PyObject* depth_list, double near_plane, double far_plane)
{
    double far_add_near = far_plane + near_plane;
    double far_sub_near = far_plane - near_plane;
    double double_near_255 = near_plane * 2 * 255;

    int length = PyObject_Length(depth_list);
    PyObject* result_list = PyList_New(length * 3);

    int new_i = 0;
    for (int i = 0; i < length; ++i)
    {
        // Read the raw (non-linear) depth value from the input list
        PyObject* py_depth = PyList_GET_ITEM(depth_list, i);
        double depth = PyFloat_AS_DOUBLE(py_depth);

        // Linearise and scale to a 0-255 integer
        int linear_depth = (int)(double_near_255 / (far_add_near - depth * far_sub_near));
        PyObject* py_linear_depth = PyLong_FromLong(linear_depth);

        // PyList_SET_ITEM steals a reference, and the same object is
        // stored three times, so two extra references are needed
        Py_INCREF(py_linear_depth);
        Py_INCREF(py_linear_depth);
        PyList_SET_ITEM(result_list, new_i++, py_linear_depth);
        PyList_SET_ITEM(result_list, new_i++, py_linear_depth);
        PyList_SET_ITEM(result_list, new_i++, py_linear_depth);
    }
    return result_list;
}

BOOST_PYTHON_MODULE(depth_to_linear_rgb)
{
    using namespace boost::python;
    def("greet", greet);
    def("convert", convert);
}
Here’s a Windows 64-bit Python 3.4 dll (pyd) that you can try (if you have the same specs): Release.zip (259 KB)
Great work, though it’s too slow a process on the CPU. I’ll give compute shaders with pyOGL a try for passing the array; hopefully that will be faster than the current CPU implementation. Thanks for your help!
I know this thread is some months old, and I’m not sure I understand the problem right, but I’ve also been searching for a way to get the z-buffer of a camera. I’ve noticed that when I use a perspective camera, render-to-texture (zbuff = True) doesn’t give me values I can use to work with distances.
But when I use an orthographic camera it works fine, and the values represent the depth between the camera’s start and end clipping distances.
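If the orthographic values really are linear between the clipping distances, recovering a world-space distance is just an interpolation — a sketch, assuming z is in [0, 1] with 0 at the near clip and 1 at the far clip:

```python
def ortho_depth_to_distance(z, near, far):
    """Map a linear orthographic depth value z in [0, 1] to a
    distance in blender units between the clipping planes."""
    return near + z * (far - near)
```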
Maybe that’s helpful for some of you.
Yeah, it’s not possible to “cleanly” pass the depth array to the GPU for normalisation, and normalising on the CPU is really slow. There might be a chance of somehow reading the rendered texture back from GPU memory, though… except I’ve forgotten what I needed this for.