Render to texture depth

Hey,
So recently I’ve been messing around with shader nodes, and I’ve been wondering about getting a depth texture that excludes certain scene objects. For the colour render this can be done by hiding the objects, updating the render-to-texture and showing them again, and that works great, but I can’t do the same when rendering depth.

I tried the render-to-texture zbuff = True attribute, but the output is clamped between 0 and 1, meaning I can’t use it for objects more than 1 Blender unit away! I tried the depth attribute, but it isn’t texture friendly and shows up as a messed-up texture.

My question: how can I store the array from the depth attribute in a texture, divided by 50 or something, so that the texture shows objects up to 50 Blender units away? Or, preferably, is there a type of texture without a clamp?
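
Roughly what I’m doing at the moment, as a minimal sketch (the camera name, material slot and the “exclude” property are just placeholders):

from bge import logic, texture

def refresh_depth_texture(cont):
    own = cont.owner
    scene = logic.getCurrentScene()

    # One-off setup: render the scene from a second camera into this object's texture.
    if "render" not in own:
        cam = scene.objects["DepthCam"]            # placeholder camera name
        own["render"] = texture.Texture(own, 0)    # material slot 0 on this object
        source = texture.ImageRender(scene, cam)
        source.zbuff = True                        # grey-scale depth, but clamped to [0, 1]
        own["render"].source = source

    # Hide the objects I want excluded, refresh the texture, then show them again.
    excluded = [ob for ob in scene.objects if "exclude_from_depth" in ob]
    for ob in excluded:
        ob.setVisible(False, False)
    own["render"].refresh(True)
    for ob in excluded:
        ob.setVisible(True, False)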

This would be useful for making water shaders. Or maybe look into Martinsh’s water shader? It uses something to calculate depth, I’m not sure what, but this might be it!

Martinsh’s water shader itself has artifacts where objects meet the water, due to the lack of exactly this.

Yes, it’s a bit complex. In the API there is a depth attribute for rendered textures: http://www.blender.org/api/blender_python_api_2_74_5/bge.texture.html?highlight=depth#bge.texture.ImageRender.depth

but I don’t know how to use it… Sorry
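
From the docs it looks like it would be used something like this, but I haven’t got anything sensible out of it myself (untested sketch):

from bge import logic, texture

scene = logic.getCurrentScene()
source = texture.ImageRender(scene, scene.active_camera)
source.depth = True          # image becomes an array of floats instead of RGBA bytes

# Accessing .image should trigger the render; the values are raw, non-linear
# depth in [0, 1], so they can't be fed straight back in as a texture.
depth_values = source.image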

This question made me do this:

Because it reminded me of two things:
-render to texture
-depth

How’s this related? I’m trying to get the camera depth buffer …

Well, it is not related. Those two words just inspired me to make this!

As far as I’m aware the depth texture holds normalised values, so it will be constrained between 0 and 1. Furthermore, it’s non-linear, so without linearising it, it doesn’t have any direct relation to distance in Blender units. Here are some links on it:

http://www.roxlu.com/2014/036/rendering-the-depth-buffer

http://www.geeks3d.com/20091216/geexlab-how-to-visualize-the-depth-buffer-in-glsl/

When you linearise the depth value it’s usually done in relation to the camera near and far distances, so 0.5 becomes half the view distance.
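
For example, something like this (formula taken from the geeks3d article; the near/far values are just illustrative):

def linearize_depth(z, near, far):
    """Map a raw [0, 1] depth-buffer value to a roughly distance-proportional
    [0, 1] value (~0 at the near plane, 1 at the far plane)."""
    return (2.0 * near) / (far + near - z * (far - near))

# With near=0.1 and far=100.0, a raw value of 0.5 linearises to roughly 0.004,
# i.e. a point very close to the camera rather than halfway to the far plane.
print(linearize_depth(0.5, near=0.1, far=100.0))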

Generally speaking, there aren’t many BGE GLSL tutorials out there; you’re better off just searching for general GLSL articles, as the code translates over to Blender fairly easily.

I can linearize depth in GLSL; the problem is that the depth doesn’t come through as an array when it’s passed as a texture via render to texture. The only option seems to be the depth attribute, which apparently returns an array, but I’m not sure how to linearize that and make it texture friendly within Python.

I’m not entirely sure what you’re up to (I’m really tired). I’ve never passed things back to Python from GLSL. Have you got a .blend I could have a tinker with? I’ve got Friday afternoon off, so I can have a play then.

Anyone?
Here’s a bit of a summary: I want to get the linearised depth buffer into a texture, not into a GLSL shader overriding the material.

Something like this?
cam.blend (823 KB)

Thanks agoose77 for sharing. It could be useful in many shaders. I’d like to understand it, but it’s again too advanced for me.

Amazing, Agoose! Will spend some time understanding it, thanks!

Edit: what’s the importance of “KeyError” in the except statement?

Edit 2: The rendered image seems to be static, even when enabling positive pulse ticks (which drops the framerate to 0.4). Any thoughts?

The KeyError exception is raised when the game property doesn’t exist. Catching it is a way of doing the initialisation in the same function as the update code (that’s not the only reason to catch such exceptions, but it’s the reason here).
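
In other words, something like this (property and helper names made up):

def update(cont):
    own = cont.owner
    try:
        tex = own["depth_texture"]       # raises KeyError on the first frame only
    except KeyError:
        # One-off initialisation, done lazily inside the update function.
        tex = own["depth_texture"] = create_depth_texture(own)   # hypothetical helper
    tex.refresh(True)                    # normal per-frame work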

It wasn’t quite initialised correctly, as I rushed to finish it before going to sleep. Even so, it’s very slow when working properly because it all happens in Python. If you used a C module, you could improve the speed further.
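
To give an idea of why it’s slow, the Python version boils down to a per-pixel loop along these lines (a rough sketch of the approach rather than the exact code in the .blend; the setup and names are assumptions):

from bge import logic, texture

def depth_to_rgb(depth_buffer, near, far):
    # The slow part: one Python-level iteration per pixel.
    rgb = bytearray()
    for i in range(len(depth_buffer)):
        z = depth_buffer[i]
        grey = int(255.0 * (2.0 * near) / (far + near - z * (far - near)))
        rgb += bytes((grey, grey, grey))
    return bytes(rgb)

def update(cont):
    own = cont.owner
    scene = logic.getCurrentScene()
    cam = scene.active_camera

    if "source" not in own:
        own["source"] = texture.ImageRender(scene, cam)
        own["source"].depth = True                 # raw float depth values
        width, height = own["source"].size
        own["buffer"] = texture.ImageBuff(width, height)
        own["texture"] = texture.Texture(own, 0)   # assumes a textured material in slot 0
        own["texture"].source = own["buffer"]

    source = own["source"]
    width, height = source.size
    rgb = depth_to_rgb(source.image, cam.near, cam.far)
    own["buffer"].load(rgb, width, height)         # feed the linearised greys back in as RGB
    own["texture"].refresh(False)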

Here’s a C++ extension module (using Boost.Python) to make something that’s almost performant.
Unfortunately, bgl.Buffer doesn’t support the memoryview protocol, so we have to convert to and from a PyList, which is what slows things down the most.


// depth_to_linear_rgb.cpp : Defines the exported functions for the DLL application.

#include "stdafx.h"
#include <boost/python.hpp>

char const* greet()
{
    return "hello, world";
}

PyObject* convert(PyObject* depth_list, double near_plane, double far_plane)
{
    double far_add_near = far_plane + near_plane;
    double far_sub_near = far_plane - near_plane;
    double double_near_255 = near_plane * 2 * 255;

    Py_ssize_t length = PyObject_Length(depth_list);
    PyObject* result_list = PyList_New(length * 3);

    Py_ssize_t new_i = 0;

    for (Py_ssize_t i = 0; i < length; ++i)
    {
        // Convert from Python
        PyObject* py_depth = PyList_GET_ITEM(depth_list, i);
        double depth = PyFloat_AS_DOUBLE(py_depth);

        // Linearise and scale to an integer grey value in 0-255
        int linear_depth = (int)(double_near_255 / (far_add_near - depth * far_sub_near));
        PyObject* py_linear_depth = PyLong_FromLong(linear_depth);

        // The same object fills the R, G and B slots; PyList_SET_ITEM steals a
        // reference each time, so add the two extra references it needs.
        PyList_SET_ITEM(result_list, new_i++, py_linear_depth);
        PyList_SET_ITEM(result_list, new_i++, py_linear_depth);
        PyList_SET_ITEM(result_list, new_i++, py_linear_depth);
        Py_INCREF(py_linear_depth);
        Py_INCREF(py_linear_depth);
    }

    return result_list;
}

BOOST_PYTHON_MODULE(depth_to_linear_rgb)
{
    using namespace boost::python;
    def("greet", greet);
    def("convert", convert);
}

Here’s a Windows 64-bit Python 3.4 DLL (pyd) that you can try (if you have the same specs): Release.zip (259 KB)
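
Usage from the game engine side would look roughly like this (a sketch only; the ImageRender/ImageBuff setup and the property names are assumptions, and it assumes the active camera is the one the ImageRender uses):

import depth_to_linear_rgb
from bge import logic

def update(cont):
    own = cont.owner
    cam = logic.getCurrentScene().active_camera

    source = own["depth_source"]              # an ImageRender with depth = True, set up elsewhere
    width, height = source.size

    # bgl.Buffer -> PyList -> C++ -> PyList of 0-255 greys, three per pixel.
    buf = source.image                        # triggers the render; raw float depth values
    depth_list = [buf[i] for i in range(len(buf))]
    rgb = depth_to_linear_rgb.convert(depth_list, cam.near, cam.far)

    own["image_buffer"].load(bytes(rgb), width, height)   # ImageBuff, set up elsewhere
    own["texture"].refresh(False)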

In the Physics tab you can set a mesh to act as an Occluder. Maybe that does the trick.

Great work, though it’s too slow a process on the CPU. I’ll give compute shaders with pyOGL a try for passing the array; hopefully that will be faster than the current CPU implementation. Thanks for your help!

Do you mean moving it from the CPU to the GPU? My CPU renders 4x faster than my GPU in Cycles; actually my CPU is much, much stronger than my GPU :smiley:

I know this thread is some months old and I’m not sure I understand the problem right, but I have also searched for a way to get the z-buffer of a camera. I have noticed that when I use a perspective camera, the render to texture (zbuff = True) doesn’t give me values that I can use to work with distances.

But when I use an orthographic camera it works fine, and the values represent the depth between the camera’s near and far clipping distances.
Maybe that’s helpful for some of you :slight_smile:
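
If that’s right, turning those values into distances should just be a linear blend between the clipping distances, e.g.:

def ortho_depth_to_distance(z, near, far):
    # z in [0, 1] from the orthographic zbuff texture -> distance in Blender units
    return near + z * (far - near)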

Yeah, it’s not possible to “cleanly” pass the depth array to the GPU for normalisation, and normalising on the CPU is really slow. There might be a chance of somehow reading the rendered texture straight from GPU memory, though, except that I’ve forgotten what I needed this for :stuck_out_tongue: