Data formatting of the Z-buffer

I am currently trying to generate synthetic depth images using Blender. I understand that the Z buffer can currently be output through the compositor nodes into OpenEXR, and I have successfully done that. However, when I open the image in OpenCV and read the values of each pixel, they do not seem to correctly reflect the distance of each pixel from the camera.
I read that when the Z buffer is exported to OpenEXR, each pixel contains the Z-buffer, alpha value, and color render information. Is this correct? If so, how can I extract just the Z-buffer data?
Thanks

The Z depth is stored in Blender units as floating point: 1 meter = 1, 2 meters = 2, etc.

As much of the scene will be further away than 1 meter, what you’ll see in the Z buffer is brighter than white…

If you want to map camera depth to the 0 to 1 range, then use a “Map Range” node…

Plug “Z” into the “Value” input,
set “From Min” to the near clip plane of your camera and “From Max” to the far clip plane,
then plug the output value into the “Z” input of your Composite node.
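In case it helps, here is a rough, untested Python sketch of that node setup. It assumes the default “Render Layers” and “Composite” compositor nodes exist, that the Composite node still has a “Z” input in your Blender version, and that the depth output socket is named “Depth” in newer builds and “Z” in older ones:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
bpy.context.view_layer.use_pass_z = True   # make sure the Z pass is rendered

tree = scene.node_tree
nodes, links = tree.nodes, tree.links

# assumes the default compositor nodes are present
rlayers = nodes.get("Render Layers") or nodes.new("CompositorNodeRLayers")
composite = nodes.get("Composite") or nodes.new("CompositorNodeComposite")

cam = scene.camera.data

# Map Range node: remap camera depth from [clip_start, clip_end] to [0, 1]
map_range = nodes.new("CompositorNodeMapRange")
map_range.inputs["From Min"].default_value = cam.clip_start
map_range.inputs["From Max"].default_value = cam.clip_end
map_range.inputs["To Min"].default_value = 0.0
map_range.inputs["To Max"].default_value = 1.0

# the depth socket is called "Depth" in newer versions, "Z" in older ones
z_out = rlayers.outputs.get("Depth") or rlayers.outputs["Z"]
links.new(z_out, map_range.inputs["Value"])
links.new(map_range.outputs["Value"], composite.inputs["Z"])
```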

Are you sure, Michael W?

I mean, that’s how the Z socket of the Z pass of a Render Layers node works, but doesn’t the Z-buffer of an image depend on the clipping values of the camera for the mapping?
I remember a discussion about that a short while ago, and that’s what I took away from it.

EDIT: after some tests, it seems that is indeed how it behaves…

@[z]en: welcome to the forum.
For your information, there is a subsection dedicated to support threads like this one.

@Michael_W Thanks!! My problem was that I was trying to read the binaries as ints; I didn’t realize that the depth was stored as floating point, silly me. Thanks again!!
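For anyone else who runs into this, a minimal Python/OpenCV sketch of reading the depth back as floats might look like the following (the filename is a placeholder, and the OPENCV_IO_ENABLE_OPENEXR flag is only needed on newer OpenCV builds where EXR reading is disabled by default):

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # allow EXR reading in newer OpenCV builds

import cv2
import numpy as np

# read the file as-is: 32-bit float channels, no conversion to 8-bit ints
depth = cv2.imread("depth0001.exr", cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)

# if the EXR has several channels, the depth is typically the same in each,
# so keep just one channel
if depth.ndim == 3:
    depth = depth[:, :, 0]

print(depth.dtype)                                        # float32, not int
print(depth[depth.shape[0] // 2, depth.shape[1] // 2])    # center pixel's distance in Blender units
print(depth.min(), depth.max())
```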

@Gwenouillen: Thanks! I will keep that in mind the next time I post!!