Blackmagic Cinema Camera and Blender Color Space Help

Hey guys,

I need some help understanding color space. I bought a Blackmagic Cinema Camera for filmmaking. I record in ProRes HQ using film mode to get the 13 stops of dynamic range. It’s amazing how much detail the camera captures that I can bring out in color grading during normal video editing.

The issue, though, when it comes time for visual effects is the log color space of the footage. Working with log is very new to me; in the past I have normally shot with DSLRs. I see in a lot of other forums that people like to convert log to linear and then work with effects, but I’m not sure I have my head completely wrapped around this. To me the footage is obviously too flat to do anything major like accurate camera tracking or lighting a 3D scene. Or is it? My question is: what is the best workflow for working with log video from this camera in Blender? :confused:

You’re going to love that camera :slight_smile: I had the BMPCC once and it was very fun to shoot with.
Regarding your questions, though, it depends on what you’re going to be doing with it in Blender.
Are you going to be using it for camera tracking? Compositing (there are other, better free tools for film work)? Using frames or panoramas for IBL?
Please expand on your goals :slight_smile:

Well, I have kind of an odd setup. I’m running a Linux machine (Fedora) and trying to be able to do everything on it. My use of Blender would mostly be dabbling in a little of everything it can do for whatever a shot needs, whether that’s simply adding skyscrapers or a UFO in the background with camera tracking and Cycles, or giving a shot a total CG overhaul, then rendering and importing into my video editor. I’m just unsure how I should handle log footage in Blender compared to linear.

You should find out how the Tears of Steel folks worked with their footage. It was shot with a Sony F65.
You can import your video into Blender and just change the input color space (see the snippet below); sRGB should give some contrast, and you can then adjust it further in the color parameters.
Do you want to add a LUT in Blender? Not sure if that’s possible.
Preprocessing it to add more contrast is a good idea for tracking. Personally, I don’t find myself needing to adjust the colors of plates inside Blender; I do all of that in Nuke or AE.
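
For the input color space change mentioned above, here’s a minimal Python sketch of doing it per clip instead of through the UI (untested, and the file path is a placeholder):

```python
import bpy

# Load the footage as a movie clip (path is a placeholder).
clip = bpy.data.movieclips.load("/footage/shot01.mov")

# Tell Blender how the pixels are encoded so color management
# can interpret them; sRGB gives the extra contrast mentioned above.
clip.colorspace_settings.name = 'sRGB'
```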

This might help:- https://mango.blender.org/uncategorized/color-spaces/

You can. Blender’s colour management uses LUTs, and you can add custom ones to the config.ocio in 2.74/datafiles/colormanagement.
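
If you’d rather script the config change than hand-edit the YAML, something along these lines should work with the OCIO 1.x Python bindings that match Blender 2.7x. This is an untested sketch; the colorspace name and LUT filename are made up for illustration:

```python
import PyOpenColorIO as OCIO

# Load Blender's shipped config (path is a placeholder).
config = OCIO.Config.CreateFromFile("config.ocio")

# Define a new colorspace whose to-reference transform is a 1D LUT.
cs = OCIO.ColorSpace(name="BMC Film Log")
lut = OCIO.FileTransform("bmc_film_log.spi1d",
                         interpolation=OCIO.Constants.INTERP_LINEAR)
cs.setTransform(lut, OCIO.Constants.COLORSPACE_DIR_TO_REFERENCE)
config.addColorSpace(cs)

# Write the modified config back out.
with open("config.ocio", "w") as f:
    f.write(config.serialize())
```

The LUT file itself would go in the luts folder next to the config.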

Adding LUTs, now that’s cool :cool: I guess I’ll be playing around with that for a while to see what I can do.

Thanks guys.

The official Blender release doesn’t have a preset for the Blackmagic film curve, at least none that I’ve ever seen. The most straightforward thing to do would be to pipe the ProRes files into DaVinci Resolve (which you probably got with the camera) and convert the footage to linear RGB, which would retain the most dynamic range and image quality. You can then export linear EXR files to use in Blender. That’s close to the process they used for Tears of Steel.
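
Once you have the EXRs out of Resolve, pulling them into Blender is simple; for example (paths are placeholders):

```python
import bpy

# Load one frame of the EXR sequence exported from Resolve.
img = bpy.data.images.load("/renders/shot01_0001.exr")

# The data is already scene-linear, so no transfer curve should be applied.
img.colorspace_settings.name = 'Linear'
```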

I haven’t seen a specification on the encoding format for the YCbCr encoded ProRes files, and without that, we are somewhat shooting in the dark.

If you try to load the ProRes codec stream directly, you will almost certainly be downgraded to an 8-bit stream, and it will likely be incorrectly decoded. This is typical across many different applications, as decoding and encoding is a rather complex set of operations where any single step can botch your footage.

Best practice is to dump to a raw image stream and assert that your decode went correctly.

If you use a bleeding-edge FFmpeg, you can dump to a native-bit-depth TIFF file, for example. I’ll go out on a limb and speculate that the ProRes stream is encoded using 709 weights. You should assert that this is the case, since if you use the wrong weights to decode the YCbCr, your footage will be dumped incorrectly. Also assert whether the footage is encoded as broadcast or full range.
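
As a concrete sketch of that dump step, driven from Python so it stays scriptable (untested; the 709 weights and full range here are exactly the assumptions you would want to verify first):

```python
import subprocess

# Dump the ProRes stream to 16-bit-per-channel TIFF frames.
subprocess.check_call([
    "ffmpeg", "-i", "clip.mov",   # input filename is a placeholder
    # Decode the YCbCr with 709 weights; switch in_range to 'limited'
    # if you determine the files are broadcast range.
    "-vf", "scale=in_color_matrix=bt709:in_range=full",
    "-pix_fmt", "rgb48le",        # 16 bits per channel RGB
    "frames/frame_%06d.tif",
])
```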

Once you have a native image format, you can ingest it into Blender. There will be two transforms you will need to be careful with. The first is the transfer curve. This is a 709 curve in most footage, but in the case of the BMCC it is some form of log curve. I’m not entirely sure which one it would be, but plausibly a Cineon log. Given that Blender’s internal reference space is linearized, the footage will be rolled inversely through the transform and you will end up with roughly linearized footage values.
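
For reference, here is roughly what rolling the Cineon 1D LUT inversely amounts to, i.e. the standard Kodak log-to-linear conversion (a sketch only; the actual spi1d LUT in the nuke-default config is the authority):

```python
# Convert a 10-bit Cineon code value (0-1023) to a linear value,
# using the conventional 685 white point, 95 black point and 0.6 gamma.
def cineon_to_linear(code):
    offset = 10 ** ((95 - 685) * 0.002 / 0.6)
    gain = 1.0 / (1.0 - offset)
    return gain * (10 ** ((code - 685) * 0.002 / 0.6) - offset)

print(cineon_to_linear(685))  # ~1.0, nominal white
print(cineon_to_linear(95))   # 0.0, black
```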

The second transform is a potential color transformation based on the primaries of the camera. I’m not familiar enough with the camera to know whether the color primaries are converted to sRGB / 709 primaries in-camera. If they are, you don’t need to add this transform to your custom group transform in OCIO. If the footage is in any other set of primaries, you will need to transform from those primaries to sRGB / 709.
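
If a primaries conversion does turn out to be needed, it is just a 3x3 matrix applied to the linearized RGB. The sketch below uses a placeholder identity matrix, since I don’t have the actual camera-to-709 numbers; those would have to come from Blackmagic or be pulled out of Resolve:

```python
# PLACEHOLDER identity matrix; substitute the real camera-to-709 matrix.
M = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

def camera_to_709(rgb):
    # Plain 3x3 matrix multiply on a linear RGB triplet.
    return [sum(M[r][c] * rgb[c] for c in range(3)) for r in range(3)]
```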

That’s the long and the short of it, but sadly I’m a little short on accurate information to give you a definitive solution.

In the end, you have a color management system in place that can let you harness the footage assuming the correct details can be gleaned from specifications and documents.

It should be noted that you can leverage existing LUTs where the reference space assumptions are identical. In the case of Blender, its config evolved from the Nuke reference space LUTs. These are available, including the Cineon log 1D LUT, in the official repository at https://github.com/imageworks/OpenColorIO-Configs

With respect,
TJS

Thank you for the tip. It took me a few tries to edit the config file and install the LUTs, but I got it. Here’s my comparison between the original and Cineon.

Original BMCC footage:
[image]

Cineon through Blender:
[image]

I believe the BMC log curve is custom, so you would need the LUT from Resolve. In addition, there is another LUT for the color primaries.

Sadly, those would only apply to the raw footage, and I am unsure how the ProRes files are encoded.