I haven’t seen a specification of the encoding format for the YCbCr-encoded ProRes files, and without that, we are somewhat shooting in the dark.
If you try to load the ProRes codec stream, you will almost certainly be downgraded to an 8-bit stream, and it will likely be incorrectly decoded. This is typical across many different applications, as decoding and encoding are a rather complex set of operations where any single operation can botch your footage.
Best practice is to dump to a raw image stream and assert that your decode went correctly.
If you use a bleeding-edge FFmpeg, you can dump to a native-bit-depth TIFF file, for example. I’ll go out on a limb and speculate that the ProRes stream is encoded using 709 weights. You should assert that this is the case, because if you use the wrong weights to decode the YCbCr, your footage will be dumped incorrectly. Also assert how the footage is encoded, and whether it is broadcast or full range.
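To see why asserting the weights matters, here is a rough sketch in plain Python of decoding a single broadcast-range YCbCr sample with 709 weights versus 601 weights. The sample values are made up for illustration; the point is that the same encoded pixel lands on visibly different RGB values depending on which luma weights you assume:

```python
def ycbcr_to_rgb(y, cb, cr, kr, kb):
    """Decode one broadcast-range 8-bit YCbCr sample to normalized
    R'G'B' using luma weights kr / kb (with kg = 1 - kr - kb)."""
    kg = 1.0 - kr - kb
    yn = (y - 16.0) / 219.0     # broadcast-range luma: 16-235
    cbn = (cb - 128.0) / 224.0  # broadcast-range chroma: 16-240
    crn = (cr - 128.0) / 224.0
    r = yn + 2.0 * (1.0 - kr) * crn
    b = yn + 2.0 * (1.0 - kb) * cbn
    g = (yn - kr * r - kb * b) / kg
    return (r, g, b)

# The same encoded pixel, decoded with 709 versus 601 weights:
pixel = (120, 150, 100)
rgb_709 = ycbcr_to_rgb(*pixel, kr=0.2126, kb=0.0722)
rgb_601 = ycbcr_to_rgb(*pixel, kr=0.299, kb=0.114)
```

Every channel of `rgb_709` and `rgb_601` differs, which is exactly the sort of skew you get when a decoder silently assumes the wrong matrix. The same logic applies to the range assertion: feeding full-range code values through the broadcast-range normalization above would clip and stretch the footage.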
Once you have a native image format, you can ingest it into Blender. There are two transforms you will need to be careful with. The first is the transfer curve. This is typically a 709 curve in most footage, but in the case of the BMCC it is some form of log curve. I’m not entirely sure which one it would be, but plausibly a Cineon log. Given that Blender’s internal reference space is linear, the footage will be rolled inversely through the transfer curve, and you will end up with roughly linearized footage values.
The second transform is a potential color transformation based on the primaries of the camera. I’m not familiar enough with the BMCC to know if the color primaries are converted to sRGB / 709 primaries in the camera. If they are, you don’t need to add this transform to your custom group transform in OCIO. If the footage is in any other set of primaries, you will need to transform from those primaries to sRGB / 709.
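As a sketch of what that custom group transform might look like in an OCIO configuration: the colorspace name, the LUT file name, and the matrix values below are all placeholders, and the Cineon assumption remains unverified, so treat this as a template rather than a working entry.

```yaml
  - !<ColorSpace>
    name: BMCC Log            # placeholder name
    bitdepth: 32f
    isdata: false
    to_reference: !<GroupTransform>
      children:
        # 1) Undo the assumed Cineon-style log transfer curve; the
        #    LUT file here is a placeholder for a log-to-linear 1D LUT.
        - !<FileTransform> {src: cineon.spi1d, interpolation: linear}
        # 2) Only if the camera primaries are NOT already sRGB / 709:
        #    a camera-to-709 matrix (identity shown as a placeholder).
        - !<MatrixTransform> {matrix: [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1]}
```

If the camera converts to 709 primaries internally, you would simply drop the MatrixTransform child and keep the log-to-linear FileTransform on its own.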
That’s the long and the short of it, but I’m a little short on accurate information to give you a definitive solution sadly.
In the end, you have a color management system in place that lets you harness the footage, assuming the correct details can be gleaned from specifications and documents.
It should be noted that you can leverage existing LUTs where the reference space assumptions are identical. In the case of Blender, its configuration evolved from the Nuke reference space LUTs. These are available, including the Cineon log 1D LUT, in the official repository at https://github.com/imageworks/OpenColorIO-Configs
With respect,
TJS