Understanding Color Space in Blender

Hello everyone.

I am a bit confused about color space in Blender; I didn’t understand how to manage a file which is 14bpc linear, 16bpc linear, 16bpc float, etc. when I choose 16bpc linear or 16bpc float as the output.

Inside the compositor I see numbers like X.xxx (for example in the Value or RGB input nodes), even though as output I chose a JPEG 2000 at 16bpc.

I need to know how to choose a value precisely (for now only in the 16bpc linear color space) because I need to generate some test patterns.

Also, how can I incorporate a 3D LUT or ASC CDL (American Society of Cinematographers Color Decision List) in the compositor?

This is both for testing purposes and to establish a simple, open-source workflow for proposing Blender to photographers and cinematographers (I want to allow people to migrate from Photoshop Camera Raw / Lightroom and DaVinci Resolve to Blender).

I looked deeply into the documentation but I didn’t understand how to do that.

Now I’m searching through MANGO’s documentation and discussions.

Thanks.

Internally everything is 32bpc float linear (I think it’s stuck with sRGB primaries too?). Output format is just that: output. It’s generated on the fly from the final 32bpc float render buffer when writing the images, so it just applies gamma correction and converts the format when you save. That also means everything gets linearized and converted to 32bpc float on input too, since the compositor doesn’t work on anything else.
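In rough terms, and purely as an illustration (this isn’t Blender’s actual code path, which goes through OCIO, and the 16-bit target here is just a stand-in for whatever the chosen file format stores), the save step amounts to something like:

```python
import numpy as np

def srgb_encode(linear):
    """Standard sRGB transfer function: linear values -> non-linear code values."""
    linear = np.asarray(linear, dtype=np.float32)
    return np.where(linear <= 0.0031308,
                    linear * 12.92,
                    1.055 * np.power(np.clip(linear, 0.0031308, None), 1.0 / 2.4) - 0.055)

def quantize(encoded, bits=16):
    """Map the 0..1 float buffer onto the integer code values of the output file."""
    scale = (1 << bits) - 1
    return np.round(np.clip(encoded, 0.0, 1.0) * scale).astype(np.uint16)

render_buffer = np.array([0.0, 0.18, 1.0], dtype=np.float32)   # 32bpc float linear
print(quantize(srgb_encode(render_buffer)))                    # mid grey lands near code 30000,
                                                               # not at 0.18 * 65535
```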

For LUT setup, you might find this helpful? http://wiki.blender.org/index.php/User:Sobotka/Color_Management/Calibration_and_Profiling#Create_a_3D_Display_Profile_LUT

You didn’t explain yourself.

However, even though all calculations take place in 32bpc float (and you said float linear), the conversions between color spaces are important.

And I can’t input a color value which belongs to 16bpc linear, because the smallest value is 0.001. I tried selecting ‘None’ under Display Device in the Color Management tab (Scene properties) and it allows more zeros, but I can’t reach 0.0000152587890625.

I’m still confused.

The file format loaded, as stated above, is always taken to a 32 bit float reference space in the compositor. It would be resaved back to whatever format the file format dictates.

See above. File formats don’t have any impact on how the data is stored in the internal buffers and mixed in the reference space.

Linear is not a colour space. The folks that have said that it is were incorrect.

The CDL is already in the colour correct node. The CDL, however, does nothing to ensure that your pipeline is correct, so you would have to assert several things before you could use it and expect similar output across different contexts: what is the colour space of the reference space you are coming from, etc.
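For reference, the CDL maths itself is tiny: a per-channel slope, offset and power, followed by a saturation step around a Rec.709 luma. A sketch (Blender’s node may differ in details such as clamping and the working space it assumes):

```python
import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])

def asc_cdl(rgb, slope, offset, power, saturation=1.0):
    """Slope/offset/power per channel, then saturation around a Rec.709 luma."""
    rgb = np.asarray(rgb, dtype=np.float32)
    sop = np.clip(rgb * slope + offset, 0.0, None) ** power   # negatives clamped before the power
    luma = np.dot(sop, REC709_LUMA)
    return luma + saturation * (sop - luma)

print(asc_cdl([0.18, 0.18, 0.18],
              slope=np.array([1.1, 1.0, 0.9]),
              offset=np.array([0.01, 0.0, -0.01]),
              power=np.array([1.0, 1.0, 1.2]),
              saturation=0.9))
```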

Regarding 3D LUTs, as stated above by J, they can be added to your viewing transform chain with some modification of the OpenColorIO environment files.
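As a rough sketch of what that boils down to, using the 1.x-era PyOpenColorIO bindings (the LUT filename is hypothetical, and the calls are spelled differently in OCIO 2.x):

```python
import PyOpenColorIO as OCIO          # OCIO 1.x-style Python bindings

config = OCIO.GetCurrentConfig()      # resolved from the $OCIO environment variable

lut = OCIO.FileTransform()            # hypothetical 3D LUT sitting in the config's search path
lut.setSrc('film_print_emulation.cube')
lut.setInterpolation(OCIO.Constants.INTERP_LINEAR)

processor = config.getProcessor(lut)
print(processor.applyRGB([0.18, 0.18, 0.18]))   # push a single value through the LUT
```

The same FileTransform can be referenced from the config itself so it becomes part of a view, which is roughly what surfaces in Blender’s viewing transform list.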

One’s knowledge of colour pipelines needs to be rather tight to begin handling colour space transforms. It can be accomplished, however.

There aren’t too many folks around Blender that understand the nuances of colour spaces, and as such, there is little to no documentation on it.

Don’t bother.

The handling of colour spaces is via OpenColorIO and how the architecture of Blender was designed. It can be controlled more precisely and with different configurations by adjusting the environment files.

There is no such thing as a colour value that is 16bpc linear. Or rather, any given 16bpc RGB value could be linearized with respect to an infinite number of RGB colour spaces. Linear, again, is not a colour space. Please focus on that statement before proceeding.

Linear is not a colour space.
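As an illustration of why: a 16-bit code value is just a number, and it only becomes a particular quantity of light once you know the transfer function (and, for chromaticity, the primaries) it was encoded against.

```python
def srgb_decode(v):
    """Standard sRGB decode of a normalized code value."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

code = 32768                 # the same 16-bit code value...
v = code / 65535.0
print(v)                     # ...read as linearly encoded data:  ~0.5000
print(srgb_decode(v))        # ...read as sRGB encoded data:      ~0.214
print(v ** 2.2)              # ...read as "gamma 2.2" data:       ~0.218
```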

Useful questions that are unanswered thus far are:

  • What type of test chart are you trying to create?
  • Why?
  • What is the purpose of the test chart in the pipeline?
  • What is the pipeline in this instance, and what specifically is it attempting to solve within the context of this problem?

Etc.

With respect,
TJS

What type of test chart are you trying to create?

A chart with some color patches, a dynamic-range grayscale, and a human female face (I will use an HDR shot for the face), with a bit of noise, and Bayered if possible.
So I can test the processing of a 16bpc raw image.
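For the Bayering I was thinking of something simple like this (just a sketch; an RGGB layout is assumed):

```python
import numpy as np

def bayer_rggb(rgb):
    """Mosaic an H x W x 3 float image into a single-plane RGGB Bayer pattern."""
    h, w, _ = rgb.shape
    raw = np.empty((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R on even rows, even columns
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B on odd rows, odd columns
    return raw
```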

Why?

Because I want to do research on processing 16bpc raw images (in the near future I will start doing that on FPGAs), and also to use Blender instead of proprietary software for the color grading (and final rendering to an 8bpc image) of digitized films. I will shoot an open movie (an action short) on 16mm film, and I want to release everything on GitHub, so a complete, 100% open-source pipeline is necessary.

What is the purpose of the test chart in the pipeline?

For testing purposes! :smiley: Like the Kodak LAD test… Also, someone who wants to test hardware without purchasing a costly sensor could use those test images to run the processing on the hardware and see whether it works and how much time it takes (like me).

What is the pipeline in this instance, and what specifically is it attempting to solve within the context of this problem?

I spent the last three years studying cinematography in depth, including digital cinematography and even shooting film, to reach an understanding of latitude and dynamic range. This field is snubbed by almost everyone, especially the new guys (who weren’t cinematographers back in the days of film) and the market (the market tries to sell high pixel counts and high frame rates, while the most important things are latitude and dynamic range, and even the shutter type…).
I need to pursue those studies and let people know.

The main thing is that I want to shoot on film, the Kodak Vision3 stocks, then digitize the negative and color grade it. Usually this process is done with very expensive film scanners, and the color grading is done at least in DaVinci Resolve (I say at least because that is the bare minimum, and it is closed source; the common way in cinema is to use dedicated machines, the DaVinci Resolve hardware and not just the software).

My intention is to get rid of closed-source tools and allow people to learn and complete some production tasks even when they don’t have a budget.

With respect, you really engaged me in the discussion, thank you so much.

Scipione

This is very easy to accomplish, but will require you to learn a few tools. I’d start by suggesting you acquaint yourself with OpenImageIO and OpenColorIO to start. If you can compile them yourself, and cyclically compile them, between the two there are enough tools in the source distributions to craft precisely what you want. The key is to note exactly what format your source images are in, and make sure the OpenColorIO configurations are set to properly transform them.
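As a small sketch of what that looks like with the OpenImageIO Python bindings (the filename, sizes and values are placeholders; check the exact spelling of the calls against the version you build), writing exact linear values into a float EXR is along these lines:

```python
import numpy as np
import OpenImageIO as oiio

width, height = 1024, 64
ramp = np.linspace(0.0, 1.0, width, dtype=np.float32)      # exact linear code values
pixels = np.tile(ramp[None, :, None], (height, 1, 3))      # H x W x 3 grey ramp

spec = oiio.ImageSpec(width, height, 3, "float")
out = oiio.ImageOutput.create("ramp_linear.exr")
out.open("ramp_linear.exr", spec)
out.write_image(pixels)
out.close()
```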

Grading is a colossal can of worms, which likely will involve extending your knowledge base a little bit. At risk of speculating wildly about your background, I’d strongly encourage you to read Jeremy Selan’s document on colour that became the Visual Effects Society reference paper. http://cinematiccolor.com/

Pay close attention to the scene referred versus display referred differences here, because it has a tremendous impact on attempting to encode scene referred data into containers. Not all containers can encode it, nor encode it well, let alone being able to decode it correctly. Grading’s entry point and pipeline will be radically different if you choose to remain in a scene referred reference versus a display referred reference.
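A tiny numeric illustration of the container point (nothing Blender specific here):

```python
import numpy as np

scene_referred = np.array([0.0, 0.18, 1.0, 4.0, 16.0], dtype=np.float32)   # linear, open-ended

as_uint8 = np.round(np.clip(scene_referred, 0.0, 1.0) * 255).astype(np.uint8)
as_half  = scene_referred.astype(np.float16)

print(as_uint8)   # [  0  46 255 255 255]  -> everything above 1.0 is gone for good
print(as_half)    # roughly [0. 0.18 1. 4. 16.]  -> the highlights survive
```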

This sort of stuff is a bit of a fool’s errand. Sensors and their behaviour are messy because they are effectively “baking” spectral data into a tristimulus encoded schema. Going from one model to another incurs a degree of irreversibility, and therefore your predictability is going to be littered with issues.

There are a good number of folks that already know, and know stuff that would make your head explode. They generally hang out in visual effects circles and idle on lists like the OpenImageIO, OpenColorIO, Academy ACES and CTL lists. :wink:

DaVinci and Filmlight are the reference standards.

I’d suggest that perhaps you skip a generational loss and shoot on a digital camera that supports a low level log curve. This brings with it the perceptual encoding compression that film has, while sacrificing data granularity that a purely linear sensor offers. They do this by low level voltage trickery and other alchemy, but the net sum is that you get a batch of data with a log-like curve to it, and a large latitude compared to sensors that are set to a purely linear recording.
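Purely as an illustration of the shape of such a curve (a toy shaper, not SLog, LogC or any other vendor’s actual maths):

```python
import numpy as np

def toy_log_encode(x, middle_grey=0.18, min_stops=-6.0, max_stops=6.0):
    """Map +/- 6 stops around middle grey onto 0..1 (illustrative only)."""
    stops = np.log2(np.maximum(x, 1e-6) / middle_grey)
    return np.clip((stops - min_stops) / (max_stops - min_stops), 0.0, 1.0)

for linear in (0.18 / 16, 0.18, 0.18 * 16):        # -4, 0 and +4 stops
    print(linear, "->", toy_log_encode(linear))
```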

Sony offers a few, that are affordable for some and rentable for others.

From there, you can take the data and transform it into an agnostic colour space of your choosing and lay film-like 3D LUTs atop of it that carefully model film’s crosstalk and intensity curves.

Just a thought, but I’m reasonably sure you will find it quite rewarding, as well as skipping all the silly bits of needing a Domino printer etc.

You aren’t going to be able to escape this on some level. For example, that Sony camera that shoots to SLog? Yes, it has a proprietary software wedge to get the content out. Arri too. As does every other vendor.

Once you are in the digital domain however, you are rather fortunate to be surrounded by visual effects companies that have open sourced a good chunk of their lower level pipeline bits so that you can follow those flows.

You might be interested in following the Apertus project, which seeks to make a camera free of the above caveats, but it too would come with design caveats.

Feel free to email me directly if you feel you have a question or two that you don’t want to ask publicly. I’ll do my best to answer.

With respect,
TJS

Some very interesting reading in this thread!

Regarding Blender’s internal color space, I believe it uses Rec.709/sRGB primaries and white point. You will get a hint of this in the color space comboboxes, where “Linear” is described as “Rec.709 (Full Range) Blender native linear space”. Everything else that Blender understands (ACES, XYZ) will be transformed to this.

It would be possible to work in any other color space as well, by reading everything in as raw, doing your own linearisations, etc., and using proper viewer LUTs. Color pickers should be color-space dumb: you get exactly what you punch in.

Instead of multiplying by 0.0000152587890625, divide by 65536.
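Which is the same number, just easier to type exactly:

```python
step = 1.0 / 65536       # one code-value step of a 16-bit range treated as linear
print(step)              # 1.52587890625e-05, i.e. 0.0000152587890625
print(240.0 / 65536)     # any code value N is just N / 65536 (or N / 65535,
                         # depending on whether the full range maps to 1.0)
```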

There is a conversion of the inputs to a tuple of floating-point values, which is what passes through the entire pipeline before being translated back to the output.

Mathematically speaking, even if the values are presented in the input file as an n-tuple, they do not pass through Blender’s pipeline entirely as that same n-tuple. Therefore, the mathematical outcome might not be exactly the same as it would have been if they had.

While this is acceptable for most projects, particularly given the compensation techniques described in earlier posts, it might not be appropriate for a project of your stated rigor. If your goal is to produce a reference chart (file …), these mathematical differences might be deal-breakers against the use of this pipeline.

I have no idea what you are on about.

If you learn a little about how colour / pixels work, and how OCIO was designed to be a no-op regarding reversible 1D transforms, all of this is just rubbish.

Kindly elaborate, please, on what your exact point is here, instead of dwelling upon what it is “I” might or might not “know.” You say that “OCIO was designed to be a no-op regarding reversible 1D transforms”? What do you mean? (Or cite some web pages concerning what you mean.) Instead of sweeping my comment under a rug (without further commentary) as “just rubbish,” where and in what way did my comment so frabjously err?

(And, hey, “Lawd knows I’ve said plenty o’ rubbish before.” I’m not offended. In fact, I’m all ears. And plenty of others might be, too.)