The Great Enigma: Linear Workflow

I’ve always heard linear workflow talked about in those “how to render better” articles as if it were a quality booster on par with HDRI and IES profile lights. Naturally it sounded like a magic wand that would make my renders look perfect (which, of course, no magic wand actually does). But when I went to consult the smartest entity on the Internet (*cough* Google *cough*) about “how to use linear workflow in blender”, I got hobbyist blogs and a handful of papers from Stanford University filled with CIE 1931 chromaticity diagrams, gibberish about parabolas and loci, and scary, scary math. For a while I didn’t even know what “linear workflow” meant, other than that it involved using a linear color space. Yeah, you probably saw where this was going. Can anyone either explain (in simple terms, for not-so-smart people like me) what linear workflow is and how to use it in Blender, or just post links to tutorials or articles on the subject? Any feedback on this confuzzling topic would be greatly appreciated. Cheers!


Linear workflow is… complicated.
In short (and sorry - highly simplified): It deals with the fact that computer monitors don’t display pixel values linearly - they darken the image, crushing detail especially in the dark sections. Because of that, images intended for display on a computer screen have a “gamma correction” applied to their pixel data, which pre-brightens them so that they look correct on the monitor.
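To make that concrete, here is a minimal sketch in plain Python of the standard sRGB transfer functions (the “gamma correction” being described). The function names are mine, not Blender’s; note the piecewise form - a small linear toe near black, then a power curve with exponent 2.4 (often approximated as a simple gamma of 2.2):

```python
def srgb_encode(x):
    """Linear light -> sRGB display value (both in the 0..1 range)."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def srgb_decode(y):
    """sRGB display value -> linear light (inverse of srgb_encode)."""
    return y / 12.92 if y <= 0.04045 else ((y + 0.055) / 1.055) ** 2.4

# Encoding brightens the mid and dark tones considerably:
print(round(srgb_encode(0.18), 3))   # 18% linear grey stores as ~0.461
print(round(srgb_decode(0.5), 3))    # a stored 0.5 is only ~0.214 linear
```

That asymmetry - half the stored range corresponding to roughly a fifth of the linear light range - is exactly the brightening/compression described above.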

Now let’s think of the following scenario: You apply an image texture to your model. This image texture has that gamma correction built in. You render your image and save it as an image file. That rendered image is meant to be displayed on a monitor - so it gets the gamma correction, too. As a result, the texture’s image data has in fact been gamma corrected twice, resulting in a pale, washed-out texture.

Linear workflow means that, because the renderer works in floating-point (linear) space under the hood, all image data fed into the renderer has to be linearized as well. So, to stay with the example above, the texture image we applied to our object is first “de-gamma-ed”: the gamma correction it comes with is removed. Then the image is rendered, and finally a gamma correction is applied for output on the screen. The texture’s original gamma correction is thereby restored, and it is ensured that no double gamma correction takes place.
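The double correction can be seen in numbers. A minimal sketch in plain Python, using the common gamma-2.2 approximation of the sRGB curve (the real curve is piecewise, but 2.2 is close enough for illustration):

```python
GAMMA = 2.2

def encode(x):              # linear light -> display value
    return x ** (1 / GAMMA)

def decode(y):              # display value -> linear light
    return y ** GAMMA

texture = 0.5               # a pixel value from an sRGB texture file

# Wrong: feed the encoded value straight into a linear renderer,
# then gamma-correct the render for display -> corrected twice.
washed_out = encode(texture)        # ~0.73 - pale and washed out

# Linear workflow: de-gamma first, render in linear, encode once.
correct = encode(decode(texture))   # 0.5 - the original look is restored

print(round(washed_out, 2), round(correct, 2))
```

The “render” here is just a pass-through, but the same arithmetic applies to every texture sample in a real render.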

The good news is: In current versions of Blender linear workflow is more or less “built in” and you don’t have to bother much.

8-bit texture images, for example, are almost always in sRGB color space, so Cycles linearizes them by default on import. The viewport, rendered view, compositor backdrop, color swatches etc. are all corrected to the selected display device on the fly.

32-bit images on the other hand are almost always in linear color space, so Cycles takes them automatically “as is”.

Only in rare cases - e.g. when an 8-bit texture is already in linear color space, or a 32-bit image happens to be in sRGB - is a manual correction required to make those textures look correct.

Another thing you need to do manually is to disable the linearization of textures that are really data “disguised” as color information - such as normal maps saved as .png files. In those cases you switch from “Color” to “Non-Color Data” in the corresponding texture node to tell Blender not to linearize (de-gamma) those textures.
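To see why that flag matters, here is a small plain-Python sketch (the `to_vector` helper is hypothetical, just for illustration) of what would happen if a normal-map pixel were linearized as if it were sRGB color:

```python
def srgb_decode(y):
    """sRGB display value -> linear light (the linearization step)."""
    return y / 12.92 if y <= 0.04045 else ((y + 0.055) / 1.055) ** 2.4

def to_vector(r, g, b):
    """Map the stored 0..1 channel range back to the -1..1 normal range."""
    return tuple(2 * c - 1 for c in (r, g, b))

# A perfectly flat surface normal (0, 0, 1) is stored in the file
# as the RGB triple (0.5, 0.5, 1.0) - that typical light blue.
raw = (0.5, 0.5, 1.0)

print(to_vector(*raw))                    # (0.0, 0.0, 1.0) - flat, correct
print(to_vector(*map(srgb_decode, raw)))  # (~-0.57, ~-0.57, 1.0) - tilted, wrong
```

Linearizing turns a flat surface into one that leans hard toward one corner - which is why “Non-Color Data” exists.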


As long as you have colour management turned on - which it is, by default.

EDIT - Actually, that is oversimplifying too, but keep it turned on…

So, just to check: if I rendered out an image (in Cycles) with various textures, colors, shaders, etc., would the output be sRGB or linear?

AFAIK that depends on the file format you save to. Save to a 32-bit file format (EXR/HDR) and it will be linear; save to an 8/16-bit file format and it will be sRGB.
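This split also hints at why 8-bit files are gamma-encoded in the first place: the sRGB curve spends far more of the 256 available code values on the dark tones, where our eyes are most sensitive. A small plain-Python sketch counting the codes below 18% linear grey:

```python
def srgb_encode(x):
    """Linear light -> sRGB display value (both in the 0..1 range)."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

# How many of the 256 8-bit code values land at or below 18% linear grey?
linear_codes = sum(1 for i in range(256) if i / 255 <= 0.18)
srgb_codes = sum(1 for i in range(256) if i / 255 <= srgb_encode(0.18))

print(linear_codes, srgb_codes)   # 46 vs. 118 codes for the same dark range
```

A linear 8-bit file would give the darks only 46 steps and band badly, while 32-bit float has plenty of precision everywhere and can stay linear.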

Okay, thank you! The mystery has been solved (or at least enough for me to be competent in linear workflow)!

[NOTE: If you’ve come to this thread for a simple explanation of linear workflow, this is the end of it. If you want to read about linear files in environment textures and other fun stuff like that, keep scrolling down.]

What about using HDRIs to light the scene? Technically all HDRIs come in linear color space - so do you need to tell Blender not to linearize them, given that they already are? I don’t know how the de-gamma process works, whether it takes the image format into account or not. If it doesn’t, then, as with normal maps or spec maps, you would need to set the image to non-color data. If, on the other hand, you use an equirectangular .png or .jpg, you’d set it to color. But I don’t know if Blender handles that automatically.

I’m assuming you’d apply a gamma node to the environment texture if you used a .png or .jpg, but I don’t know. Blender may apply a gamma curve automatically, but I’m still not sure.

You have an option to apply the gamma curve in the Scene tab, under the Color Management menu. I tested a bit with an HDRI and a JPEG of the same HDRI. If I set the JPEG to non-color data, the image itself became darker and more contrasted. If I used the color data option it was fine, albeit less punchy, since it was only an 8-bit image. So I think we should use the non-color data option for HDRIs, just to avoid any potential surprises in the end.

From my understanding, the image format is indeed all Blender has to go on when deciding whether to linearize textures: HDR format = take as is, LDR format = linearize automatically.

The “non-color data” option AFAIK only applies to cases in which data is encoded in color form. In a normal map, very specific shades of red/green/blue mean very specific surface directions. That’s not the case with an HDRI, which is still “just” color (and brightness) information after all.

That is true, but the non-color data option tells Blender not to linearize (“de-gamma”) the image, because the image itself is already in linear color space. And don’t forget that what we get from HDRIs is lighting information, which Blender ultimately treats as a light source: it takes the pixel values and derives the color and intensity of the rays (someone can correct me on that if I’m wrong). Anyway, I don’t think it matters whether we set HDRIs to non-color or color data. I think the devs thought about that and decided to set the option for us automatically, based on the file format, which eliminates a lot of confusion in the community. For the future, though, I’ve decided to set all my HDRIs to non-color data as a precaution.

Considering this post is very old, I thought it would be nice to suggest a fresher Blender resource for those who are still struggling to understand this confusing topic:

https://docs.blender.org/manual/en/latest/render/color_management.html

It’s simple and very understandable.

If you feel like going deeper, Adobe also covers the subject in the After Effects documentation, and I especially recommend reading it if you plan to use Ae for compositing:

https://helpx.adobe.com/after-effects/using/color-management.html#:~:text=Enable%20color%20management%20by%20specifying,from%20the%20drop-down%20list.

I hope that makes the research easier for the community. Cheers!
