Measuring Noise in Cycles Renders

Here’s a simple, computationally cheap, and reliable way to measure noise in Cycles renders. I’m hoping it will get adopted into the main Cycles code.



This could DRAMATICALLY decrease rendering time, because it finds the areas that need more sampling. It would also help animations, because you keep rendering until you hit your sampling threshold instead of stopping after a fixed number of render passes.

The analysis takes less than 1 second for any image, and as I’ve shown it only needs to run for far less than 1% of the render time. What I’m measuring is NOT really “noise” per se, but the rate of change of pixel RGB values over time. Whether or not the renderer or compositor is making a “noisy” image on purpose due to complex textures, the fundamental measurement, since we have an iterative process, is the rate of change of the pixel values.

Here are the files:
http://dl.dropbox.com/u/10295145/jmil_MeasuringNoiseInCycles.zip

For example, one need only do a boolean subtract of one render from a previous render, convert the result to B&W, then analyze the histogram. As you can see from the attached files (.blend and analysis), this ALWAYS follows a very predictable pattern.
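In practice the whole analysis boils down to a difference image and a histogram. Here is a rough numpy sketch of that step; the filenames are just placeholders for two renders of the same scene saved at different sample counts:

```python
import numpy as np
from PIL import Image

# Two renders of the same scene at different sample counts
# (hypothetical filenames, e.g. exported at 4 and 8 samples).
a = np.asarray(Image.open("render_004.png").convert("RGB"), dtype=np.float32)
b = np.asarray(Image.open("render_008.png").convert("RGB"), dtype=np.float32)

# "Boolean subtract" one render from the other: absolute per-channel
# difference, then convert to B&W by averaging the channels.
diff = np.abs(b - a).mean(axis=2)

# Histogram of the difference image: most pixels pile up near zero as the
# render converges, and the bright tail shrinks in a very predictable way.
hist, edges = np.histogram(diff, bins=256, range=(0, 255))
print("converged-ish pixels:", hist[:16].sum())
print("pixels still changing a lot:", hist[64:].sum())
```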

Note also that these “booleaned” pixels actually can tell you where you should be sampling more, as these areas are going to be changing the fastest.

Hope that makes sense…

Anyway, you can see where the “noise”, or the undersampled areas, are very very easily. Look at the attached file “MeasuringNoise_008pass-004pass.png” to see what I mean. It is the 4-sample-pass image boolean-subtracted from the 8-sample-pass image, then converted to B&W.



Shouldn’t this be VERY easy to make part of the render? Maybe as an “Advanced” option?

Something like “Continue rendering until fewer than 10 pixels have a noise value over 64” as a default advanced setting.
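Reading that rule as “stop once fewer than 10 pixels still change by more than 64 (on a 0–255 scale) between passes”, the check itself is tiny. A sketch, with made-up parameter names rather than any actual Cycles setting:

```python
import numpy as np

def keep_rendering(prev_pass, curr_pass, max_noisy_pixels=10, noise_threshold=64):
    """Return True while the render should continue.

    Stops once fewer than `max_noisy_pixels` pixels change by more than
    `noise_threshold` (0-255 scale) between successive passes.
    """
    diff = np.abs(curr_pass.astype(np.float32) -
                  prev_pass.astype(np.float32)).mean(axis=2)
    noisy_pixels = np.count_nonzero(diff > noise_threshold)
    return noisy_pixels >= max_noisy_pixels
```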

So, if I read this right, this is a patch that adds a form of noise-aware sampling that stops sampling regions of the image below a certain noise threshold and focuses all subsequent sampling on smaller and smaller areas until the image is practically noise-free? If so, this is very good news, as there have been many times I’ve wished Cycles already had something like that implemented. :slight_smile:

I haven’t tried the file, but if it is indeed a patch, then perhaps add an option to only run the analysis every X passes. More complex scenes may take some time before any regions can be declared noise-free, and it would also prevent areas at the start of the render from being flagged as noise-free just because they look almost converged while actually being incorrect or not yet filled with samples.

I don’t see any patch.

I wonder if you could use this along with some type of profile that lets the darker values stop while they are still noisier and makes the lighter values gradually less noisy, so you could approximate the way noise looks on negative film.

Oh, if it isn’t a patch and this is mainly a compositing technique or feature proposal, then sorry about that.

Probably a combination of it being late when I posted that and figuring there must be a reason why this thread was in the Blender Code and Development section rather than in, say, Blender Tests (where most of the compositing and usage techniques are posted) or General Discussion (where most of the feature proposals are posted).

Thanks for the feedback. No word yet from the lead developers; I’ve posted it here as well:
http://wiki.blender.org/index.php/Dev:2.6/Source/Render/Cycles/ReducingNoise

I think it’s better that you post this to your user page instead and let the developers decide what is supposed to go into the Cycles wiki. Otherwise it will be difficult to tell your ideas from the ‘official’ ones…

I’d say that if there were a way to “mask” (read: exclude from the computation) the pixels you don’t need to sample anymore, then this technique would be awesome. It should also be portable to the GPU, though…
As far as I understand, this method would let you set the global “graininess” (or smoothness) of the final image, and it could also boost render speed a lot by recursively excluding from the computation all the pixels that reach the threshold set for the final quality, giving full CPU/GPU power to the remaining (noisier) ones.
For example, in a scene with a large flat surface and a little glass ball producing caustics, the surface would soon be cleared and the sampling would be concentrated only on the small caustics area, optimizing calculation and time.
Sounds brilliant!

I’d say it should be possible and easy: mix the threshold mask again with a B&W version of the render.
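For what it’s worth, the masking loop people are describing could look something like this. It assumes a hypothetical `render_pass(active_mask)` hook that adds one sample to every pixel where the mask is True and returns the accumulated image; no such hook exists in Cycles today, so this is only a sketch of the idea:

```python
import numpy as np

def adaptive_render(render_pass, width, height, threshold=4.0, max_passes=1000):
    # Every pixel starts out "noisy" and therefore active.
    active = np.ones((height, width), dtype=bool)
    prev = render_pass(active)
    for _ in range(max_passes):
        curr = render_pass(active)
        # B&W rate of change per pixel between successive passes.
        change = np.abs(curr - prev).mean(axis=2)
        # Once a pixel drops below the threshold it stays excluded,
        # so all remaining CPU/GPU power goes to the noisier pixels.
        active &= change > threshold
        prev = curr
        if not active.any():
            break
    return prev
```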

But isn’t the render global? That is, doesn’t it require successively finer sampling of the whole image to refine those parts that you say “need” it more?

That would be awesome, because I like the way the noise looks, it’s just in the wrong place. I wonder how difficult it is to implement?

This might be the biggest optimization ever possible for Cycles, more important than all the other ones listed in the development list. Since the image would be less noisy you could finish a render in 50% or even 25% of the time, an incredible speedup. It will also eliminate fireflies completely.

I think the developers can improve this idea further and optimize it.
We need to put some pressure on the developers to focus their attention on this feature first !

It’d get my vote that’s for sure.

There should be some papers out there, but it is a big task to add this kind of sampler. For sure it is not on the priority list; Cycles still lacks more important features.

By the way, noise-aware algorithms already exist. They are called noise-priority rendering, noise-aware rendering, or adaptive sampling algorithms. On the LuxRender forum you can find a lot of topics on this; I think they have already developed noise-aware MLT.

http://www.luxrender.net/forum/viewtopic.php?f=8&t=6855&hilit=noise+aware

I think the main problem is not adding bias to the rendering: you are trying to solve an integral, and no matter which method you use it should give the same result. This is why all the unbiased methods give the exact same result (some faster, some slower).

By giving higher sampling priority to certain areas of your image, you are not introducing bias. Different unbiased integrators will not give the same result; they will theoretically converge to the same result. In practice, however, different integrators will give dramatically different results (within any reasonable timeframe) in many cases.

The main problem will probably be the way sampling works in Cycles: the kernels likely run on the whole image, with every pixel being a work item. It would then not be efficient to run the kernels on individual pixels, because of the way SIMD parallelism on the GPU works, so you would probably use a 4x4 block of pixels or something like that. Then you’d probably have to worry about visible fringes at those block borders, maybe do some blending, etc.
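To illustrate the block idea (just a conceptual sketch, not how the Cycles kernels are actually organized): group the difference image into 4x4 blocks and keep a whole block active until its worst pixel falls below the threshold.

```python
import numpy as np

def block_active_mask(diff, block=4, threshold=4.0):
    """Per-pixel active mask decided at block granularity.

    diff: 2D array of per-pixel change (the B&W difference image).
    A block stays active as long as its worst pixel exceeds the threshold.
    """
    h, w = diff.shape
    # Pad so the image divides evenly into blocks.
    pad_h, pad_w = (-h) % block, (-w) % block
    padded = np.pad(diff, ((0, pad_h), (0, pad_w)), mode="edge")
    blocks = padded.reshape(padded.shape[0] // block, block,
                            padded.shape[1] // block, block)
    block_max = blocks.max(axis=(1, 3))          # worst pixel in each block
    mask = np.repeat(np.repeat(block_max > threshold, block, axis=0),
                     block, axis=1)
    return mask[:h, :w]                          # crop the padding back off
```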

I think rendering an image and, after some passes, comparing it with the previous one will always leave some noise.
I think it is better this way: let the render cook for a number of samples defined by the user. Then, for every pixel, look at its neighbours, calculate the differences, and sum them all; that gives you a “weight” of differences with the neighbours for every pixel.
In the image you will have a pixel with the minimum weight and another with the maximum weight. A threshold defined by the user would then select the pixels whose weight is above that threshold, and rendering would continue on those pixels until the render ends.
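That per-pixel neighbour “weight” is easy to prototype. A rough sketch for a B&W version of the render (edges handled by replicating the border):

```python
import numpy as np

def neighbour_weight(gray):
    """Sum of absolute differences between each pixel and its 8 neighbours.

    gray: 2D float array (B&W version of the render).
    """
    padded = np.pad(gray, 1, mode="edge")
    weight = np.zeros_like(gray)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[1 + dy: 1 + dy + gray.shape[0],
                             1 + dx: 1 + dx + gray.shape[1]]
            weight += np.abs(gray - shifted)
    return weight

# Pixels above the user-defined threshold would keep receiving samples:
# keep_sampling = neighbour_weight(gray) > user_threshold
```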

I’ve seen this at the mailing list:
http://www.karsten-schwenk.de/papers/papers_noisered.html
The short film and the images are looking very good. Maybe this could be transferred to Cycles? :slight_smile:

The Sequence Editor has view options for analyzing images: vectorscope, luma waveform, and histogram.
There is also an add-on to extend these features … somewhere.

I had found a key combination to bring these up in the Image Editor as well … but I can’t recall how.

Yes, that was from Mike, I believe … I read through it and found it interesting … I am nerdy that way … It seems to be based on a blurring mechanism and not so much on how things are sampled, but it looked good to me … I’m no good with the higher mathematics involved in renderer stuff, so I am mostly just looking at the pretty images (although I did read it) … so I don’t know how feasible or useful it would actually be for animation …
…let’s see what Brecht thinks…

In the Image Editor it’s the T key. Sadly, the scopes are there but no image manipulation functionality is implemented … yet.

Do not dismiss these tools, however; they are and have been useful in many technical fields and in broadcasting for exactly this: finding and eliminating noise and color ‘wander’, etc.

If anyone develops an add-on or the like, they should consider adding it to these tools: zoom into areas of the image and blast away high-luma pixels, etc. However, would this still be ‘physically correct’? And how would anyone know when that point has been reached???