Measuring Noise in Cycles Renders

I think rendering an image and, after some passes, comparing it with the previous result will always leave noise behind.
I think it is better this way: let the render cook for a user-defined number of samples. Then, for every pixel, look at its neighbours, calculate the differences, and sum them all up; that gives you a noise "weight" for every pixel (sketch below).
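
To make it concrete, here is a standalone C++ sketch of that weight (not Cycles code; the grayscale buffer and function names are just for illustration):

```cpp
// Standalone sketch: per-pixel "noise weight" from neighbour differences.
// Not Cycles code -- a flat grayscale float buffer stands in for the render.
#include <cmath>
#include <cstdio>
#include <vector>

// Sum of absolute differences between a pixel and its 3x3 neighbours.
float neighbour_weight(const std::vector<float> &img, int w, int h, int x, int y)
{
    float weight = 0.0f;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = x + dx, ny = y + dy;
            if ((dx == 0 && dy == 0) || nx < 0 || ny < 0 || nx >= w || ny >= h)
                continue;
            weight += std::fabs(img[y * w + x] - img[ny * w + nx]);
        }
    }
    return weight;
}

int main()
{
    const int w = 4, h = 4;
    std::vector<float> img = {
        0.5f, 0.5f, 0.5f, 0.5f,
        0.5f, 0.9f, 0.5f, 0.5f,  // one "noisy" outlier pixel at (1, 1)
        0.5f, 0.5f, 0.5f, 0.5f,
        0.5f, 0.5f, 0.5f, 0.5f,
    };
    // High weight -> noisy pixel that deserves more samples.
    printf("flat pixel:  %.2f\n", neighbour_weight(img, w, h, 3, 3));
    printf("noisy pixel: %.2f\n", neighbour_weight(img, w, h, 1, 1));
    return 0;
}
```
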
Brecht says that this (my) method will fail :frowning:
I don't believe that; I demonstrated that it is really possible to detect NOISE

example:
3 samples: (100 + 99 + 101) / 3 = 100 (average value)
3 samples: (50 + 140 + 110) / 3 = 100 (average value)

as you see, the 1st pixel is stable (low variance)
and the 2nd pixel is unstable (high variance)

Cycles does not detect that; it only sums up the values.

and this is the key!
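
That is all the bookkeeping it takes; a minimal standalone sketch (not Cycles code, just sum and sum-of-squares per pixel, so the variance comes for free next to the mean):

```cpp
// Standalone sketch: track per-pixel variance, not just the running sum.
#include <cstdio>

struct PixelStats {
    double sum = 0.0, sum_sq = 0.0;
    int n = 0;

    void add(double sample) { sum += sample; sum_sq += sample * sample; ++n; }
    double mean() const { return sum / n; }
    // Variance as E[x^2] - E[x]^2 (biased form, fine for a heuristic).
    double variance() const { return sum_sq / n - mean() * mean(); }
};

int main()
{
    PixelStats stable, unstable;
    for (double s : {100.0, 99.0, 101.0}) stable.add(s);
    for (double s : {50.0, 140.0, 110.0}) unstable.add(s);
    // Both average to 100, but the variance separates them.
    printf("stable:   mean %.1f variance %.1f\n", stable.mean(), stable.variance());
    printf("unstable: mean %.1f variance %.1f\n", unstable.mean(), unstable.variance());
    return 0;
}
```
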

So I am working on it now and studying the Cycles code; I understand some parts of it.

I've seen this on the mailing list:
http://www.karsten-schwenk.de/papers…_noisered.html
The short film and the images look very good. Maybe this could be transferred to Cycles? :slight_smile:
this works a bit differently, but it looks interesting

When I read it, it sounded to me like you’re not taking as many variables into account as needed.

Generally when you look at it, you would have to take the color and normal data into account as well.

He mentioned that some areas will just be noisy in general, and that would be true for areas that have high-frequency color maps or bump maps (though the noise with color maps won't seem quite as obvious).

So, generally, calculating the average discrepancy between pixels in any one area would have to account for the inherent differences between pixels (as if the scene were rendered with basic shadeless color, combined with the normal data), so as to not be infinitely sampling areas that will always look noisy because of the material.
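
If I understand the suggestion, something along these lines is what it would look like, as a rough standalone sketch; the albedo pass, the k_albedo factor, and all function names are my own assumptions, not anything Cycles exposes:

```cpp
// Standalone sketch: discount "noise" that the material itself explains.
// The albedo buffer stands in for a shadeless-color pass; a real version
// would fold in the normal pass the same way.  All names are made up.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Local variation around (x, y): sum of |center - neighbour| over 3x3.
float local_variation(const std::vector<float> &buf, int w, int h, int x, int y)
{
    float v = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = x + dx, ny = y + dy;
            if ((dx || dy) && nx >= 0 && ny >= 0 && nx < w && ny < h)
                v += std::fabs(buf[y * w + x] - buf[ny * w + nx]);
        }
    return v;
}

// Beauty-pass variation minus what the albedo pass already predicts;
// k_albedo is a tuning guess.
float residual_noise(const std::vector<float> &beauty, const std::vector<float> &albedo,
                     int w, int h, int x, int y)
{
    const float k_albedo = 1.0f;
    return std::max(0.0f, local_variation(beauty, w, h, x, y)
                          - k_albedo * local_variation(albedo, w, h, x, y));
}

int main()
{
    const int w = 3, h = 3;
    std::vector<float> beauty = {0.2f, 0.8f, 0.2f,
                                 0.8f, 0.2f, 0.8f,
                                 0.2f, 0.8f, 0.2f};
    std::vector<float> albedo_tex = beauty;       // checkered texture explains the variation
    std::vector<float> albedo_flat(w * h, 0.5f);  // flat material: the variation is real noise
    printf("textured: %.2f\n", residual_noise(beauty, albedo_tex, w, h, 1, 1));
    printf("noisy:    %.2f\n", residual_noise(beauty, albedo_flat, w, h, 1, 1));
    return 0;
}
```
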

I have tested a compositing trick to reduce noise:


It uses three copies of the scene, each rendered with a different seed. First the three renders are mixed, and then a bilateral blur is applied to reduce the noise using adjacent pixels.
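
For anyone who wants to try the same idea outside the compositor, this is roughly the equivalent as a standalone C++ sketch (grayscale only; the sigma values are guesses, and Blender's actual Bilateral Blur node certainly differs in the details):

```cpp
// Standalone sketch: average three differently-seeded renders, then apply a
// simple bilateral blur (spatial + range weights).  Grayscale buffers only.
#include <cmath>
#include <vector>

std::vector<float> average3(const std::vector<float> &a, const std::vector<float> &b,
                            const std::vector<float> &c)
{
    std::vector<float> out(a.size());
    for (size_t i = 0; i < a.size(); ++i)
        out[i] = (a[i] + b[i] + c[i]) / 3.0f;
    return out;
}

// Minimal bilateral filter: nearby pixels with similar values get high weight,
// so edges (large value differences) are mostly preserved.
std::vector<float> bilateral(const std::vector<float> &img, int w, int h,
                             int radius, float sigma_space, float sigma_range)
{
    std::vector<float> out(img.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f, wsum = 0.0f;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h)
                        continue;
                    float ds = (dx * dx + dy * dy) / (2.0f * sigma_space * sigma_space);
                    float dr = img[ny * w + nx] - img[y * w + x];
                    float wgt = std::exp(-ds - dr * dr / (2.0f * sigma_range * sigma_range));
                    sum += wgt * img[ny * w + nx];
                    wsum += wgt;
                }
            out[y * w + x] = sum / wsum;  // wsum >= 1 (the center pixel itself)
        }
    return out;
}

int main()
{
    const int w = 8, h = 8;
    std::vector<float> a(w * h, 0.5f), b(w * h, 0.5f), c(w * h, 0.5f);
    a[27] = 1.0f;  // pretend each seed produced a different outlier
    b[36] = 0.0f;
    std::vector<float> mixed = average3(a, b, c);
    std::vector<float> smooth = bilateral(mixed, w, h, 2, 1.5f, 0.1f);
    (void)smooth;  // in a real test you would write this out as an image
    return 0;
}
```
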

For a mixture of translucent and transparent shaders, the result is this (three scenes × 100 samples each):


It took less time than this render (one scene, 300 samples), and the result in this case is worse:


Luxrender uses noise reduction in its tonemapping software. If a Blender add-on could do this (I don't know how), I think it would be useful.

Ace,

my method should also work with noisy areas (including high-frequency textures);
the trick is to find stable and unstable pixels

But like most post-processing tricks, you lose some of the more subtle edge details (or at least blur them).

The most obvious loss in edge fidelity can be seen on the left eye, where there's less difference between the edges; in the noisy image the edges are perfectly sharp, while you lose that in the composite.

That is what the noise-reduction paper posted earlier in the thread tries to solve; in scenes without a lot of specular or glossy materials it can succeed, given enough samples (but not so many that it wouldn't save you any time).

Congrats on that, and glad we both had the same idea! You are making fantastic progress with your code.

One day in class, the teacher assigned his students to write a composition, "If I Am a Manager". All the students began to write except one boy. The teacher went to him and asked the reason. "I am waiting for my secretary," was the boy's answer.

Apart from teachers, students and secretaries (???)…
Are any of you guys (hymie & TS1234) even still thinking of a way to implement this idea in Cycles?
I still think it's a clever technique, and at the very least it would mean a "final render quality" parameter, which would make it possible to have a bunch of different renders look the same (same level of smoothness), regardless of the render time.

I'm starting to wonder if things could work a little better if one implemented something like a dynamic screen-space importance map: it would initially set the importance based on the 'samples' parameter for objects and materials, then have regions dynamically change their values based on how many outliers/fireflies are in a certain area.

I would imagine it being somewhat like a greatly expanded version of what Mike Farny implemented for the environment; the second part could really help speed up caustic rendering.

Now, it wouldn't be full-on noise measuring, but if it worked, it could be a very useful optimization to have, which could then go on top of the noise-priority rendering if one can find an implementation that really speeds up rendering like originally proposed.
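
In very rough C++, the map I'm imagining would look something like this; every name and the firefly threshold here are made up, and nothing like this exists in Cycles:

```cpp
// Standalone sketch: a screen-space importance map that starts from a base
// value and is bumped up wherever fireflies (extreme outlier samples) land.
#include <algorithm>
#include <cstdio>
#include <vector>

struct ImportanceMap {
    int w, h;
    std::vector<float> importance;

    ImportanceMap(int w_, int h_, float base) : w(w_), h(h_), importance(w_ * h_, base) {}

    // Called per sample; "firefly" here just means far above the running mean.
    void report_sample(int x, int y, float value, float running_mean)
    {
        const float firefly_factor = 10.0f;  // made-up threshold
        if (value > firefly_factor * std::max(running_mean, 1e-4f))
            importance[y * w + x] = std::min(importance[y * w + x] * 1.5f, 16.0f);
    }

    // The sampler spends more samples where importance is high.
    int samples_for_pixel(int x, int y, int base_samples) const
    {
        return static_cast<int>(base_samples * importance[y * w + x]);
    }
};

int main()
{
    ImportanceMap map(64, 64, 1.0f);
    map.report_sample(10, 10, 50.0f, 0.5f);             // a firefly lands at (10, 10)
    printf("%d\n", map.samples_for_pixel(10, 10, 16));  // 24 instead of 16
    return 0;
}
```
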

Meanwhile, hoping for a GSoC surprise (fingers crossed), I found this paper: http://gilab.udg.edu/publ/private/container/publications/jaume-rigau/Entropy-based%20adaptive%20sampling.pdf

Oooh, killing aliasing with noise too. Cool.

Heh, the results of the classic contrast algorithm compared with entropy-based sampling are like night and day; it seems like this is the closest that anyone has gotten to a true, production-ready adaptive/noise-aware sampling algorithm.

Brecht did say last year that he's had trouble finding algorithms that work well with things like shadows; maybe this will be the thing he was looking for.
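
If I'm reading the paper right, the core criterion boils down to something like this (a hedged standalone sketch; the binning and constants are my own simplification, not the paper's exact formulation):

```cpp
// Standalone sketch: Shannon entropy of the luminance samples hitting a
// pixel as a refinement criterion (high entropy -> keep sampling).  A real
// criterion would also weight by the absolute spread of the samples.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

float sample_entropy(const std::vector<float> &samples, int bins = 8)
{
    // Histogram the samples over their own min/max range.
    float lo = samples[0], hi = samples[0];
    for (float s : samples) { lo = std::min(lo, s); hi = std::max(hi, s); }
    if (hi - lo < 1e-6f)
        return 0.0f;  // all samples agree: converged
    std::vector<int> hist(bins, 0);
    for (float s : samples)
        hist[std::min(int((s - lo) / (hi - lo) * bins), bins - 1)]++;
    float entropy = 0.0f;
    for (int c : hist) {
        if (c == 0) continue;
        float p = float(c) / samples.size();
        entropy -= p * std::log2(p);
    }
    return entropy;
}

int main()
{
    // The same stable/unstable pixels from earlier in the thread.
    printf("stable:   %.2f bits\n", sample_entropy({100, 99, 101, 100, 99, 101}));
    printf("unstable: %.2f bits\n", sample_entropy({50, 140, 110, 30, 160, 90}));
    return 0;
}
```
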

I just hate that all the papers are written in two columns. Viewing one full-screen, I have to continually use the mouse instead of reading for a long stretch on a sofa.

But there is the problem of the formulas: in a single column, formulas would waste space. Perhaps they could be put to the right (same with images), so it would be one column, and sometimes two when formulas or images appear.

Though you have your points here, I still want you to consider the fact that sofas (aka couch, settee, divan, or canapé) were introduced into Western Europe by the Ottomans.
Many variations have been made through the centuries, but we still can't render them with adaptive sampling in Cycles.

Anyone still working on this? I find it really interesting; I'm going through some of the papers now. I almost feel like taking a dive into the Cycles source code to see if I can get a grasp of it :slight_smile: Cycles needs to get faster/more efficient!

I thought about another thing:

The noise comes from the fact that ALL GI shaders are sampled (a clear glass/glossy is IMMEDIATELY noise-free).
Why not add a lower-res irradiance approximation to the renderer (I know it IS biased GI) as an OPTION for animators? It would be extremely fast and you wouldn't notice the difference (although biased).
All renderers use something like this: PRMan with its Reyes GI, Mray with its FG approach, etc…
But the MOST important thing would be that it is INSANELY fast on a GPU! At 64 samples, completely noise-free! And 64 AA samples are enough for any edge or texture. Also, don't forget that Cycles is an animation-biased RGB renderer, so some more optional biasing that you don't notice is allowed.
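
To be clear about what I mean, here is a toy standalone sketch (nothing to do with how PRMan or mental ray actually implement it): bake indirect light into a coarse grid once, then interpolate it during shading instead of tracing GI rays per sample.

```cpp
// Toy sketch of the idea: a coarse 2D irradiance grid with bilinear lookup.
// A real irradiance cache is 3D, view-independent, and far more careful.
#include <algorithm>
#include <cstdio>
#include <vector>

struct IrradianceGrid {
    int w, h;
    std::vector<float> cells;  // precomputed (biased) irradiance per cell

    IrradianceGrid(int w_, int h_) : w(w_), h(h_), cells(w_ * h_, 0.0f) {}

    // Bilinear interpolation between the four surrounding cells:
    // smooth by construction, so no per-sample GI noise.
    float lookup(float u, float v) const  // u, v in [0, 1]
    {
        float fx = u * (w - 1), fy = v * (h - 1);
        int x0 = static_cast<int>(fx), y0 = static_cast<int>(fy);
        int x1 = std::min(x0 + 1, w - 1), y1 = std::min(y0 + 1, h - 1);
        float tx = fx - x0, ty = fy - y0;
        float top = cells[y0 * w + x0] * (1 - tx) + cells[y0 * w + x1] * tx;
        float bot = cells[y1 * w + x0] * (1 - tx) + cells[y1 * w + x1] * tx;
        return top * (1 - ty) + bot * ty;
    }
};

int main()
{
    IrradianceGrid grid(16, 16);
    grid.cells[0] = 1.0f;  // pretend one bright cell was baked
    printf("%.2f\n", grid.lookup(0.02f, 0.02f));  // smooth, noise-free value
    return 0;
}
```
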

The problem with techniques like irradiance caching is that they lead to artifacts that are both harder to control and less temporally stable, which makes them problematic, particularly for animations. High-frequency noise, on the other hand, can to some degree be removed as a post-process (potentially reusing samples from neighboring frames in an animation).

That's the reason why many modern renderers (like Arnold, and practically all GPU raytracers) are going with brute-force GI. It's more expensive, but predictable. Pixar is also looking to switch to a completely Monte Carlo based solution.

Of course, nothing is stopping anybody from implementing it; it's just that nobody is doing it. There are still a couple of vital features left to implement before looking into optimization, anyway.

EDIT:

> And 64 AA samples are enough for any edge or texture. Also, don't forget that Cycles is an animation-biased RGB renderer, so some more optional biasing that you don't notice is allowed.

Cycles isn't biased, but it doesn't really matter anyway. If there were some technique that was biased but didn't have any other obvious drawbacks, it would be the go-to technique for everyone. The reality is that no such technique exists.

I really want this feature; has there been any progress?

Yes, or at least something like it: https://developer.blender.org/T38401

Thanks for directing me. I really want this feature to be incorporated into a proper release soon; I feel like it'll make a lot of the renders I do go a lot faster :).