Cycles proposal: a way to increase performance

What I’m talking about is this.

Basically, the idea is to render two images in one session, compare them to measure noise levels, and disable/stop tiles when a certain convergence threshold is met.

Here’s how it would be done.

  • When starting a Cycles render, two image buffers would accumulate in the background (one with the specified seed and one with seed + 1).
  • When each sample is complete in both image buffers, they are combined to create the image you see in the viewer.
  • Every N samples, the noise level would be measured as (buffer1 - buffer2) + (buffer2 - buffer1), which gives a good measurement of how distant the color values in the two buffers are from each other. Each difference would be clamped at 0 so the two terms can't simply cancel each other out, keeping the result non-negative.
  • The advantage of this is that the noise buffer becomes more accurate over time, and converging areas would see their noise value decrease (we may not even need to expose an interval setting to the user). We could then measure the maximum distance value among all the pixels in each tile. If the maximum value found in a tile falls below a certain threshold, the tile is declared converged and is either stopped in tile-render mode or skipped over in progressive-refine mode (see the sketch after this list).
  • The advantages would be as follows: it works with any sampler that fits the tile system (Sobol, CMJ, Metropolis if and when implemented, you name it). It may be more straightforward to implement than true adaptive sampling and could bring a more tangible render-time reduction. It may also slightly improve sample distribution by reducing the visible clusters the random number generator can produce, and the implementation shouldn't be too involved since no fancy algorithms are needed.
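Here's a rough sketch of how the per-pixel check could work (Python/NumPy just for illustration; the function names and the 0.01 threshold are made up, and the real thing would live in the Cycles tile code):

```python
import numpy as np

def noise_level(buf_a, buf_b):
    """Per-pixel distance between the two seed buffers.

    Clamping each difference at 0 before summing makes the sum equal
    to the absolute difference |buf_a - buf_b|, so the two terms
    can't cancel each other out.
    """
    return np.maximum(buf_a - buf_b, 0.0) + np.maximum(buf_b - buf_a, 0.0)

def tile_converged(buf_a, buf_b, threshold=0.01):
    """A tile counts as converged once even its worst (noisiest)
    pixel is below the threshold."""
    return noise_level(buf_a, buf_b).max() < threshold

# Hypothetical use, every N samples and per tile:
#   if tile_converged(tile_buf_seed, tile_buf_seed_plus_1):
#       stop the tile (tile mode) or skip it (progressive refine)
```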

What do you think, DingTo, Lukas?

You mean standard deviation applied to sampling?
Maybe better to do it on a tile basis instead of the whole image (requires less memory),
i.e. store every Nth historic tile and see how it converges towards a certain value, based on statistics.
Statistics won't lie (hehe), but hmm, it might work, like seeing that a render is nearing its final color.

But then I don't think having two image seeds would matter; it would just mean one set has 20 samples while two other sets have 10 samples each.
It would be equally accurate (variance decreases with sample count)… By the way, it's not necessary to store all historic tiles; I think eight Nth-sample historic tiles would be enough, and they could be weighted too (the last render getting more weight than the first), as in the sketch below.
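A rough sketch of what I mean, in Python for illustration (all names are made up):

```python
import numpy as np

def tile_trend(history, weights=None):
    """Weighted mean change between consecutive tile snapshots.

    `history` holds a handful of tile buffers saved every Nth sample;
    later snapshots are weighted more heavily than earlier ones.
    """
    if len(history) < 2:
        return float("inf")  # not enough data to judge convergence yet
    diffs = [np.abs(b - a).mean() for a, b in zip(history, history[1:])]
    if weights is None:
        weights = np.arange(1, len(diffs) + 1)  # last diff counts most
    return float(np.average(diffs, weights=weights))

# A tile could be declared converged once tile_trend(history)
# drops below a user-set threshold.
```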

…wasn't Lukas already working on something new?

When rendering in Cycles, the first 20 seconds or so are build-up time where only the CPU is working. The CPU could handle this kind of work while the GPU is rendering.

Wouldn’t that operation always be 0?

What you just described is basically similar to measuring variance (the deviation of samples from a mean value), but with just a few samples per pixel. In practice you could use a statistical formula such as σ² = Mean(x²) − Mean(x)² to calculate variance over the whole sample history.
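For illustration, keeping running moments per pixel could look something like this (a sketch, not actual Cycles code; the class and method names are made up):

```python
import numpy as np

class PixelStats:
    """Accumulates running per-pixel moments over the sample history."""

    def __init__(self, shape):
        self.n = 0
        self.sum_x = np.zeros(shape)   # running sum of samples
        self.sum_x2 = np.zeros(shape)  # running sum of squared samples

    def add_sample(self, x):
        self.n += 1
        self.sum_x += x
        self.sum_x2 += x * x

    def variance(self):
        # sigma^2 = Mean(x^2) - Mean(x)^2
        mean = self.sum_x / self.n
        return self.sum_x2 / self.n - mean * mean
```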

Looks like you’ve got it figured out.

I’m guessing that means the proposal was accepted?

A developer understanding an idea and accepting a proposal are not the same thing, at least in my universe.

Not at all, as long as the minimum value is clamped at 0.

Actually this is essentially the same approach as the even-odd buffers that LuxRender uses for adaptive sampling. It can indeed be used for variance approximation, and it has some nice properties compared to the other approaches (especially since E[x²]−E[x]² tends to have numerical cancellation problems).
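To see the cancellation problem, here’s a quick NumPy sketch; in single precision, the one-pass formula loses the variance almost entirely when the mean is large:

```python
import numpy as np

rng = np.random.default_rng(0)
# float32 samples with a large mean and a true variance of about 1.0,
# roughly what a bright render buffer could look like.
x = (1e6 + rng.standard_normal(10000)).astype(np.float32)

# One-pass: E[x^2] - E[x]^2 -- subtracts two huge, nearly equal numbers.
one_pass = np.mean(x * x) - np.mean(x) ** 2

# Two-pass: E[(x - E[x])^2] -- numerically stable.
two_pass = np.mean((x - np.mean(x)) ** 2)

print(one_pass)  # typically garbage, can even come out negative
print(two_pass)  # close to the true variance of ~1.0
```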
That’s not really what is holding back adaptive sampling currently, though. For one, this has the same problem as every variance-based approach: imagine a scene with complex caustics. After e.g. 16 samples, the caustics might not appear at all yet (since the sampling is random). This approach would find that the spot looks the same in both images and stop sampling, therefore missing the caustic. In practice, you’d get perfectly converged spots around initial fireflies and no caustics elsewhere. Essentially: both images looking the same != both images being fully converged.

But the main reason is that in the near(ish) future, Cycles will get some kind of filtering/adaptive algorithm (the two usually go hand in hand in recent algorithms). However, if we implement a “good enough” adaptive strategy now, we have to stick with it until the next compatibility-breaking update (which will take some time, since 2.8 has only just started). Therefore, it’s a tradeoff between okay-ish AS now and (hopefully) great AS in the (again, hopefully) not-so-distant future…

I already tried to say that, but for computers abs(a - b) is enough… no need to square.

How about the engine not even starting the comparison process until after 100 samples or so? If a caustic hasn’t started forming by then, it probably wasn’t going to have a chance of converging anyway.

Plus, it should be less of a problem if you’re making use of ‘filter glossy’ (which greatly improves the chance of Cycles generating nice caustic effects).

It may even become a non-issue if and when Cycles becomes capable of Metropolis sampling (given its proficiency with caustics).

However, you obviously know the render code better than I do, so if the recently published algorithms have reached the point where the vast majority of scenes render far faster, then I understand.