First of all, I have never coded for Blender and don’t know if I ever will, so this is a suggestion for other developers, if that’s alright in this forum section. I only wish to request a simple optimization for the Cycles rendering engine, which came to mind as I again spent hours trying to get less noise without huge render times. If this little feature could be added, I believe render times could be cut drastically with barely any extra noise being introduced. If the idea can’t be accepted into mainline for any reason, perhaps it could be attempted as an addon.
The problem I see is this: For both the branched and non-branched path tracer, you specify a number of samples. Cycles will take that many samples for each pixel or branch, regardless of whether they improve quality or not. So while taking 100 samples in a noisy area lit by few lights will cause each sample to reduce noise, taking 100 samples in an area which is barely (or not at all) lit is a waste of resources, as the first 10 samples will already define the final result and the remaining 90 are taken without visibly improving anything.
My suggestion is to allow a threshold at which we stop sampling a pixel once additional samples stop improving quality. This can be detected by checking how much the color of that pixel has changed over a given number of samples. If more than X samples have been run but the color of that pixel hasn’t changed by more than Y, sampling for that pixel is ended. The two optional settings which should be introduced are the number of samples between checks, and the color difference below which sampling stops.
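To make the stopping test concrete, here is a minimal sketch of what I mean (names are my own for illustration, not anything from the Cycles source): compare the accumulated pixel color now with the color from a few samples ago, and stop once no channel has moved by more than the threshold.

```python
def should_stop(prev_color, cur_color, threshold=0.1):
    """Return True if sampling for this pixel can stop.

    prev_color and cur_color are (r, g, b) tuples of the running
    average color, taken `interval` samples apart. We stop when the
    largest per-channel change falls below `threshold`.
    """
    return max(abs(c - p) for p, c in zip(prev_color, cur_color)) < threshold
```

So a jump from (0.5, 0.5, 0.5) to (0.75, 0.5, 0.5) keeps sampling going (0.25 change), while a move of only 0.025 on every channel stops it.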
A simplified example: The number of total samples is 100, while the threshold is a 0.1 color difference over the course of 5 samples. Cycles sends the first ray for that pixel, and gets an RGB color of 0.5,0.5,0.5. It then does 5 more samples and checks again; this time the color is 0.75,0.5,0.5 (more red), which means there’s a 0.25 difference for one channel and it should keep going. So it again does 5 samples in a row, and this time gets 0.875,0.5,0.5, meaning the difference is 0.125 and it keeps going again. 5 more samples are executed, and now the color is 0.9,0.5,0.5. That means a 0.025 difference, which this time is less than 0.1, so sampling is stopped and the current result remains the final color of that pixel.
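The whole per-pixel loop could look something like the sketch below. Again, this is only illustrative pseudocode in Python, not actual Cycles code; `sample_once` stands in for whatever function would fire one ray for the pixel, and the setting names are made up.

```python
def render_pixel(sample_once, max_samples=100, interval=5, threshold=0.1):
    """Adaptively sample one pixel (illustrative only, not Cycles code).

    sample_once() returns one (r, g, b) sample. We keep a running
    average and stop early when no channel has changed by more than
    `threshold` since the check `interval` samples ago.
    Returns (final_color, samples_used).
    """
    total = [0.0, 0.0, 0.0]
    avg = [0.0, 0.0, 0.0]
    n = 0
    prev_avg = None
    while n < max_samples:
        for _ in range(interval):
            s = sample_once()
            for i in range(3):
                total[i] += s[i]
            n += 1
        avg = [t / n for t in total]
        if prev_avg is not None:
            if max(abs(a - p) for a, p in zip(avg, prev_avg)) < threshold:
                break  # converged: more samples would barely change the pixel
        prev_avg = avg
    return avg, n
```

For a pixel in a barely lit area, where every sample comes back nearly identical, this stops after the second check (10 samples) instead of burning through all 100, which is exactly the saving I’m hoping for.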
I thought of this after I noticed how, as samples are taken, the quality of the image improves more and more slowly… starting from very grainy, quickly becoming acceptable, soon becoming pretty clean, and from that point on you only notice the ongoing improvement if you look very closely at the screen. But the process is different for each part of the image, based on the lighting and surface shaders… any part can take a different number of samples to reach the same level of clarity. If we could harness this gradual improvement per pixel to define where sampling should stop, I think it would be an amazing optimization! Could this be done, please?