How effective is rendering with fewer samples at a higher resolution and then down-scaling?

Run some info by me, link me to some tests.
I’m very curious whether this will reduce the effective noise that one would see after down-sampling.

I’ve heard good things about this, but I’d like to know more so I can possibly put this into practice.

My overall goal here is to achieve renders with similar render times but less noise overall.

It’s called supersampling.

And it’s great for anti-aliasing…
I don’t know about noise…

hmmm

Basically it then becomes dependent on the scaling method, but what if you optimized it there?
…thinking

So then, for example, you basically get 4 pixels for every 1 final pixel, and the 8 surrounding pixels would be 32 pixels.

I am thinking that with a little coding one could check whether each pixel is within some HSL distance of the mean, and if so take an average. … Or, from all those pixels, take only the ones that are within HSL range and then use their average.

Spike detection might also be solvable from the 32 pixels; I’m not sure, but if one of the 4 pixels is above a threshold, take the average of the other 3 (rough sketch below).

I could code that, but it wouldn’t be in C++ or Python. Unless I knew how to address image pixels in Blender Python, in which case I might be able to do it in Python.

  • This is why the compositor should have a Python script node.
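Here is a rough, untested sketch of the block-averaging idea in plain Python with Pillow, just to pin down the logic outside Blender. The HSL distance metric and the 0.15 threshold are my own guesses and would need tuning per image:

```python
import colorsys
from PIL import Image

def hsl_distance(a, b):
    # Euclidean distance between two RGB pixels in HLS space (0..1 floats)
    ha, la, sa = colorsys.rgb_to_hls(*[c / 255.0 for c in a])
    hb, lb, sb = colorsys.rgb_to_hls(*[c / 255.0 for c in b])
    dh = min(abs(ha - hb), 1.0 - abs(ha - hb))  # hue wraps around
    return (dh ** 2 + (la - lb) ** 2 + (sa - sb) ** 2) ** 0.5

def downscale_2x2(img, threshold=0.15):
    # Average each 2x2 block into one pixel, but drop block pixels whose
    # HSL distance from the block mean exceeds the threshold (spike rejection).
    src = img.convert("RGB")
    w, h = src.size
    out = Image.new("RGB", (w // 2, h // 2))
    px = src.load()
    for y in range(0, h - 1, 2):
        for x in range(0, w - 1, 2):
            block = [px[x, y], px[x + 1, y], px[x, y + 1], px[x + 1, y + 1]]
            mean = tuple(sum(c) // 4 for c in zip(*block))
            kept = [p for p in block if hsl_distance(p, mean) <= threshold]
            if not kept:  # everything rejected as an outlier: fall back to the mean
                kept = block
            out.putpixel((x // 2, y // 2),
                         tuple(sum(c) // len(kept) for c in zip(*kept)))
    return out

# downscale_2x2(Image.open("render_4k.png")).save("render_hd.png")
```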

Not really. Whether you render at 2x the resolution (so 4x the pixel count) or at 4x the samples per pixel, the total number of samples taken will be the same.

However, if you have outliers (hot pixels), overbright pixels (beyond the dynamic range of 24-bit images) or otherwise aliased content (maybe due to compositing) in the final image, it can make sense to downsample the (tonemapped, LDR) result.

Without actually testing it my assumption is that there would be no real benefit in terms of speed vs noise.

Let’s say you render a 1920x1080 image at 128 samples and downscale it to 960x540. If the downscale averages each 2x2 group of 4 pixels into a single pixel, that’s essentially an average of 512 samples (128 samples x 4 pixels).

That does mean a 1920x1080 image at 128 samples downscaled to 960x540 would have 4 times the number of samples of a 960x540 image at 128 samples. However, a 1920x1080 image would take about 4 times longer to render than a 960x540 image.

So it’s very likely the amount of noise and the render time of a 1920x1080 image at 128 samples would be roughly equal to those of a 960x540 image at 512 samples.
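If you want to convince yourself of that without rendering anything, here is a quick numerical sanity check (under the simplifying assumption that samples are independent per pixel): Monte Carlo noise falls off as 1/sqrt(N), so averaging four 128-sample estimates has the same spread as one 512-sample estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 10_000

# one pixel estimated directly from 512 samples
direct = rng.standard_normal((trials, 512)).mean(axis=1)

# four neighbouring pixels of 128 samples each, averaged together afterwards
grouped = rng.standard_normal((trials, 4, 128)).mean(axis=2).mean(axis=1)

print(direct.std(), grouped.std())  # both come out near 1/sqrt(512) ~ 0.0442
```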

The only real benefit would likely be improved anti-aliasing, as fdfxd says.

I do think it can reduce noise/hot pixels with the right math, but I need to know how to access pixels in Blender Python.
If anyone knows how to get/set a pixel of a render result in Python, please let me know.
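As far as I know, the ‘Render Result’ image doesn’t expose its pixels to Python at all; the usual workaround is to add a Viewer node in the compositor, render, and read its output instead. Something along these lines (a sketch, and the ‘Viewer Node’ image only exists once a Viewer node has received a render):

```python
import bpy

viewer = bpy.data.images['Viewer Node']  # requires a compositor Viewer node
w, h = viewer.size
px = list(viewer.pixels)  # copy once into a list; direct indexing is slow

def get_pixel(x, y):
    # RGBA floats of pixel (x, y), origin at the bottom-left
    i = 4 * (y * w + x)
    return px[i:i + 4]

# To write results back, create a new image and assign its pixels in one go:
out = bpy.data.images.new("filtered", width=w, height=h)
out.pixels[:] = px  # assign the full flat RGBA list
```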

Don’t forget, when calculating the time it takes to render, to include the time required to downres and re-render the sequence. It might not seem like a lot, but it all adds up.

What if you apply a denoise filter in post? Normally that results in a loss of detail, but if you’ve got four times the pixels, you can avoid the detail loss, or at least mitigate it somewhat.
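For anyone who wants to experiment with that idea outside of Resolve, a minimal version of the workflow with OpenCV might look like this; the non-local-means filter and the strength values are placeholders to tune per shot, not a recommendation:

```python
import cv2

img = cv2.imread("render_4k.png")  # the 3840x2160 render
den = cv2.fastNlMeansDenoisingColored(img, None, h=6, hColor=6,
                                      templateWindowSize=7, searchWindowSize=21)
# area interpolation averages pixel groups, which is what we want when downscaling
hd = cv2.resize(den, (1920, 1080), interpolation=cv2.INTER_AREA)
cv2.imwrite("render_hd_denoised.png", hd)
```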

So I started experimenting with this, since I am working on a project that involves a lot of volumetric lighting, which gives it a really bad noise problem and therefore requires a disproportionate amount of sampling. I managed to get this:

I used 1200 samples and rendered it at 1920x1080. After rendering it, I put it into Resolve and applied some noise reduction. Of particular note is the area surrounding the light, which was (and still is, to a degree) a bit noisy. The original render was still a bit rough, so I used a power window on it to apply some really strong noise reduction that I didn’t want used anywhere other than where it was absolutely needed. I also applied a little bit of color grading for aesthetic reasons. Anyway, I rendered the same frame out at 3840x2160, this time at 300 samples (a quarter of the samples for four times the pixels), and applied all the same effects in post, with the only difference being that I downressed it to HD after everything else, and this was the result:

You might notice that for some reason the downressed version is a bit darker and a bit less contrasty than the HD one. This is not a result of the post effects I did but carries through from the original render. I can somewhat mitigate it by color correcting to match, so it’s not a huge deal. In the end the verdict is that the 1200-sample image took two hours and fifteen minutes, while the 300-sample image took two hours and six minutes. That nine-minute difference is about a six percent time savings. Add in the extra second per frame it took me to render out the downressed and denoised plate, and we are looking at a definite improvement; nothing dramatic, but it does add up.

As a side note, I was actually quite happy with the results from using Resolve’s noise reduction capabilities and am wondering if I can use it to knock more time off of my renders.