[tackling noise] samples vs resolution

I’ve always known, when shooting video (with a camera), that if you want to improve quality relatively easily, you should film in RAW and film large, then shrink it. This way, many smaller anomalies disappear, while at the same time making the equipment look better.

Now, how about applying this to ray tracing?

A lot of the time there will be a lot of noise in a render, but if we increase the resolution and then downscale the result, it achieves a similar effect.
Which way would you prefer? Try it out, you may find something new.
(I’m too lazy to write up my research on render times for resolution vs samples.)
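As a toy sketch of the idea above (pure NumPy, nothing Cycles-specific — the noise model is an assumption for illustration), here is what supersample-then-downscale does to per-pixel noise:

```python
import numpy as np

# Toy illustration: simulate a noisy render at 2x resolution,
# then box-downscale by averaging each 2x2 block into one pixel.
rng = np.random.default_rng(0)

true_value = 0.5                                       # the "clean" pixel value
hires = true_value + rng.normal(0, 0.2, size=(8, 8))   # noisy 2x render

# Downscale: average every 2x2 block into one pixel (simple box filter).
lowres = hires.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# The downscaled image's noise (std dev) is roughly half the hi-res noise,
# because averaging 4 independent samples divides std dev by sqrt(4) = 2.
print(hires.std(), lowres.std())
```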


The fly in your ointment is that a shrunk picture of noise looks even noisier.

Downsampling is equivalent to adding more samples per pixel. If you render 4x more pixels, there is no practical speedup compared to rendering with 4x more samples. In the real world, more samples equals more light reaching the sensor. More light can come from a longer exposure, a wider aperture, or a bigger area per pixel. Downscaling essentially averages light from a bigger area.
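The "no free lunch" point above can be checked numerically. A minimal Monte Carlo sketch (toy noise model, not an actual path tracer): averaging a 2x2 block of 1-sample pixels is statistically the same estimator as one pixel rendered with 4 samples.

```python
import numpy as np

rng = np.random.default_rng(1)
trials, true_value, noise = 100_000, 0.5, 0.2

# (a) one low-res pixel, 4 samples, averaged:
four_spp = rng.normal(true_value, noise, size=(trials, 4)).mean(axis=1)

# (b) a 2x2 block of 1-sample pixels, box-downscaled into one pixel:
block = rng.normal(true_value, noise, size=(trials, 2, 2))
downscaled = block.mean(axis=(1, 2))

# Both estimators have std dev noise / sqrt(4) = 0.1: same noise level,
# same total sample count, so neither approach is cheaper than the other.
print(four_spp.std(), downscaled.std())
```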

This has a very close analog in the real world with a camera. If you are trying to take a picture in low light, you can do two things: increase the speed of the film (ASA) or increase the exposure time. For rendering, film speed means more/faster CPUs or GPUs; longer exposure time means more samples per pixel.

The best solution is probably some kind of denoiser, like what RenderMan has?

Hm, I think it might help to get a good camera, because I never do what you describe with mine.
For me, shooting 3 minutes produces around 1 GB of data, noise-free if the light conditions are OK.

And as for Blender and long render times, try the unofficial branch with adaptive sampling; it can get by with somewhat fewer samples.
In that branch, the noise error rate is used as a limiter to decide whether a tile can be finished before the (old, normal) max samples per tile has been reached. As far as I’ve tested, it’s the only way to really reduce render time, but read its whole thread so you understand how it works.
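A rough sketch of the adaptive-sampling idea described above (the function and parameter names here are mine, not the branch’s actual code): keep adding samples to a tile until a cheap noise estimate drops below a threshold, instead of always burning the full sample budget.

```python
import numpy as np

rng = np.random.default_rng(2)

def render_tile_adaptive(true_value, noise, max_samples=256,
                         error_threshold=0.01, batch=16):
    """Toy adaptive loop: stop early once the tile looks converged."""
    samples = []
    while len(samples) < max_samples:
        # Simulated batch of path-traced samples (toy Gaussian noise model).
        samples.extend(rng.normal(true_value, noise, size=batch))
        # Standard error of the mean as a cheap noise estimate.
        err = np.std(samples) / np.sqrt(len(samples))
        if err < error_threshold:
            break   # tile converged early, before max_samples was reached
    return np.mean(samples), len(samples)

value, used = render_tile_adaptive(0.5, noise=0.1)
print(f"converged at {used} samples, value = {value:.3f}")
```

Smooth tiles converge after a few batches while noisy tiles run up to the cap, which is where the overall render-time saving comes from.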

what kesonmis wrote

Not quite. Adding more samples won’t handle overbright pixels that will alias horribly. Downsampling will.

Downsampling won’t handle very bright samples any better unless you clip pixel values before or during downsampling. Take values 0.5, 0.5, 0.5 and 2.5 for one channel in a 2x2 pixel area, for example. Rendering it at half resolution averages the samples to 1.0. Downsampling either does the same or smears the values across several neighbouring pixels, blurring the image more than it should be. A sharp image is not the same as an aliased image, and vice versa. Aliasing is a sampling artefact that is more likely to happen during scaling with exotic filters than during rendering in Cycles.
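The 2x2 example above in code, plus the clip-before-downsampling variant it mentions (the clamp ceiling of 1.0 is an arbitrary choice for illustration):

```python
import numpy as np

# Three pixels at 0.5 and one firefly at 2.5, one channel of a 2x2 block.
block = np.array([0.5, 0.5, 0.5, 2.5])

plain = block.mean()                      # firefly still brightens the pixel
clamped = np.clip(block, 0, 1.0).mean()   # firefly clipped before averaging

print(plain, clamped)
```

Plain averaging gives 1.0 (the firefly survives, just spread out); clamping first gives 0.625, which is why clipping has to happen before or during the downscale to actually kill fireflies.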

I agree with you, even if others say otherwise.
My render is still noisy at 1080p when I increase samples from 128 to 250; the noise doesn’t go away, and big bright specks appear.

However, when I keep samples at 128 and render a 1440p image, the bright fireflies and noise are smaller, and when I downscale it to 1080p it looks way better than the original 1080p with double the samples.

So I agree with you: it works.