Any advantage to rendering at higher resolution and then scaling image down?

If someone has a scene that has a fair amount of fine detail, is it best to render at the size you need, or is there any advantage to rendering at a higher resolution and then scaling the image down?

For instance, if you know you are going to need a 400x200 final image, is it best to render at that size, or is there any technical advantage to rendering at, say, 800x400 and then scaling the rendered image down?

My inclination is to say that it seems best to let the renderer produce an image at the necessary size. But I am wondering whether a larger render might capture finer detail – detail which may even remain noticeable to some extent once the image is scaled down.

Many people might say you could do both and see which looks best. But I am trying to figure out if there is a well-understood technical reason why small details from a larger render could translate through to a scaled-down image.

That’s essentially the same thing as supersampled antialiasing. I would argue that the renderer’s own AA is better, though: depending on the reduction algorithm, you could lose a lot of detail during the downscale that a few more AA samples would have captured anyway.

Basically, unless you’re performing an operation without native, high-quality AA (like Cycles baking), there’s no real reason to waste the time rendering at a higher resolution.
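If you do want to try it for one of those cases, the simplest setup is to leave the output size alone and bump the percentage. A minimal sketch using Blender’s Python API (the resolution properties are real bpy settings; the 400x200 target is just the example from the question):

```python
import bpy

scene = bpy.context.scene

# Target output size from the question: 400x200.
scene.render.resolution_x = 400
scene.render.resolution_y = 200

# 200% renders 800x400, i.e. four times the pixels. Scale the result
# back down to 400x200 afterwards to get the supersampled image.
scene.render.resolution_percentage = 200
```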

If you are using Cycles, one of the annoying things you may notice in your renders is ‘fireflies’: little pixels of light that appear at random near light sources or strong reflections. One technique to rid an image of fireflies is to render large and then scale the image down. The ‘firefly’, which is a tiny detail, gets lost in the scaling process. In addition, I believe some scaling algorithms will produce sharper diagonal straight edges when scaling down from a larger render.
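For the scale-down step itself, any decent image tool works; here is a hedged sketch with Pillow (the file names are made up, and on newer Pillow versions the constant lives at Image.Resampling.LANCZOS):

```python
from PIL import Image

# Hypothetical 2x render from Cycles.
big = Image.open("render_800x400.png")

# LANCZOS averages each output pixel from a wide neighborhood, so a
# one-pixel firefly contributes only a fraction of its energy to the
# final image. A NEAREST reduction would keep whole fireflies intact.
small = big.resize((400, 200), Image.LANCZOS)
small.save("final_400x200.png")
```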

> the same thing as supersampled antialiasing
Oh. OK. Never mind…

Welcome to BlenderArtists :smiley:

There is an advantage: Cycles’ fancy filter importance sampling (or whatever it’s called) leaves a lot of high-frequency noise in shadows.
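If that shadow noise is what bothers you, one thing to try before doubling the resolution is widening Cycles’ pixel filter; a sketch, assuming the pixel_filter_type / filter_width properties exposed by recent Blender versions (check the UI tooltips if your version names them differently):

```python
import bpy

cycles = bpy.context.scene.cycles

# A Gaussian pixel filter with a wider radius trades a little sharpness
# for smoother, less "salty" shadows. Property names are assumed from
# recent Blender releases, not confirmed by this thread.
cycles.pixel_filter_type = 'GAUSSIAN'
cycles.filter_width = 2.0
```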

Gamers claim that supersampling is superior to traditional AA methods because you have more detail in the scene, especially for distant objects. I didn’t find a comparison right away, but you can look through this image search: https://www.google.de/search?q=supersampling+vs+multisampling&tbm=isch

If I had like unlimited render power I’d definitely consider supersampling. But for now it seems like just another thing the average user can’t do because of speed limits.

I suggest that you focus your attention only on the final bitmap … the final, deliverable “frame” that the user will actually see. There’s no point in laboriously generating “twice as many pixels horizontal, and twice as many pixels vertical, equals four times as many pixels and four times as much render time,” just to throw away three-fourths of “all that data” by a process of averaging.

Sometimes you can actually do the opposite: render (say) distant objects at a lower resolution, because the value of adjacent pixels doesn’t actually vary that much. Sample it upwards, then add a tiny bit of random noise and blur. No one’s the wiser, especially if there’s any sort of motion going on. Only the objects that are closest to the camera, and most pivotal to the purpose of the shot, actually demand “über resolution,” and every second/minute/hour that you can shave off is “a penny earned.”

“A tiny, tiny amount of blur” can sometimes be a very wonderful thing.
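In code form, the “render low, sample up, noise, blur” trick might look like this; a sketch with Pillow and NumPy, where the file names and the noise/blur amounts are made up for illustration:

```python
import numpy as np
from PIL import Image, ImageFilter

# Hypothetical half-resolution render of the distant objects.
low = Image.open("distant_layer_200x100.png").convert("RGB")

# Sample it upwards to the final frame size.
up = low.resize((400, 200), Image.BILINEAR)

# Add a tiny bit of random noise so the interpolation doesn't look flat.
arr = np.asarray(up).astype(np.float32)
arr += np.random.normal(0.0, 2.0, arr.shape)
noisy = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

# ... and a tiny, tiny amount of blur.
noisy.filter(ImageFilter.GaussianBlur(radius=0.5)).save("distant_layer_up.png")
```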