Depth of field artifacts

I’ve been trying to do some renders with some DOF. I’m using 2.71 official. I’m setting up an empty where I want the focus to be and then adjusting the radius. It’s difficult because these are huge scenes, and I can only guess what the result is really going to look like because of the render time involved. But each time I make a render, I get strange artifacts appearing in various places. Most times I can do some spot healing in Photoshop to get rid of them.

Please click on the image to see the full resolution. This is a bad example because most people are going to look at it and say the gold isn’t converging enough, but I have had these artifacts in many places where there is no gold. Also, when using no DOF I don’t get the strangeness that is occurring with the gold in this image.

A good example in this image is what is circled in the upper left…

Here are my questions…

  1. Has anyone else experienced DOF artifact problems with Blender 2.71 official?

  2. I really have no idea when, or if, I should be changing the Blades and Rotation settings. Any guidance there?



Any thoughts will be appreciated. I’m going to do a different render and I’ll post that if there are any artifacts.

I think that I would approach this, for a variety of reasons, just by rendering the thing in layers and applying an ordinary “blur” to whatever needs it … as well as, probably, changes to hue-and-saturation to correspond to distance, atmospheric haze (and pollution), and so forth. This would also reduce your total render-time. Blurring of an already-rendered component that was rendered tack-sharp is easy, and in general you are bypassing the entire issue of having to re-render everything. (Especially, to “re-render everything without being certain of what the result will be.”)

A shot like this one, on a bright summer day, would necessarily be taken with a large f-stop (a high f-number), which means lots of depth of field. Both the foreground and the sign would be sharp. And, really, you need artistic control (and … cheap control!) over what’s “inny” and what’s “outy” in this shot. Thus: render in layers, then do a second compositing pass in which blurring (and color adjustment) is artistically applied. Let the renderer produce, one time, the accurate crisp result. Then do the rest of it, so to speak, “in post.”

Also, if I may suggest, “get that camera up off the floor.” You need, among other things, a separation-line between the foreground and the background. The photographer, undoubtedly, would have been standing up. The camera, after all, only has one eye open. As things stand now (and absent such things as a change of hue-and-sat), you’ve got no distinction between that railing and the buildings that are, ostensibly, a quarter-mile beyond it.

Fundamentally, the entire foreground would be sharp. It’s unbelievable for the nearest object to the lens to be blurry. In the sense that, probably, “no one would actually take a shot like that.”

“Blurring of an already-rendered component that was rendered tack-sharp is easy, and in general you are bypassing the entire issue of having to re-render everything.”

You may be right on this, sundial… I have plenty of tack-sharp renders of the casino, and I could probably get very quick DOF doing it in After Effects or Photoshop… but I screwed up when I did the original renders and never rendered a separate Z pass that I could use in other programs. If I still have all the cameras set up, is there any way I can render JUST a Z-depth map without having to render the whole combined scene again?

One thing I wanted to try was focusing on a key element… like in this example the firepit (that’s where I placed my empty) and doing some renders with DOF… but it’s failing because of these artifacts. Do you have any idea what could be causing them? Again, this is a bad example because of all the gold, but they are really occurring in some odd places… like in the middle of trees in the background, or that one circled in the upper left.

Hi,
Did you try increasing your samples? I think it is just noise.

See a test scene at 10, 50 and 100 samples.
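If you want to script that kind of comparison, here is a minimal sketch for Blender’s Python console (the output path is just an example):

```python
import bpy

scene = bpy.context.scene

# Render the same frame at increasing sample counts to compare noise.
for s in (10, 50, 100):
    scene.cycles.samples = s
    scene.render.filepath = "/tmp/dof_test_%03d" % s  # example path
    bpy.ops.render.render(write_still=True)
```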




What I normally do is have an emission material with the Ray Length output connected to the color, and use this material as the Material Override on the RenderLayer.
And, of course, set all bounces to 0, as we don’t need them.
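In script form, that setup looks roughly like this: a minimal sketch assuming Cycles in 2.7x (node and property names per that API; the material name is just an example):

```python
import bpy

# Build an emission material that encodes depth: Ray Length drives the color.
mat = bpy.data.materials.new("ZDepth_Override")  # example name
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

light_path = nodes.new("ShaderNodeLightPath")    # provides the Ray Length output
emission = nodes.new("ShaderNodeEmission")
output = nodes.new("ShaderNodeOutputMaterial")

# The Ray Length of a camera ray is the camera-to-surface distance,
# so every pixel ends up encoding its own depth.
links.new(light_path.outputs["Ray Length"], emission.inputs["Color"])
links.new(emission.outputs["Emission"], output.inputs["Surface"])

# Override every material on the active render layer with this one.
scene = bpy.context.scene
scene.render.layers.active.material_override = mat

# A pure-emission scene needs no light bounces.
scene.cycles.max_bounces = 0
```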

@cuise-T That render is actually 2500 samples. Tomorrow I’m going to do a small border render of just one of those areas where the gold artifacts are occurring, with no depth of field, and see if I get similar noise. If you look at the short film in my signature called “Final Inspection”, you will see numerous daytime renders. I did not use DOF at all in those renders, and I never had the strange artifacts or excessive noise. But I guess I need to do a small test again tomorrow and make sure something didn’t change.

@Secrop Hmmm, I must say I’m a bit confused about what you’re suggesting. This is a very big scene with multiple objects, all with different materials / textures… Is there any way to clarify what you’re proposing, with a node setup or something? I’ve rendered all of these different camera angles of this hotel, but none of them were done with DOF; each render took about 3 1/2–5 hrs at 3000 passes. So what I wanted to do was go back and somehow render out a Z pass only, without having to re-render the entire combined render, which takes a very long time.

Here’s a fast setup:
create a new render layer, and change it like this.



So I’ve tested this just to be sure: I rendered one scene that originally took me 36 h, with these settings changed, and it rendered in just 10 minutes.
I used 32 samples because that gives nice antialiasing. If your scene has objects that are very thin (say 1/8 of a pixel, like the small branches of trees), you can increase this a bit. You can also check the Z pass, but it will be the same as the Combined render.
Because we override all materials with the emission material, there’s no need for bounces, so set the max bounces to 0.

Edited: Rendering just the Z pass will render the Combined anyway, so unless you override the RenderLayer material, it will take as long as a normal render.
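Scripted, the layer setup might look like this (a sketch; the layer name is an example, and the override material is the Ray Length emission material from the earlier post):

```python
import bpy

scene = bpy.context.scene

# A render layer dedicated to the depth render (name is an example).
layer = scene.render.layers.new("ZDepth_Only")
layer.material_override = bpy.data.materials["ZDepth_Override"]

# 32 samples gives clean antialiasing on a flat emission scene;
# raise it a little if you have sub-pixel geometry like thin branches.
scene.cycles.samples = 32

# Pure emission needs no light bounces.
scene.cycles.max_bounces = 0
```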

What you’re seeing are simply small bright reflections (such as the sun) being blurred due to depth of field.

OK… just to rule out that something had occurred in my file prior to doing these DOF renders, I turned off all DOF and re-rendered at the same number of samples (2500). For time reasons I just did a border render of the real troublesome areas.

I think it is pretty apparent that with no DOF, the gold converges pretty noise-free. So I can only conclude that the DOF is causing these strange artifacts.

Please click on the image to view the full resolution.
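(For anyone repeating this kind of test, a border render can also be set up from Python; a sketch, with placeholder crop coordinates:)

```python
import bpy

scene = bpy.context.scene

# Render only a crop around the artifact area (coordinates are examples,
# given as fractions of the frame).
scene.render.use_border = True
scene.render.use_crop_to_border = True
scene.render.border_min_x, scene.render.border_max_x = 0.10, 0.35
scene.render.border_min_y, scene.render.border_max_y = 0.60, 0.85

scene.cycles.samples = 2500  # match the original render's samples
```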


Yes, you can generate a Z-depth map only… use BI (Blender Internal) for that, why not. It should take just a few seconds to generate if you turn everything else off. As long as the camera positioning is exact and equivalent, it doesn’t matter which engine produces the data stream.

You can also, at the same time, put the various things into layers and/or give them object-ID numbers and use this to further differentiate the objects within the shot.

Put it all into a MultiLayer OpenEXR file, which will neatly capture all that information at once.
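A sketch of what enabling those passes and the output format might look like in Python (pass and format names per the 2.7x API):

```python
import bpy

scene = bpy.context.scene
layer = scene.render.layers.active

# Bake the extra data passes into the render result.
layer.use_pass_z = True
layer.use_pass_object_index = True

# Write everything to one MultiLayer OpenEXR file per frame.
scene.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
scene.render.image_settings.color_depth = '32'
```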

Then, I definitely would use compositing to finish this job. Let the renders be tack-sharp, and if you already have such renders (in a high-resolution format such as … uhh, MultiLayer OpenEXR …), then you probably don’t need to re-render anything. (In other words, “you can make the shot work, somehow, good-enuf, given the data you now have.”) Use the map(s) to isolate each component, then apply different amounts of blur and desaturation to each … sparingly at first.

Since this step will be using only data taken from files, and will be doing only 2D-based operations (with Z as one of the available inputs), the results should be obtainable in real time. Play it through, nudge a “knob,” play it through again.
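One possible compositor setup for that, sketched in Python; the file path is hypothetical, the Defocus node is just one way to drive blur from Z, and the socket names assume an EXR saved from Blender itself:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Load a previously saved MultiLayer EXR (path is an example).
img = bpy.data.images.load("/renders/hotel_cam01_0001.exr")
src = tree.nodes.new("CompositorNodeImage")
src.image = img

# Defocus varies blur with distance when fed the Z pass.
defocus = tree.nodes.new("CompositorNodeDefocus")
defocus.use_zbuffer = True
defocus.f_stop = 2.8  # lower f-stop = stronger blur; tweak to taste

out = tree.nodes.new("CompositorNodeComposite")

tree.links.new(src.outputs["Image"], defocus.inputs["Image"])
tree.links.new(src.outputs["Z"], defocus.inputs["Z"])
tree.links.new(defocus.outputs["Image"], out.inputs["Image"])
```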

One of the “li’l trix” of this strange trade is that, although you can ask a render-engine to produce everything for you all-at-once, wrapped up in a little bow, it’s almost never time-efficient to do things that way. Instead, generate all the parts separately, spread them all out on the shop-floor, and use post-processing (compositing) to stitch everything together. “Generating all the parts” is the only time-consuming step, so generate each one separately, one frame at a time, all to ML-OpenEXR files, carefully cataloged and immediately backed-up. (When you find that a particular piece needs to be re-done, keep the original outputs and the blends that produced them, and then make “the next generation” of that component.) Because you quite literally don’t have time for “cross your fingers and wait X hours to see if it works this time.” You don’t have: time.