Can someone explain the defocus setup in Cycles?

hey,
Can someone explain to me how defocus in Cycles compositing is supposed to work?

  • My idea was to decide on depth of field in compositing, so I set the camera to an f-stop of 128 (with the focus set on my main object) and then rendered out a multilayer EXR.
    In compositing I then plugged the "Depth" output of the render layer into the "Z" input of the Defocus node and my main image into "Image" - but nothing happens (see the node sketch after this list).
    I think the problem is the depth map itself, since it looks almost empty when connected directly to a Viewer node (white with some single dark pixels - these are where glass objects are in the background)… What did I do wrong here?

  • Also, how do I decide on my focal point in the Defocus node (in compositing)? Or is the setting of the camera somehow rendered into the Z map? Shouldn't that just be depth from front to back?

  • And last but not least, how do I render ONLY the depth map again? Rendering the whole thing took about 16 hours and I'm not too keen on doing that again. When I deactivate all passes but "Z", Blender still starts rendering the whole image anew…
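
In case it helps, here is roughly the node wiring I am describing, written out as a Python sketch (socket names like "Depth" vs. "Z" seem to differ between Blender versions, and I left the Defocus node settings at their defaults):

```python
import bpy

# Rough recreation of my compositing setup: render layer -> Defocus -> Composite.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rlayers = tree.nodes.new(type="CompositorNodeRLayers")      # the multilayer render result
defocus = tree.nodes.new(type="CompositorNodeDefocus")      # settings left at defaults here
composite = tree.nodes.new(type="CompositorNodeComposite")

tree.links.new(rlayers.outputs["Image"], defocus.inputs["Image"])
tree.links.new(rlayers.outputs["Depth"], defocus.inputs["Z"])
tree.links.new(defocus.outputs["Image"], composite.inputs["Image"])
```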

Thanks a lot for any help!
karl

P.S.: Not directly the same, but somewhat related: why do I have "Exposure" and these film "Looks" in the scene tab and not as a compositing node? Plus there is a second "Exposure" setting (which does not seem to work, though) in the render tab under "Film"… What's the difference? Or, more generally: what UI concept is behind all of that that I did not understand?

One question: why are you trying to do the depth of field in compositing? Cycles is designed in part to do physically accurate camera simulation, so things like DOF can be done "in camera", so to speak. This is what most of the options in the camera tab are for. There are reasons you might want to do it in the compositor, but speed isn't generally one of them; in my experience Cycles handles DOF scenes without loss of speed.

The depth map is most likely not empty; it looks white because the values are generally larger than your monitor can display (the Z pass stores distances in scene units rather than 0-1 values). Try this: connect the Depth output to a Math node, set it to Multiply with a factor of 0.2 or 0.1, and then connect that output to the Viewer's image input.
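
If you would rather script it than click nodes together, this is roughly the same idea in Python (I'm assuming the default "Render Layers" node exists and that its output socket is called "Depth" in your version; the 0.1 factor is just a starting point):

```python
import bpy

# Sketch: scale the raw Z pass down so it becomes visible in the Viewer / backdrop.
tree = bpy.context.scene.node_tree
rlayers = tree.nodes["Render Layers"]        # assumes the default render layer node name

scale = tree.nodes.new(type="CompositorNodeMath")
scale.operation = 'MULTIPLY'
scale.inputs[1].default_value = 0.1          # try 0.2, 0.1, 0.01 ... depending on scene scale

viewer = tree.nodes.new(type="CompositorNodeViewer")

tree.links.new(rlayers.outputs["Depth"], scale.inputs[0])
tree.links.new(scale.outputs["Value"], viewer.inputs["Image"])
```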

In the Defocus node, make sure you have "Use Z-Buffer" checked and play around with f-stop values in the 5 to 10 range; 128 is a really high f-stop.
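
For reference, these should be the matching properties on the Defocus node if you're scripting it - at least that's what they're called in the API version I've used, so treat the names and values as a best guess:

```python
import bpy

# Assuming the Defocus node from the compositing tree still has its default name:
defocus = bpy.context.scene.node_tree.nodes["Defocus"]
defocus.use_zbuffer = True    # "Use Z-Buffer": take depth from the Z input socket
defocus.f_stop = 8.0          # somewhere in the 5-10 range; 128 gives almost no blur
defocus.blur_max = 64.0       # caps the blur radius in pixels if edges get out of hand
```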

Articles I've looked up seem to suggest that the camera focus settings (on the camera tab) do affect the Defocus node filter. I don't know why this is.

I'm sorry, I don't know how to simply re-render the Z map, because I don't usually use the compositor to do DOF. Usually I just set the viewport display type to "Rendered" and then adjust camera settings to watch how the DOF effect changes before rendering it out.

hey,
thanks for your reply!

  • The idea behind adding DOF in compositing is flexibility… I mean, is the DOF applied directly during rendering any different quality-wise from the one applied in post?
  • Regarding the depth map, you're right: when I do as you say (or put a "Normalize" node between Depth and the Viewer) it looks perfectly fine - no need to render again… : )
  • Last but not least, I think I also figured out my main problem: I used a different scene for compositing, and the Defocus node in compositing pulled the focus from the wrong layer - it seems that it really looks at the setup of your 3D camera (even if it is in another scene) to determine where the focal point is… (see the sketch below)
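
For anyone finding this later, this is roughly what fixed it for me, written as a Python sketch (the scene and object names are made up, and the property names may differ between Blender versions):

```python
import bpy

# Point the Defocus node at the scene whose camera should define the focal point,
# and set that camera's focus. Scene/object names here are placeholders.
comp_scene = bpy.data.scenes["CompositingScene"]
render_scene = bpy.data.scenes["RenderScene"]

defocus = comp_scene.node_tree.nodes["Defocus"]
defocus.scene = render_scene                          # "Scene" dropdown on the Defocus node

cam = render_scene.camera.data
cam.dof_object = bpy.data.objects.get("MainObject")   # focus on an object...
cam.dof_distance = 5.0                                # ...or a fixed distance (ignored when dof_object is set)
```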

Anyway, thanks a lot for your help!
k

If you don't want to re-render a whole Cycles scene, why not duplicate it and revert the duplicate back to BI, using just the Z depth pass? Use that in your compositor scene. You could also retarget the focus of the camera in that scene, as long as you don't move it around.
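
The pass/engine part of that looks roughly like this from Python - the scene name is whatever your duplicate ends up being called, and the property names are from the old render-layer API, so double-check them in your version:

```python
import bpy

# In the duplicated scene: switch to Blender Internal and keep only the Z pass,
# so re-rendering just the depth is cheap.
depth_scene = bpy.data.scenes["Scene.001"]       # placeholder name for the duplicate
depth_scene.render.engine = 'BLENDER_RENDER'     # BI gives you a Z pass without path tracing

rl = depth_scene.render.layers[0]
rl.use_pass_z = True
# Switch off the other passes where possible (guarded, since pass names vary by version).
for prop in ("use_pass_mist", "use_pass_normal", "use_pass_vector",
             "use_pass_shadow", "use_pass_ambient_occlusion"):
    if hasattr(rl, prop):
        setattr(rl, prop, False)
```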

Ah! Thanks for the clarification regarding DOF in render vs. in post! Since my scene does not contain many shiny or transparent objects I did not see a difference, but I'll keep that in mind for my next project!
k

But it's so much faster when you don't have to spend samples on blur. However, I have had to resort to splitting render layers for the foreground and background etc., as they don't separate well.
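
Roughly what I mean by splitting, as a sketch - this assumes the foreground and background objects already live on different scene layers (the old 20-slot layer system), and the API calls are from memory, so take them with a grain of salt:

```python
import bpy

# One render layer per depth region, so each can be defocused and composited separately.
scene = bpy.context.scene

fg = scene.render.layers.new("Foreground")
fg.layers = [i == 0 for i in range(20)]   # only scene layer 1

bg = scene.render.layers.new("Background")
bg.layers = [i == 1 for i in range(20)]   # only scene layer 2
```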