Cycles - Depth Of Field NOT on background image?

I have a single object and I’ve set up a background image with nodes (Texture Coordinate “Window” into a Mapping node, then into an Image Texture, into Background, into the World Output), and now I need to apply DoF, but only to the object and not to the background image! Does anyone know how to do this? Can I take the background image out of the DoF pass?

Basically, I just want to render a 3D object so it matches the depth of field that’s already in the background image. This is my setup: http://oi59.tinypic.com/2j1wg9j.jpg
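In script form, the world node chain described above would look roughly like this (just a sketch: “Background” and “World Output” are the default node names, and the image path is a placeholder):

```python
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nt = world.node_tree

tex_coord = nt.nodes.new("ShaderNodeTexCoord")
mapping = nt.nodes.new("ShaderNodeMapping")
img_tex = nt.nodes.new("ShaderNodeTexImage")
background = nt.nodes["Background"]   # default world nodes (assumed names)
output = nt.nodes["World Output"]

img_tex.image = bpy.data.images.load("/path/to/background.jpg")  # placeholder path

# Window coordinates -> Mapping -> Image Texture -> Background -> World Output
nt.links.new(tex_coord.outputs["Window"], mapping.inputs["Vector"])
nt.links.new(mapping.outputs["Vector"], img_tex.inputs["Vector"])
nt.links.new(img_tex.outputs["Color"], background.inputs["Color"])
nt.links.new(background.outputs["Background"], output.inputs["Surface"])
```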

Thanks for any help!

You could render them separately and then combine them in the compositor: http://www.blenderartists.org/forum/showthread.php?305078-Render-elements-of-a-scene-separately-and-then-combine-them&p=2452953&viewfull=1#post2452953
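One hedged sketch of that combine step: render the object with a transparent film and lay it over the background plate with an Alpha Over node (the film-transparency property and the plate path are assumptions that depend on your version and files):

```python
import bpy

scene = bpy.context.scene

# Render the object on a transparent background so it can sit over the plate.
# (Property location differs by version: scene.render.film_transparent in 2.8+,
# scene.cycles.film_transparent in older releases.)
scene.render.film_transparent = True

scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new("CompositorNodeRLayers")    # the rendered object
plate = tree.nodes.new("CompositorNodeImage")   # the background photo
plate.image = bpy.data.images.load("/path/to/backplate.jpg")  # placeholder path

over = tree.nodes.new("CompositorNodeAlphaOver")
comp = tree.nodes.new("CompositorNodeComposite")

# Alpha Over: background into the first Image socket, foreground into the second.
tree.links.new(plate.outputs["Image"], over.inputs[1])
tree.links.new(rl.outputs["Image"], over.inputs[2])
tree.links.new(over.outputs["Image"], comp.inputs["Image"])
```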

I would definitely use compositing here … and I would explore what can be done using only the (2D, but depth-aware) transformations that the compositor can do.

For example, if you have your render-output in a “MultiLayer OpenEXR” file, then you can include in that file a “Z-depth pass” which tells you, pixel-by-pixel, how far away from the camera the object that produced that point is. You can therefore apply that data (using a curves-node to give you precise control of it) to “blur” points of the image, thereby creating the acceptable illusion of DOF, but without the cost.

(And if you don’t have such a pass, just run the model through a greatly-simplified Blender-Internal render, just to get the Z-depth map.)
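A minimal script sketch of that idea, using a Defocus node driven by the Z pass rather than the curves-plus-blur chain described above (pass and socket names vary slightly between Blender versions, so treat the names here as assumptions):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Make sure the Z (Depth) pass is enabled in the render layer's passes settings.
rl = tree.nodes.new("CompositorNodeRLayers")
defocus = tree.nodes.new("CompositorNodeDefocus")
comp = tree.nodes.new("CompositorNodeComposite")

defocus.use_zbuffer = True   # drive the blur amount from the Z pass
defocus.f_stop = 32.0        # lower = stronger blur; tune by eye
defocus.z_scale = 1.0

# The depth output is called "Depth" in newer versions and "Z" in older ones.
z_out = rl.outputs.get("Depth") or rl.outputs.get("Z")

tree.links.new(rl.outputs["Image"], defocus.inputs["Image"])
tree.links.new(z_out, defocus.inputs["Z"])
tree.links.new(defocus.outputs["Image"], comp.inputs["Image"])
```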

“As a blanket statement,” I’d suggest that you should look upon the renderer (any renderer …) as a producer of raw-materials for subsequent compositing (“digital darkroom work”). As Ansel Adams, the famous photographer, put it: “A picture is captured in the camera, but it is made in the darkroom.” The same sort of reasoning applies here. The renderer gives you an array of pixels, and corresponding arrays of other information. It doesn’t give you “the finished scene, all in one swell foop.” You don’t ask it to. You use whatever renderers will give you the data that you want, as quickly as you can get it. Then, you take it all to the darkroom.

You can “remove” the background from the DoF (Z) channel with some Math nodes.
Use Greater Than (or Less Than) on the DoF channel, with 10000000000 as the second value.
Then multiply the DoF channel (with a Math node, or a Color/Mix node) by the result of the Greater Than (or Less Than).

(I write Greater Than/Less Than because I always somehow mix up which is which in Blender, but one of them will give you the right result. Just try both settings.)
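Here is a rough script version of that trick, assuming a Z/Depth pass coming out of a Render Layers node (the socket names and the huge threshold are assumptions; swap LESS_THAN for GREATER_THAN if the mask comes out inverted):

```python
import bpy

tree = bpy.context.scene.node_tree  # assumes compositor nodes are already enabled

rl = tree.nodes.new("CompositorNodeRLayers")
z_out = rl.outputs.get("Depth") or rl.outputs.get("Z")  # name depends on version

# Less Than outputs 1.0 where Z is below the huge threshold (the object)
# and 0.0 where Z is effectively at infinity (the background).
mask = tree.nodes.new("CompositorNodeMath")
mask.operation = 'LESS_THAN'
mask.inputs[1].default_value = 10000000000.0

# Multiply the Z channel by that mask so the background's Z becomes 0.
mult = tree.nodes.new("CompositorNodeMath")
mult.operation = 'MULTIPLY'

tree.links.new(z_out, mask.inputs[0])
tree.links.new(z_out, mult.inputs[0])
tree.links.new(mask.outputs[0], mult.inputs[1])
# mult.outputs["Value"] is now the DoF/Z channel with the background removed.
```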