Render elements of a scene separately and then combine them

I’m fairly new to Blender; I’ve only been working with it for about a month. I’ve been reading and watching hours and hours’ worth of tutorials, but there are still some very big concepts I feel I haven’t quite grasped.

I’m working on a fairly basic, slow computer and have to render on my CPU. I’ve found that rendering both the skin nodes and the hair particle system I’m using for my characters (for a digital art project) in Cycles is more than my computer can handle in any practical timeframe. In fact, I’ve never managed to obtain a full render of both together despite hours of rendering.

However, when I move the hair particle system to its own layer, I can render each layer independently quite quickly (well under half the time the two take together; a few minutes each). Is there a way to render each element separately and then combine them within Blender? I’m almost positive there is, but I cannot find the right information to help me do it. Is it somehow part of compositing?

I’ve found that I can render each layer, save the renders as images on a transparent background, and assemble them in Photoshop (something I am FAR MORE acquainted with) more quickly than it takes to render an image with both elements in it. This isn’t 100% effective, however, as it doesn’t account for the fact that the hair falls both in front of and behind the model’s head (i.e. I would really need three layers to work with).

Yeah, you could use render layers and compositing nodes. Not sure how that will help with the memory issue, though.


Add a render layer for each part, and set each one’s properties by selecting the corresponding render layer first.

Layer -> Scene: Render layers work with scene layers. Normally the scene layers you want rendered have to be active before you render, but if you specify them in the Scene row of the Layers section, you don’t have to remember to enable them by hand.

Layer -> Layer: The main thing to set is which scene layers the selected render layer contains, and that’s done in this section. In the example, the head is on scene layer 1 and the hair is on scene layer 2: render layer 1 (named “head”) contains scene layer 1, and render layer 2 (named “hair”) contains scene layer 2.

Layer -> Mask Layer: Because the hair reaches behind the back of the head, I set the hair render layer to also have a mask layer: the head layer masks the hair layer, so strands hidden by the head are cut out of the hair render.
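For anyone who prefers to see it as a script, here’s a minimal Python sketch of that setup (assuming the 2.7x bpy API; the layer indices and the names “head” and “hair” just follow the example above, and layers_zmask is, as far as I know, the RNA name behind the Mask Layer row):

```python
import bpy

scene = bpy.context.scene

# Render layer for the head: include only scene layer 1 (index 0).
head_rl = scene.render.layers.new("head")
head_rl.layers = [i == 0 for i in range(20)]

# Render layer for the hair: include only scene layer 2 (index 1).
hair_rl = scene.render.layers.new("hair")
hair_rl.layers = [i == 1 for i in range(20)]

# Mask Layer: let the head geometry (scene layer 1) mask the hair
# render layer, cutting out strands that fall behind the head.
hair_rl.layers_zmask = [i == 0 for i in range(20)]
```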

All that is left is to combine the two in the compositor, which is straightforward enough: duplicate the Render Layers node (Shift+D) and set the copy to show the hair layer, add an Alpha Over node (Shift+A -> Color -> Alpha Over), and connect the head layer to the top socket and the hair layer to the bottom socket.
(The mnemonic for node input socket order is that a factor value of 0 means the top socket and 1 means the bottom one.)
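The same node setup scripted, for reference (again a sketch against the 2.7x API; it assumes the two render layers are named “head” and “hair” as above):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

head = tree.nodes.new('CompositorNodeRLayers')
head.layer = 'head'
hair = tree.nodes.new('CompositorNodeRLayers')
hair.layer = 'hair'

alpha_over = tree.nodes.new('CompositorNodeAlphaOver')
composite = tree.nodes.new('CompositorNodeComposite')

# Input 0 is the factor; input 1 is the top socket (factor 0) and
# input 2 the bottom socket (factor 1), as per the mnemonic above.
tree.links.new(head.outputs['Image'], alpha_over.inputs[1])
tree.links.new(hair.outputs['Image'], alpha_over.inputs[2])
tree.links.new(alpha_over.outputs['Image'], composite.inputs['Image'])
```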

cycles_hair_composite.blend (712 KB)

Thanks. I wasn’t sure this would necessarily cut down on time, but when I set things up to be composited this way it did seem to reduce the render time and was a bit easier on my machine.

Also, the mask layer was something I definitely wanted to learn about but was having trouble with. As I mentioned, I plan to work with my renders in Photoshop layers as well, so this is really helpful to me. Now I know how to render each layer separately and export it as a .png with an alpha background, but is there a way to export all the layers at once to separate files? Again, with Photoshop layers in mind.

Yes. You could use a File Output node in the compositor; it saves the files automatically when you render.


I included all passes for the first render layer to show that you can save any of them, from any render layer. The catch is that some of those passes contain information that can’t be stored in a conventional image format; you need a format like EXR for those.
That’s why it’s easier to render normally and save in OpenEXR MultiLayer format, which stores all layers and passes in one file.
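Here’s a sketch of the File Output setup in Python (2.7x API; the path and slot names are only examples). One PNG per render layer suits the Photoshop workflow; switching file_format to 'OPEN_EXR_MULTILAYER' would instead pack everything connected to the node into one EXR:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

head = tree.nodes.new('CompositorNodeRLayers')
head.layer = 'head'              # render layer names assumed from above
hair = tree.nodes.new('CompositorNodeRLayers')
hair.layer = 'hair'

out = tree.nodes.new('CompositorNodeOutputFile')
out.base_path = '//renders/'     # folder next to the .blend file
out.format.file_format = 'PNG'
out.format.color_mode = 'RGBA'   # keep alpha for stacking in Photoshop

# Each file slot becomes an input socket and writes its own numbered
# file every time you render.
out.file_slots[0].path = 'head'  # rename the default slot
out.file_slots.new('hair')

tree.links.new(head.outputs['Image'], out.inputs[0])
tree.links.new(hair.outputs['Image'], out.inputs[1])
```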

Ok, so I’m returning to this after a few other Blender tangents and still have a few questions.

Does the Render Layers input consist of layers that have already been rendered? Or are they just previews of what will have to be rendered again once everything is re-rendered? If it’s the former, can the layers be rendered differently from one another? Say, for the final image, I want to render the hair at something like 1000 samples and the ball/head at only 300?

Also, let’s say I have the final image I want and then decide to completely alter one object on a dedicated layer of its own. Can I just edit that layer in Blender and re-render it, leaving the other rendered layers alone in the compositor? Or does everything on all layers have to be re-rendered?

And, while I know this will sound stupid, what exactly is the composite image? Is it the final image? Does it all have to be rendered out together, or can I just save the image I see in the background of the compositing screen?

First of all, the word “layer” is one of the most-abused words in the Blender vernacular … for purely-historical reasons that we can’t change now. It could mean “the twenty buttons.” It could mean RenderLayers. And so on. It’s “kinda-sorta” like Photoshop’s concept for that word … and, heh, these days Photoshop’s use of the word is getting more-and-more like Blender’s … but, “not exactly.” Think of them as apples and, well, plums. Usefully similar, tasty, but not the same. :slight_smile:

Let’s talk specifically about “RenderLayers.” Basically, this concept acts like a named filter. You specify certain criteria to choose the objects that will be included in the RenderLayer, and you specify various options about exactly what you want rendered or extracted concerning them (shadow, Z, diffuse, blah-de-blah), and you give the whole thing a name. And now, having done that, you can do two very useful things with them:

  • You can reference them as a node-type in a rendering “noodle,” where they serve as a source of input data.
  • If you send your render-output to a MultiLayer OpenEXR file, all of the named render-layers will be included in that file, each separately under its own name. When you subsequently reference that file in a File Input node, you can select any of the RenderLayers, by name, that the file contains. All of the information has been captured in a high-resolution numeric format (so-called “HDRI”), with no “loss” or compression. This is not an “image” file, and it’s not “small.” It is a high-resolution, self-describing, numeric dataset that is specifically designed (originally by Industrial Light & Disney :wink: , no less …) for storing CG data. (There’s a small Python sketch of reading one of these files back in, just after this list.)
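A rough sketch of pulling a render layer back out of a MultiLayer file in the compositor (2.7x Python API; the file path and layer name are hypothetical):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree

# Load a previously saved MultiLayer OpenEXR file (hypothetical path).
img = bpy.data.images.load('//renders/shot.exr')

node = tree.nodes.new('CompositorNodeImage')
node.image = img
# For a multilayer image, the node exposes the stored RenderLayers by
# name, and each layer's passes appear as separate output sockets.
node.layer = 'head'   # assumes a layer named "head" exists in the file
```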

Don’t expect to be able to “render this-layer at this-resolution and that-layer at a different one.” (I think that this would become quite unmanageable, very quickly …) But you can use Blender’s “linked libraries” feature to do something along these lines, by linking to objects, cameras, geometry, lights and so-forth in a (Blender …) file from multiple “output-generating” files which specify different render settings.

Each render layer can have its own sample count. In my screenshot it says “Samples: 0”, and 0 means it uses the global samples set in Render properties -> Sampling.
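In Python terms, a quick sketch (2.7x API; the layer names and numbers are just examples):

```python
import bpy

scene = bpy.context.scene
scene.cycles.samples = 300                  # global Cycles sample count

scene.render.layers['hair'].samples = 1000  # per-layer override
scene.render.layers['head'].samples = 0     # 0 = use the global setting
```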

You can render just one render layer: there is a pin button next to the render layer name. Push that and only the active render layer is rendered. You could also disable the other render layers.
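The pin button maps to a render setting, so the same thing scripted (again a 2.7x sketch, assuming a render layer named “hair”):

```python
import bpy

scene = bpy.context.scene

# Make "hair" the active render layer, then render only that layer
# (this is what the pin button toggles).
layers = scene.render.layers
layers.active_index = layers.find('hair')
scene.render.use_single_layer = True
```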

The composite image is whatever is fed into the Composite output node.
For example, you don’t have to render the scene at all: if you enable compositing but remove the Render Layers node, the scene isn’t rendered. You could then drag and drop a multilayer EXR image (or other image/video files) into the compositor and use that as input.
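A sketch of that compositing-without-rendering setup (2.7x API; the EXR path is just an example):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()  # no Render Layers node, so the scene isn't re-rendered

img = bpy.data.images.load('//renders/shot.exr')   # hypothetical path
img_node = tree.nodes.new('CompositorNodeImage')
img_node.image = img

composite = tree.nodes.new('CompositorNodeComposite')
tree.links.new(img_node.outputs['Image'], composite.inputs['Image'])
```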

If you rendered an animation to image files, you can set the input image to be an image sequence. That means you can set up the node tree using one or a few frames, then render the whole composited animation out.
That is where it starts to get really neat, because you can do things like track a point in the camera tracker and bind a mask to it. In the compositor you can then include that mask, which now moves as the frames change, letting you do whatever you want in that masked area.
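A sketch of the image-sequence input (2.7x API; the frame count and path are examples):

```python
import bpy

tree = bpy.context.scene.node_tree

# Load the first file of a rendered sequence and treat it as a sequence.
img = bpy.data.images.load('//renders/frame_0001.png')  # hypothetical
img.source = 'SEQUENCE'

node = tree.nodes.new('CompositorNodeImage')
node.image = img
node.frame_duration = 250     # number of frames in the sequence
node.frame_start = 1
node.use_auto_refresh = True  # update the image as the frame changes
```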

You can view the render result and the Viewer node output in the UV/Image Editor, and from there you can also save them directly. The backdrop shows whatever is connected to the Viewer node, but I don’t use it; showing the result in the UV/Image Editor is clearer, easily shows pixel/image info, and gives access to scopes.
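And if you’d rather save from a script than from the UV/Image Editor, a one-liner sketch (the output path is just an example):

```python
import bpy

# After rendering, the composite lives in the "Render Result" image;
# save_render writes it out using the scene's output format settings.
bpy.data.images['Render Result'].save_render(filepath='/tmp/composite.png')
```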

Thanks so much. I had a bit of reading to do to understand what you’re saying, but I think I get it now and am really glad to have this input. I wasn’t terribly clear on the actual role of compositing, to be honest, but I’ve watched a few more really good tutorials and learned about a few more features that make sense. It’s similar to some of the techniques used in 2D art; it’s just done a bit differently.

You’re right. The word “layer” really is sort of hard to wrap your head around at first, with so many uses for loosely related things. And it is INDEED annoyingly similar to Photoshop’s layers and other 2D art concepts; I think that’s why it just wasn’t clicking for me. I like your explanation. Also, I didn’t know about linked libraries, and that bridges SO many gaps for me.