vse tutorials

i watched this tutorial, and i’ve also seen some others like it.

it’s a nice introduction to the vse, but these tutorials treat the vse kind of like a separate app.

i would be more interested in the best workflow for doing a complete 3d animated movie in blender, like the open movies did: how the vse plays together with the rest of blender, how to use scene strips, what proxies are and how to use them, how the whole rendering pipeline works then…

is there some introduction tutorial for that? :slight_smile:

While I don’t have much regarding the 3d pipeline, my tutorials (in sig below) do cover more integration with other areas of blender.

thanks! i will look into them.

(actually i would need just a short description of how it is supposed to work. the details i could figure out myself.)

You make an animated shot in a blender scene. This can be brought into your separate vse editing scene as a scene strip. Here you trim the shot for timing while watching an opengl preview. Then when all your scene strips are trimmed in the right order you render out.

That’s the basic version.
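
If it helps to see it as a script, here’s a rough bpy sketch of that setup (untested, and the scene names are just made up):

```python
import bpy

# hypothetical names: "Shot_01" is the animated 3d scene,
# "Edit" is a scene used purely for vse editing
shot = bpy.data.scenes["Shot_01"]
edit = bpy.data.scenes["Edit"]

edit.sequence_editor_create()  # make sure the edit scene has a sequencer
strip = edit.sequence_editor.sequences.new_scene(
    name="Shot_01", scene=shot, channel=1, frame_start=1)

# trimming for timing happens on the strip, not in the source scene
strip.frame_offset_start = 10  # skip the shot's first 10 frames
```

The opengl preview part is just the sequencer preview drawing the scene strip without doing a full render.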

can i also render out single strips? and what exactly are proxies?

You can render out from a (source) scene if you need a specific re-render. The VSE uses the render range defined by its timeline’s start / end frames. You can of course change that and render from the VSE, but you won’t have a good filename convention.
Remember that renders take the output file name you define, so you could end up with a bunch of renders with the same filename prefix, just different frame numbers. That’s ok, unless you move the location of the strip later.
Proxies are low res versions of your scene strips. They play back smoothly on any machine, whereas your high res frames may not.
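
If you want to switch proxies on from a script, something like this should do it (a sketch; the scene name is hypothetical, and not every strip type has proxy settings):

```python
import bpy

edit = bpy.data.scenes["Edit"]  # hypothetical editing scene name

for strip in edit.sequence_editor.sequences_all:
    if not hasattr(strip, "use_proxy"):
        continue  # e.g. sound strips have no proxy settings
    strip.use_proxy = True        # creates the strip's proxy settings
    strip.proxy.build_25 = True   # build a 25%-size version
    strip.proxy.quality = 50      # jpeg quality of the proxy frames
```

After that you still build them from the sequencer’s strip menu (Rebuild Proxy) and set the preview to play back at 25% so the proxies actually get used.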

What I have missed out here is:

  1. Audio handling between source scene and master (you can have sound in the source scene to animate too).
  2. High res pre-rendering of shots (you can create meta strips in the VSE and swap between the OpenGL version and the high res master frames; see the sketch just after this list).
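
For point 2, the swap can be as simple as muting the OpenGL scene strip and dropping the rendered frames on the channel above it. A rough, untested sketch (names, path and frame count are made up):

```python
import bpy

edit = bpy.data.scenes["Edit"]
sed = edit.sequence_editor

gl = sed.sequences_all["Shot_01"]  # the opengl scene strip
gl.mute = True                     # keep it around, just hidden

# add the pre-rendered high res frames as an image sequence strip
hires = sed.sequences.new_image(
    name="Shot_01_hires",
    filepath="//render/shot_01/frame_0001.png",
    channel=gl.channel + 1,
    frame_start=int(gl.frame_start))
for f in range(2, 251):  # append the rest of a 250-frame shot
    hires.elements.append("frame_%04d.png" % f)
```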

Also, people do suggest that you render frame sequences from Blender for quality’s sake, then combine your sound and vision elsewhere (muxing). This is because Blender’s built-in movie renderer can be difficult to get good results from (ffmpeg implementation issues).
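
In practice that two-step render can look something like this (a sketch; it assumes ffmpeg is installed and on your PATH, and the paths are made up):

```python
import bpy
import subprocess

scn = bpy.context.scene

# 1) render the edit as a frame sequence instead of a movie file
scn.render.image_settings.file_format = 'PNG'
scn.render.filepath = "//render/final/frame_"
bpy.ops.render.render(animation=True)

# 2) mix the sequencer audio down to a single file
bpy.ops.sound.mixdown(filepath="//render/final/audio.wav",
                      container='WAV', codec='PCM')

# 3) mux vision and sound outside blender; run this from the
#    blend file's directory, ffmpeg can't resolve // paths
subprocess.run([
    "ffmpeg",
    "-framerate", str(scn.render.fps),
    "-i", "render/final/frame_%04d.png",
    "-i", "render/final/audio.wav",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "-c:a", "aac",
    "final.mp4",
])
```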

thanks! it’s a bit clearer now.

what still isn’t really clear to me is how the compositor plays into all of this.

i see how rendering frame sequences is important. but in a typical workflow i imagine i would often want to adjust the compositing long after rendering a frame sequence. is that possible?

so far when i used the compositor it always did its thing right after rendering each frame. :slight_smile:

You can render to a single file format that supports layers, exr for example; you simply render your passes into that file. Then later you substitute this layered image file for the 3d scene. You can still access all the passes, and it’s quicker to do comping with.
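
If you want to see the idea in script form, here’s a sketch (scene names and the path are made up, and the exact pass setup varies between blender versions):

```python
import bpy

# render the shot to a multilayer exr so every pass lands in one file
shot = bpy.data.scenes["Shot_01"]
shot.render.image_settings.file_format = 'OPEN_EXR_MULTILAYER'
shot.render.filepath = "//render/shot_01/frame_"

# later, comp from the file instead of the live 3d scene
comp = bpy.data.scenes["Comp"]
comp.use_nodes = True
tree = comp.node_tree

img = tree.nodes.new("CompositorNodeImage")
img.image = bpy.data.images.load("//render/shot_01/frame_0001.exr")
# a multilayer exr exposes its render passes as outputs on this node

out = tree.nodes.get("Composite") or tree.nodes.new("CompositorNodeComposite")
tree.links.new(img.outputs["Image"], out.inputs["Image"])
```

For a whole sequence you’d set the image’s source to 'SEQUENCE' and give the node a frame range, but the single frame shows the idea.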

thanks! i will experiment with all of this. :slight_smile: