Compositing versus the Video Sequence Editor for combining a video file with my own animation

I need to make a video where I take a movie file and add some of my own Blender animation to it. Last time I did this, I did the whole thing as a standard Blender animation, with a “screen” object in my scene that had a video texture. I realise this is not optimal and that there are at least two better ways to do it: compositing, or using the Video Sequence Editor (VSE). I have not used either. My question is: what are the pluses and minuses of each approach? Can one say that either is easier than the other for a novice to pick up?

I can only speak from my own experience: for whatever reason, when I composited in the VSE the image was unstable, but in the compositor it mixed perfectly.

The tracking's a bit off, but you can see what I've done. In the VSE the alien was shaking! I would prefer to do it all in the VSE because you don't have to bother with nodes.

Thanks, useful info!

If it’s just one video file with no editing involved, I would use the compositor. You have a lot more control over, well, compositing: colors, blending, blurs, general fine-tuning.
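For what it's worth, the whole compositor setup for that is only a handful of nodes: the footage as the background and your render layer keyed over it with Alpha Over. Here's a rough bpy sketch of the idea (the movie path is just a placeholder, and it assumes Blender 2.8+ where film_transparent exists):

```python
import bpy

# Minimal "render over footage" compositor setup (sketch, Blender 2.8+).
# "//footage/plate.mov" is a placeholder path - point it at your own movie file.
scene = bpy.context.scene
scene.use_nodes = True                  # turn on the compositor node tree
scene.render.film_transparent = True    # render the 3D scene with an alpha channel
tree = scene.node_tree
tree.nodes.clear()                      # start from an empty tree

movie = tree.nodes.new(type="CompositorNodeMovieClip")       # background plate
movie.clip = bpy.data.movieclips.load("//footage/plate.mov")

rlayers = tree.nodes.new(type="CompositorNodeRLayers")       # your 3D render
alpha_over = tree.nodes.new(type="CompositorNodeAlphaOver")
composite = tree.nodes.new(type="CompositorNodeComposite")

tree.links.new(movie.outputs["Image"], alpha_over.inputs[1])    # bottom layer
tree.links.new(rlayers.outputs["Image"], alpha_over.inputs[2])  # top layer
tree.links.new(alpha_over.outputs["Image"], composite.inputs["Image"])
```

From there you can slot color-balance, blur, or keying nodes anywhere in that chain, which is exactly the fine-tuning the VSE doesn't give you.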

If you need to edit videos, make cuts, etc., the VSE will be easier, though you won't have as much control over how the images are put together. If it's a simple task of overlaying a 3D something on a video, the VSE will work perfectly, but if you have complex masking or chroma keying, I don't think the VSE will work. I've personally never had issues with stability.
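If you go the VSE route, the equivalent overlay is literally two strips: the movie on channel 1 and a scene strip above it set to Alpha Over. A quick sketch, assuming your animation lives in a separate scene named "Animation" (the scene name and file path are placeholders, and exact API names can vary slightly between Blender versions):

```python
import bpy

edit_scene = bpy.context.scene
seq = edit_scene.sequence_editor_create()   # make sure a sequencer exists on this scene

# Channel 1: the movie file (placeholder path)
plate = seq.sequences.new_movie(
    name="Plate", filepath="//footage/plate.mov",
    channel=1, frame_start=1)

# Channel 2: the 3D animation, assumed to live in a scene named "Animation"
anim = bpy.data.scenes["Animation"]
anim.render.film_transparent = True         # render with alpha so the plate shows through
overlay = seq.sequences.new_scene(
    name="Animation", scene=anim, channel=2, frame_start=1)
overlay.blend_type = 'ALPHA_OVER'           # key the render over the footage by its alpha
```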

I think both are very easy and rather intuitive to understand, but if you're used to traditional NLEs the VSE might be more familiar to you.

Thanks - I don't have to edit the video much, and what editing I do need to do is probably easiest done outside of Blender. So it looks like the compositor is the clear option for me.

It's your choice … and here is a bright-line rule that might help you decide:

“Would knowledge of the 3D world help the computer make a better choice in this case?”

If the answer is “no,” then any video editor at all is fine, because the only world it knows is strictly two-dimensional. (It can't even distinguish between live video footage and 3D-generated content.)

However, if three-dimensional awareness of where the pixel at location (x, y) in a given bitmap actually came from would be a factor in making the best decision, then you need the Compositor, because only the Compositor has access to that sort of information, through render passes such as depth, normals and object indices.

(And of course, it is perfectly plausible that you might wind up using both tools at once. Whatever turns out to be the shortest distance between start and finish is absolutely fair game. All of Blender is your oyster, and now it is up to you to turn it into Oysters Rockefeller!)
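To make that “3D awareness” concrete: the Compositor can read per-pixel render passes that no 2D editor ever sees, depth being the obvious one. Here is a small bpy sketch that drives a Defocus blur from the Z pass (just one illustration of the idea, assuming Blender 2.8+):

```python
import bpy

scene = bpy.context.scene
scene.view_layers[0].use_pass_z = True    # expose per-pixel depth to the compositor
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()                        # start from an empty tree

rlayers = tree.nodes.new(type="CompositorNodeRLayers")
defocus = tree.nodes.new(type="CompositorNodeDefocus")
defocus.use_zbuffer = True                # blur amount follows real scene depth
composite = tree.nodes.new(type="CompositorNodeComposite")

tree.links.new(rlayers.outputs["Image"], defocus.inputs["Image"])
tree.links.new(rlayers.outputs["Depth"], defocus.inputs["Z"])
tree.links.new(defocus.outputs["Image"], composite.inputs["Image"])
```

A strip in the VSE has no Depth socket to plug into; it only ever sees the finished 2D pixels.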

Thanks, again very useful. The 3D information is indeed helpful for mixing my video file with the animation scene, and I suspect that because of this I would find it challenging to do in the sequencer.

Now I’m hungry for Oysters…