Using VSE as a video editor -- can't combine scenes into a sequence?

Is it possible to use add/scene to add three VSE scenes to another sequence, then render that sequence to a video file?
If so, what am I likely to be doing wrong, so that I’m either getting a “no camera” error or my video is coming out blank?

Details:

I created a blend file with three scenes, each of which has nothing but a VSE sequence:

  • Intro
  • Body
  • Outtro

In all three of those sequence scenes, I dragged in a combination of audio and video files, then added effects (notably multicam, but also gamma fades). All three produce the expected video (MP4/H264/AAC) files, which play just fine.

To combine them, I created a scene called “ALL” and made a video sequence by adding those three scenes with Add / Scene. When I click “Animation” to render “ALL”, it says: No camera found in scene “Intro”. I added a camera to “ALL”, then selected it as the camera for each of the three scenes. Now I don’t get an error, but the resulting video is black. The audio’s fine, but there’s no video.

I am able to create a video by rendering each of the three scenes to video files, then dragging those videos to the VSE in yet another scene (“ALL from mp4”), and that works. Doing it that way will reduce the output quality, though, unless I render each of the scenes to a lossless format, which is going to use a lot of disk space.
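To put a rough number on the disk-space concern, here’s a back-of-envelope calculation (assuming 8-bit RGB at 3 bytes per pixel; HuffYUV compresses losslessly on top of this, often somewhere around 2:1 depending on content, so actual files would be smaller):

```python
def raw_video_rate_mb_s(width: int, height: int, fps: float,
                        bytes_per_pixel: int = 3) -> float:
    """Uncompressed video data rate in MB/s (video only, ignoring audio)."""
    return width * height * bytes_per_pixel * fps / 1_000_000

# 1080p at 30 fps, 8-bit RGB:
rate = raw_video_rate_mb_s(1920, 1080, 30)
print(f"{rate:.0f} MB/s, {rate * 60 / 1000:.1f} GB per minute")
# → 187 MB/s, 11.2 GB per minute
```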

What I’ve tried so far:

  • Searched these forums and Google every way I can think of
  • Made sure every scene has the same resolution, frame rate and output format settings
  • Made sure every scene has “Sequencer” checked under “Post Processing”
  • Un-checked “Compositor” for every scene in “Post Processing”

No joy so far.

Can you see anything in the VSE preview window of your master scene? Can you render a still frame?

The preview window shows black for tracks containing inserted scenes, and medium gray for tracks containing only effects. Individual still frames render as all black.

Attempting to simplify the test case, here’s what I did:

Um, well, there was a list of what I did, but somehow I got logged out, and the whole thing (several paragraphs and point form lists) got lost except that one line of text. I should have copied everything to the clipboard before submitting – live and learn.

So here’s the brief version. Files are on Google Drive at http://goo.gl/dXjR6p – vseMulti.blend and two WMV videos. I created a fresh blend file, then created three scenes: Scene 1, Scene 2, Master. I dragged a WMV file into each of Scene 1 and Scene 2, and added those scenes to Master, with Scene 2 starting after Scene 1. Both scenes render OK on their own, but when I click “Animation” for Master, it says no camera found. Adding cameras (not included in the files above) allows it to create a file, but the video’s all black (audio is fine).

This is just about the simplest possible case involving a scene being added to another scene in VSE. The only additional simplification would be to create only Scene.001 and Master. Trying that now… same thing. I added evenSimpler.blend to the link above. It requires only scene1.wmv.

Oh sorry, I just re-read your post. The VSE is not designed to access another scene’s VSE output. In fact, the VSE output is not available to any other tool in Blender.

You can, however, get audio from another scene. The VSE serves three purposes:

  1. Edit together external media and renders from Blender.
  2. Edit together animation from another scene’s 3D view or Compositor output.
  3. Create an audio sync track for animation, e.g. music or voices.

So you could have voice talent in a scene with an animated character lip syncing to the audio, then cut that down in another Master VSE scene. But you cannot access media from that scene’s VSE.

This functionality was removed from Blender back in 2.49 days and is not considered a bug.

OK. I think I can come up with a workflow using meta strips to sort-of accomplish the same thing. It just means that if I want to modify a scene I have to un-meta it, modify it, re-meta it, then copy it to the master sequence.

Thanks for the clear answer.

BTW, check out the blog in my sig below for more tips on using the VSE to edit video. Good luck.

Thanks. I had already spent some time on the blog, and I had stumbled on some of your YouTube videos. I just subscribed to your channel and looked at “Subclip example for VSE”. I’ll definitely try the “jump to cut” add-on today.

Here’s my story. Maybe what I was trying to achieve with VSE-scene-within-VSE-scene is best done some other way.

We needed a “quick product tour” video at the company where I work, to post on our new YouTube channel, so the obvious thing would be to get a sales guy to demo the product while we roll cameras. The sales guys are of course fully occupied selling (and usually out of the office), so it came down to me (computer programmer) as writer / on-screen talent / editor / 3D animator, and our IT guy (who’s also an accomplished photographer) as director / cinematographer / lighting director. It’s an interesting change from what I normally do all day, but it’s definitely a challenge.

I downloaded Blender 2.70 the Friday before last, and surprised myself by managing to rattle off a usable logo animation on the first afternoon. I had spent a day trying to learn Blender 2.40 (I think it was) some years back, and found it completely non-intuitive. This time, with 2.70, was much easier! It took a few more days to figure out rendering for a product model exported from SolidWorks (lots of re-texturing) and to come up with an animated title sequence. While I was using Blender for that, I had a look at the VSE, and particularly liked the multi-camera thing. We were thinking of using Adobe Premiere CC, but we’re giving Blender a chance, at least for our first 1-2 videos.

3pointEdit, here’s where it starts to look like the stuff on your YouTube channel.

We set up a cheap studio in a spare office, and spent a couple of hours yesterday rolling a couple of cameras, an audio recorder and a screen capture from the product itself (using ffmpeg / x11grab, which I had to tweak a bit to make it work over a LAN), while I struggled in agony to get my lines, if not right, then at least sort-of acceptable. A real broadcast personality would no doubt have done it in 1-2 takes, but I’m not that guy.

So today I’m looking at a timeline in VSE with tons of clips on it, which I have synchronized mostly by sound (the screen capture had no audio, so it’s painstakingly aligned by looking for changes on the screen in one of the camera feeds).

That part was really quick and easy for the audio tracks, so Premiere CC’s automatic audio alignment feature doesn’t seem like much of a time-saver. You just need something else to do while you’re waiting after checking “draw waveform” each time (such as typing what you’re reading now). Next time, just before I clap my hands I’ll say something like “time synchronization, instrument says 12:34:56”, so I can at least get the screen capture within a second (and probably more like 100ms) without so much effort, using the clap sounds.
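In case anyone wants to automate that step, the clap alignment amounts to a cross-correlation. Here’s a minimal sketch in plain Python (the function name and toy data are mine; for real tracks you’d feed in a few seconds of samples around the clap from each recording, then divide the result by the sample rate to get seconds):

```python
def clap_offset(ref, other):
    """Return the lag (in samples) at which `other` best lines up with `ref`.

    A positive result means `other` starts that many samples later than `ref`.
    Brute force is fine for a short window around the clap.
    """
    best_lag, best_score = 0, float("-inf")
    for lag in range(-len(ref) + 1, len(other)):
        score = sum(
            ref[i] * other[i + lag]
            for i in range(len(ref))
            if 0 <= i + lag < len(other)
        )
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Toy example: a "clap" spike at sample 2 in one track, sample 5 in the other.
ref = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
other = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0]
print(clap_offset(ref, other))  # → 3
```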

The main pain to this point was that one of my cameras can’t record 1080p-30, instead recording 1080p-60, so I had to transcode it to an almost-lossless format with a frame-rate change, in order to line it up in Blender. I used ffmpeg for that, because I couldn’t figure out how to get Blender to skip every other frame for just one track.
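For reference, the ffmpeg invocation for that kind of retime can be built like this (a sketch, not my exact command: `retime_cmd` and the file names are made up, and it assumes ffmpeg’s `fps` filter for the frame drop plus HuffYUV video and PCM audio for a near-lossless intermediate):

```python
def retime_cmd(src, dst, fps=30):
    """Build an ffmpeg command that takes a 60p clip down to `fps`,
    writing a near-lossless intermediate for editing."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"fps={fps}",   # drop (or duplicate) frames to hit the target rate
        "-c:v", "huffyuv",     # lossless intermediate video
        "-c:a", "pcm_s16le",   # uncompressed audio
        dst,
    ]

cmd = retime_cmd("cam2_1080p60.mov", "cam2_1080p30.mkv")
print(" ".join(cmd))
```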

Anyhow, now I have all these time-synchronized tracks, and I need to:

  • isolate the parts that are going in the final video, which might be several clips from one video
  • use the multi-camera selector to switch between cameras as necessary during each of those
  • maybe use some transitions at those camera changes
  • get all the thus-processed tracks onto a VSE sequence (this requirement is what prompted my post here)
  • add my intro sequence, some overlaid animations and an exit sequence
  • add transitions
  • save video

I will look at whether “jump to cut” can do what I need. I read the “VSE scripts you can’t live without!” article on your blog, then installed “extra sequencer actions” and “jump to cut”. To get something out the door for review today, though, I think I’ll wind up animating each excerpt to MKV/HuffYUV/PCM, then setting up a video sequence to combine them. Once I learn some more, I hope to eliminate the need for giant lossless intermediate video files.

I also need to do something with the audio, but that’s another story. I’m hoping we don’t have to re-shoot. I should have used a shotgun microphone, I think, to get rid of echo and diminish fan noise from the product being demonstrated.

  1. A lapel mic or a very close mic is best, as the inverse square law means the voice should overwhelm background noise. But cabling or dual recording can be an issue.

  2. Good on you for trying the VSE. But this would go together much more easily in a dedicated editing tool.

  3. I suggest that you use the addon http://blendervse.wordpress.com/2012/09/10/vse-blender-addon-video-editing/ and use this technique http://blendervse.wordpress.com/2012/09/04/vse-navigating-your-clips/

  4. To take 60 fps down to 30 fps in the VSE, just use the Speed effect, although Blender should conform the frame rate to match the project settings. It didn’t always do this.

Further, if you are synchronizing non-sync elements that have differing frame rates, I would suggest pointing all the cameras at the instrument (screen cap) and performing a large change, like closing a window. At the same time, do a countdown for the audio recording, leaving the clap for the 2 count. The number 2 is a sharp, explosive sound that makes frame hunting easier than going on 1, which is softer. Remember not to stop recording at this point, to preserve the sync between recorders.
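If you do record a spoken instrument time, converting it to a frame number is simple arithmetic (a quick sketch; the helper name is made up, and it assumes a whole-second HH:MM:SS reading and a constant frame rate):

```python
def timecode_to_frame(tc: str, fps: int = 30) -> int:
    """Convert an 'HH:MM:SS' timecode to a frame index at `fps`."""
    h, m, s = (int(part) for part in tc.split(":"))
    return (h * 3600 + m * 60 + s) * fps

print(timecode_to_frame("12:34:56"))  # → 1358880
```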

The big problem with a lapel mic is that I don’t have one. I do have a shotgun mic, and I think it would do better than the omnidirectional stereo mics built into the Tascam DR-05 recorder I used.

I’m also very slightly concerned that if I’m my normal fidgety self, a lapel mic might pick up a lot of rustling clothing and that kind of thing.

In terms of ease-of-use, Blender is a HUGE step up from AviSynth, which I used for this video :eyebrowlift::

Of course I wasn’t on company time when I did that one.

The problem with cutting on import is that I need to synchronize a bunch of tracks based on a clap that might have happened ten minutes before the scene I’m cutting.

To get going yesterday, I did it the horrible way:

  • synchronize everything in one big sequence (shown in an earlier post)
  • select start and end points of a scene
  • animate to MKV/HuffYUV/PCM (lossless)
  • repeat for each scene
  • convert all of those files to an almost-lossless Xvid format, because when I import the HuffYUV files into Blender, it only picks up the first 100 frames, and I didn’t want to take the time to figure out why just now
  • make a second sequence, in a separate blend file, to bring in all those clips, and similarly animated 3D clips, and some other sound effects, and whatever, to make the final video

I think I can get rid of the intermediate files using some of the extensions you suggested (thanks for those!):

  • select start point of a scene
  • select all strips through cursor
  • K to break
  • same for the end point of a scene
  • stick the cursor in the middle of the scene
  • select all through cursor
  • control-G make meta
  • control-C copy to clipboard
  • go to other sequence in the same blend file
  • control-V to paste
  • repeat for every scene
  • add effects, sounds, etc. in that second sequence

I plan to try that tomorrow (back to my normal programming work today).

So maybe they’ll implement automatic speed adjustment in a future release?

All good suggestions, particularly that I should bang on a key on the test set to make the screen do a big change, right before or right after the clap. I think only one camera needs to be aimed at the instrument, though, not all of them.

I found a simple hand-clap got me synchronized well enough that I couldn’t hear any echo at all when I played all three soundtracks together. That surprised me.

Did you know that blender can cut a multicam stack of sequenced strips? You could sync them all then meta them together. Then cut the meta to length. Then unmeta the strip and perform a multicam switch to make it more attractive.

I’ll be getting back to the issue under discussion here, but meanwhile we kind-of switched gears, deciding to make a relatively simple “welcome to our channel” video the first thing on our YouTube channel. No synchronizing of recorded audio and video at all – just Blender models, still images and a bit of GoPro video I shot on the weekend.

I just finished that one (https://www.youtube.com/watch?v=BOnnXQ8vnXA), so I’ll be back to the other style of video in a few days (doing some computer programming in between).

I’ve been playing with that sort of method, and yes, that’s more-or-less what I’m finding works. I have an additional step at the front, with a multicam selector applied before the meta step, for camera or video-source changes that happen within a particular clip.