Integrating compositor and sequencer: what do you think?

Just noticed this on my g+ feed:

what do you guys think?

Not the first to suggest any of this. Ton doesn’t like it, as his concept of the VSE is a string-out finisher: all the fancy art stuff from the compositor happens before editing.

The workaround is to do your composite in its own separate scene and insert that. Making this a matter of a few clicks would be super-keen: select the clips, run “Create Composite Scene From Clips”, and Blender inserts a new scene of the appropriate length in place of the relevant footage, with that footage wired into the compositor’s node inputs. It’s hacky, but workable as long as you’ve done your edit.
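Roughly what that operator could do, sketched in bpy. Everything here is an assumption: the function name is invented, it only handles movie strips, and it ignores audio and strip offsets.

```python
import bpy

def create_composite_scene_from_clips(context):
    """Replace the selected movie strips with a scene strip whose
    scene feeds the same footage into its compositor tree."""
    edit_scene = context.scene
    strips = [s for s in context.selected_sequences if s.type == 'MOVIE']
    if not strips:
        return None

    start = min(s.frame_final_start for s in strips)
    end = max(s.frame_final_end for s in strips)

    # New scene sized to the edited range, with compositing nodes enabled.
    comp = bpy.data.scenes.new("Composite")
    comp.frame_start = 1
    comp.frame_end = end - start
    comp.use_nodes = True

    # Wire each clip into the node tree as a Movie Clip input node.
    for i, strip in enumerate(strips):
        clip = bpy.data.movieclips.load(bpy.path.abspath(strip.filepath))
        node = comp.node_tree.nodes.new('CompositorNodeMovieClip')
        node.clip = clip
        node.location = (0, -300 * i)

    # Drop a scene strip into the edit where the footage was.
    channel = max(s.channel for s in strips) + 1
    edit_scene.sequence_editor.sequences.new_scene(
        "Composite", comp, channel, start)
    return comp
```

Hooked up to an operator and a menu entry, that’s more or less the few-clicks version (minus removing the originals).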

I’d find it a much more flexible workflow to be able to send clips or entire channels through a composite chain as the equivalent of a custom Effect, or even the ability to send ALL the channels through a node setup, analogous to a mastering chain in audio. Other software does this (e.g. Sony Vegas, any digital audio workstation) on a per-clip, per-channel and per-everything basis. It allows for a lot more freedom and flexibility and much less stuffing around going from screen to screen incrementing and decrementing parameters.

Codewise it’s possibly a huge architectural annoyance to get the VSE feeding data into the compositor; presumably you’d need special input/output nodes (“VSE Clip/Channel/Master Input” and “VSE Clip/Channel/Master Output”) and a different node-tree type alongside materials and textures. It would increase the VSE’s overall usefulness a lot, I think, but it’s not exactly low-hanging fruit from a feature/development perspective, and I recognise that fully.
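The new tree type, at least, can be mocked up today with the Python custom-nodes API; registering something like the sketch below gives you a “VSE Nodes” tree in the Node Editor, though the actual pixel processing would still have to happen in C. All identifiers are invented.

```python
import bpy
from bpy.types import Node, NodeTree

class VSENodeTree(NodeTree):
    """Hypothetical per-strip effect tree, alongside shader/compositor trees."""
    bl_idname = 'VSENodeTree'
    bl_label = 'VSE Nodes'
    bl_icon = 'SEQUENCE'

class VSEInputNode(Node):
    """Where the strip's pixels would enter the tree."""
    bl_idname = 'VSEInputNode'
    bl_label = 'VSE Clip Input'

    @classmethod
    def poll(cls, ntree):
        return ntree.bl_idname == 'VSENodeTree'

    def init(self, context):
        self.outputs.new('NodeSocketColor', 'Image')

class VSEOutputNode(Node):
    """Where the processed pixels would return to the strip."""
    bl_idname = 'VSEOutputNode'
    bl_label = 'VSE Clip Output'

    @classmethod
    def poll(cls, ntree):
        return ntree.bl_idname == 'VSENodeTree'

    def init(self, context):
        self.inputs.new('NodeSocketColor', 'Image')

def register():
    for cls in (VSENodeTree, VSEInputNode, VSEOutputNode):
        bpy.utils.register_class(cls)
```

After registering, `bpy.data.node_groups.new("Strip FX", 'VSENodeTree')` gives you a tree you can populate; the hard part is everything this sketch doesn’t do.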

Still would be really nice to see. Nodes are nifty.

I second this. For a VFX workflow this would be really useful. In the same way that you can click Use Nodes in the materials, you could click Use Nodes in the effect strip properties. That way, the effect strip would only be there to animate when your effects chain is applied and when it is bypassed. Similar suggestions have of course been made for the modifier stack. I don’t know why Ton doesn’t like it; he usually turns out to have a good reason, I just don’t know it.

Many things that seem very straightforward, makes-sense stuff to do, are actually harder to implement in the underlying code. That said, if someone has a good idea of how it should look and operate, we could mock up some ideas for the workflow.

We have/had the code for this. Brecht wrote this feature already, though I couldn’t find it in our tracker (checked quickly).

Oh, very interesting - I suppose if it becomes necessary in Gooseberry, then we might see it again?

I’d be interested in pinging this off someone in the know about Compositor and VSE to see how architecturally difficult this is.

Strip-only workflow would be something like:

New:

  • Select video strip(s) to add effect to in VSE
  • Add menu > “Node Effect Strip” > Create new…
  • Info panel in Properties menu contains Edit Strip, Blend, Opacity, Channel, etc - just like current setup. Only panel that really changes for this is “Effect Strip” which might have a single-button shortcut to get to the Node Editor.
  • From the Node Editor, switch to “Node Effect Strip” mode. The “Backdrop” preview data is sourced from the cursor position within the VSE, or from the start/end of the clip depending on which the cursor is closest to. It should be possible to run the compositor and VSE in parallel windows this way.
  • The node tree itself should contain a VSE Input node and a VSE Output node, analogous to the materials/compositor setup.
  • A Node Effect Strip may contain anything normally at the disposal of the compositor, including reusable node groups.

Existing:

  • Select video strip(s) to add effect to in VSE
  • Add menu > “VSE Node Strip” > Select from list
  • Blender either adds the strip or not, depending on how many inputs it needs. If the selection doesn’t have the right number, it moans - just like it does with the built-in Effect Strips (sketch below).
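The moaning part is easy to picture; a hypothetical add operator could validate the selection the same way the built-in effect strips do:

```python
import bpy

class SEQUENCER_OT_add_node_effect(bpy.types.Operator):
    """Hypothetical Add menu entry for a node effect strip."""
    bl_idname = "sequencer.add_node_effect"
    bl_label = "Node Effect Strip"

    def execute(self, context):
        selected = context.selected_sequences or []
        needed = 1  # would depend on how many VSE Input nodes the tree has
        if len(selected) != needed:
            self.report({'ERROR'}, "Node effect expects %d input strip(s), "
                                   "got %d" % (needed, len(selected)))
            return {'CANCELLED'}
        # ...create the effect strip and attach its node tree here...
        return {'FINISHED'}
```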

Pertinent Questions:

  • can the VSE internally handle formats like OpenEXR Multilayer, or is it strictly RGBA only? This impacts what the VSE Input/Output nodes would be capable of.
  • what’s the expected performance impact? will it consume so much memory/CPU that it’s not even worth…

Oh.

Well, there it is.

Point 1. wait till Troy reads this… whoo boy. He would argue that the VSE is for timing decisions, not value-adding. There are also concerns about the quality of output from the VSE/FFmpeg.

Point 2. the usefulness of nodes in the VSE would be in integrating/combining multiple strips; otherwise the masks and modifiers are OK for color grading work. I’m not sure the proposal really solves this.

Could I suggest: a VSE strip that collects the sources below it, then reports the result as an output. In the compositor, the strip would appear as an EXR-type multiple-source node. This would mean that its media would receive timing from the VSE.

This timing issue is the crux of the problem, however. How do you sync current-frame values for strips in the master VSE scene AND ghost those properties to the input of the compositor in another scene? Will the depsgraph allow this?
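For what it’s worth, one direction of that sync can be faked today with a frame-change handler that maps the edit scene’s frame into the composite scene through the scene strip. Scene and strip names below are assumptions, it ignores strip offsets, and a proper solution would live in the depsgraph rather than in Python:

```python
import bpy
from bpy.app.handlers import persistent

@persistent
def sync_comp_frame(scene, depsgraph=None):
    # Only react to the master edit scene ("Edit" is an assumed name).
    if scene.name != "Edit" or scene.sequence_editor is None:
        return
    strip = scene.sequence_editor.sequences_all.get("Composite")
    comp = bpy.data.scenes.get("Composite")
    if strip is None or comp is None:
        return
    # Map the edit playhead to the local frame inside the comp scene.
    comp.frame_current = (scene.frame_current
                          - strip.frame_start
                          + comp.frame_start)

bpy.app.handlers.frame_change_post.append(sync_comp_frame)
```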

Also, it would be useful if VSE strips were accessible elsewhere as datablocks, available to textures and the compositor. But I gather that is the hope for Sergey’s Movie Clips: they are generally more available throughout the UV/Image editor and compositor, even the VSE. Sadly they don’t have audio integration or frame handling (offsets etc.).

EDIT:

I think that once the compositor gets canvas-aware (interactive image display) it would be a nice fit for image editing in the VSE, like scale and translate. However, that brings up two issues.
Benefit: code one good tool, don’t duplicate tools.
Drawback: speed of review; the VSE is fast, nodes are s l o w. It would be nice to have realtime playback via dynamic scaling/frame dropping to reduce CPU/memory overhead.
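Until something like that exists, the nearest thing to dynamic scaling is the proxy system; this builds reduced-size proxies for every movie strip, after which the preview can be switched to 25%/50% in its view settings (run it from a sequencer context):

```python
import bpy

# Build 25%/50% proxies for all movie strips as a rough stand-in for
# real frame-dropping, which would need C-side support.
scene = bpy.context.scene
for strip in scene.sequence_editor.sequences_all:
    if strip.type == 'MOVIE':
        strip.use_proxy = True
        strip.proxy.build_25 = True
        strip.proxy.build_50 = True

bpy.ops.sequencer.rebuild_proxy()  # writes the proxy files to disk
```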

Yep, regretfully enough this appears to be the case. In the Reddit AMA session he stated that the purpose of the compositor is to focus on the frame, while the focus of the sequencer is on the shot.

Could you please be more specific? Code for what? Integrating the sequencer with the compositor?

I found the original proposal interesting, but it should be noted that it’s already possible to do most of what he covers through add-ons. It’s kinda hackish sometimes, but it works.
Of course, add-ons are not the way to go; the functionality should be integrated into the program.
I guess that as a community we should come up with a solid proposal, then lobby and hope it gets picked up.

That would also mean the composite has to be routed back to the VSE to replace the original sources and render out the final video (unless one chooses to mux in an external app).
I’m not sure that collecting all the sources below would be an ideal solution; I tend to think it should work on a per-strip basis.
Ideally the VSE strips would be converted to image frames (such as EXRs, as you mentioned), which are then automatically fed to the compositor. Upon completion, the composited strips travel back to the sequencer as image sequences and replace the original video strips. While this workflow is already possible in discrete steps, a single button would be very convenient 🙂
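Most of that single button is scriptable already. A compressed sketch, with paths and names as assumptions; it renders whatever the sequencer shows over the strip’s range (so solo the strip first), and the return trip into the VSE is left out:

```python
import bpy

def strip_to_exr(context, strip, out_dir="//comp_src/"):
    """Render one strip's range out as an EXR frame sequence."""
    scene = context.scene
    scene.frame_start = strip.frame_final_start
    scene.frame_end = strip.frame_final_end - 1
    scene.render.filepath = out_dir + strip.name + "_"
    scene.render.image_settings.file_format = 'OPEN_EXR'
    bpy.ops.render.render(animation=True)

def exr_into_compositor(comp_scene, first_frame_path, length):
    """Point a compositor Image node at the rendered sequence."""
    comp_scene.use_nodes = True
    img = bpy.data.images.load(first_frame_path)
    img.source = 'SEQUENCE'
    node = comp_scene.node_tree.nodes.new('CompositorNodeImage')
    node.image = img
    node.frame_duration = length
    return node
```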

My point is that the compositor would be used to combine shots, with the VSE determining their relative timing. The compositor scene’s output would be routed back to the collection strip.

I specifically avoid generating additional media and prefer to send timecode instructions between the scenes, so changes to VSE timing are reflected in the comp. Rendering a frame sequence may improve render speed and memory handling, but what happens when you need longer media?