Help planning a live action lip-sync to audio

Hi, I have almost completed the 3D portion of a music video I’ve been working on.

I have also recorded the live-action lip-sync video that I want to add in.

I think my best bet is to try to composite the video into each scene one at a time.

I have the 3D elements of all the scenes set up in the VSE, and they play back in sync with the audio. However, the live-action video will need to be cut up to match the scenes before I can do the compositing.

Any ideas on the best way to extract the frames from the live action so that they line up with the existing audio?

The only thing I can come up with is to create a new VSE scene that’s a copy of the existing one, remove everything but the audio, add the live-action clips with their audio, and use the waveform display on the live-action strips to line things up with the existing audio clip.
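If you end up scripting the alignment rather than eyeballing the waveforms, the offset you read between the two audio tracks can be converted into a strip offset in frames. A minimal sketch, where the 0.35-second offset and 24 fps frame rate are purely illustrative values:

```python
# Convert a measured audio offset (in seconds) into a VSE frame offset.
# The offset and frame rate below are example values, not project settings.

def offset_to_frames(offset_seconds, fps):
    """Round the audio offset to the nearest whole frame."""
    return round(offset_seconds * fps)

frame_offset = offset_to_frames(0.35, 24)
print(frame_offset)  # → 8, so the live-action strip starts 8 frames late
```

The rounding matters because the VSE positions strips on whole frames; any sub-frame remainder is audio drift you simply have to accept at that frame rate.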

Then I can render out PNG files at the correct scene start and end points for each scene they are going into.
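One handy detail here: Blender names rendered frames with the zero-padded frame number, so if the render range uses the scene’s own start and end frames, the filenames already encode where each image belongs on the timeline. A quick sketch of that mapping, assuming the default `####.png` naming and an example frame range of 101–105:

```python
# Generate Blender-style zero-padded filenames for a scene's frame range,
# showing how the numbering lines up with the scene's timeline.
# The frame range 101..105 is an illustrative example.

def frame_filenames(frame_start, frame_end, pad=4, ext="png"):
    return [f"{f:0{pad}d}.{ext}" for f in range(frame_start, frame_end + 1)]

print(frame_filenames(101, 105))
# → ['0101.png', '0102.png', '0103.png', '0104.png', '0105.png']
```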

Does it sound like I am on the right track, or is there a better way to do this?

Thanks

I hope this is the right forum for this question if not please move it where it needs to go.

Sounds pretty good to me. Don’t forget that you can send the render output of a 3D scene (including compositor output) to another scene’s VSE via a scene strip. You can also perform an OpenGL export from there of all strips cut together.

Thanks 3pointEdit. I have extracted and composited the first bit of the live action and just rendered the PNG files. I tried to use the scene as a strip in the VSE, but it will not play back in real time. That’s not too surprising though.

Blender almost crashed on me after working in the VSE for a long time. I checked and Blender was using almost 13 GB of RAM out of the 16 GB on my system. I was patient and was able to get it to save the blend file and restart.

It makes me wonder if there is a memory leak in the VSE or if it is a function of the sequencer cache. I have the limit set to 8 GB, but evidently Blender did not respect it. Or, since I have two VSE scenes and was using one to line things up before adding clips to the main VSE scene, perhaps Blender uses a separate cache for each VSE scene.

Should I report this as a bug?

I would report it. It probably works correctly, but the memory usage might be related to the 3D scenes you have open.

Just a follow up post.

I have extracted frames from several of the live-action movie clips using the process I described above, working scene by scene in smaller groups. One nice side benefit of this process is that when I render out a series of frames, the starting and ending frame numbers match the scene, so adding them in the VSE and lining them up perfectly is really easy.

Small bump just to say:

I posted the video using this process in the Finished Work Forum in case anyone is curious about the results.

Have a great day!