Feature request for NLE: dynamic slow-motion & post-pro motion blur

Hi, I just wondered if anyone here knows about slowmoVideo - http://slowmovideo.granjow.net/
It is open-source software that lets you slow down or speed up video (image sequences too), recalculating the in-between frames. It gives really impressive results!
It also has some obscure math voodoo to create effective motion blur, which is especially handy if you are speeding footage up (or if you forgot to enable it in Cycles ;))

I don’t know if the license permits it, but if some dev around here is willing to work on the NLE, this would be a worthy project IMO.
Remember the time when we didn’t have motion tracking? Well, this is something we’re missing in Blender in just the same way.

There are lots of implementations of this motion vector/motion estimation stuff…

After Effects has it natively when you retime footage, it was recently added to Final Cut Pro X, there’s the Kronos retiming plugin from The Foundry, and of course the ReelSmart Motion Blur plugin for After Effects…

When it works it’s great, but it often gets very ugly (especially on cuts, or when large areas move outside the frame…)

ReelSmart is pretty good: its masking functionality allows lots of manual control to fix problem areas, while the others are pretty basic!

On another note, when using Blender’s Defocus node, I’m wondering, now that we have an infill node, whether it could be improved for foreground objects going out of focus over background objects?

The foreground area could be “smart filled” with the background and the blurred foreground added over the top… that should be a bit more robust than the ugly artefacts we get now!
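Just to make the idea concrete, here’s a rough Python/OpenCV sketch of that “fill, then blur over the top” approach (purely illustrative, not anything Blender actually ships; in practice `fg_mask` would come from an object-index pass):

```python
import cv2
import numpy as np

def defocus_foreground(image, fg_mask, blur_radius=15):
    """image: BGR uint8 frame; fg_mask: uint8, 255 where the foreground is."""
    # 1. "Smart fill": reconstruct the background hidden behind the foreground.
    background = cv2.inpaint(image, fg_mask, 5, cv2.INPAINT_TELEA)

    # 2. Blur the frame and the mask; the blurred mask becomes a soft alpha,
    #    so the defocused edges spread over the reconstructed background.
    k = blur_radius * 2 + 1  # Gaussian kernel size must be odd
    blurred = cv2.GaussianBlur(image, (k, k), 0)
    alpha = cv2.GaussianBlur(fg_mask, (k, k), 0).astype(np.float32) / 255.0

    # 3. Composite the blurred foreground over the filled background.
    out = blurred * alpha[..., None] + background * (1.0 - alpha[..., None])
    return out.astype(np.uint8)
```

The fill doesn’t even have to be perfect, since it only shows through where the defocused foreground edge goes semi-transparent.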

There is actually surprisingly little math voodoo involved. It uses the same algorithms used in video compression. It determines the optical flow; that is, it finds patterns and computes how they move from frame to frame. In video compression this is used to decide what can be thrown away while still recovering a good approximation of the original. In slow motion, it is used to generate frames that were never there. It would be a good idea to try to implement this in the NLE, as a common complaint about the NLE is that it doesn’t play nicely with footage of mixed frame rates. Perhaps one could even convince the slomo people to contribute code.
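For the curious, “generating frames that were never there” is easy to sketch. Here is a toy Python/OpenCV version using Farneback optical flow (my own illustration of the principle, not slowmoVideo’s actual pipeline, which also has to deal with occlusions):

```python
import cv2
import numpy as np

def interpolate_frame(frame_a, frame_b, t):
    """Synthesise an in-between frame at time t (0 < t < 1)."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense optical flow: one (dx, dy) vector per pixel, from A towards B.
    flow = cv2.calcOpticalFlowFarneback(
        gray_a, gray_b, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    h, w = gray_a.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

    # Backward-warp A forward by t and B backward by (1 - t), then blend.
    # (Occlusion handling, which real implementations need, is omitted.)
    warped_a = cv2.remap(frame_a, xs - flow[..., 0] * t,
                         ys - flow[..., 1] * t, cv2.INTER_LINEAR)
    warped_b = cv2.remap(frame_b, xs + flow[..., 0] * (1 - t),
                         ys + flow[..., 1] * (1 - t), cv2.INTER_LINEAR)
    return cv2.addWeighted(warped_a, 1 - t, warped_b, t, 0)
```

To double the frame rate you would insert `interpolate_frame(a, b, 0.5)` between every pair of source frames; arbitrary retiming just means sampling other values of t.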

Where I said slomo people in my previous post, I should apparently have said the slomo guy. It was coded by one guy as his BSc thesis project. I just downloaded his thesis. Should be nice holiday reading.

@Michael W
Blender already has masking functionality in the Movie Clip Editor; it may come in useful for this purpose.
Do the solutions you mentioned feature the motion-blur trick? I tried it a bit and, used in a non-aggressive way, it gives good results, restoring convincing motion blur to rendered footage.
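The trick itself is simple to sketch: estimate the flow between neighbouring frames, then average several resamplings of the frame along each pixel’s motion vector. A toy Python/OpenCV version (my own illustration, certainly not how RSMB implements it):

```python
import cv2
import numpy as np

def motion_blur(prev_frame, frame, samples=8, shutter=0.5):
    """Blur `frame` along its optical flow from prev_frame."""
    gray_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray_cur = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        gray_prev, gray_cur, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    h, w = gray_cur.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

    # Average the frame resampled at several points along each pixel's
    # motion vector, centred on the frame, simulating an open shutter.
    acc = np.zeros_like(frame, dtype=np.float32)
    for i in range(samples):
        t = shutter * (i / (samples - 1) - 0.5)
        warped = cv2.remap(frame, xs + flow[..., 0] * t,
                           ys + flow[..., 1] * t, cv2.INTER_LINEAR)
        acc += warped.astype(np.float32)
    return (acc / samples).astype(np.uint8)
```

Here `shutter` plays the role of a shutter angle (0.5 is roughly a 180° shutter); keeping it small is the “non-aggressive” use that keeps the artefacts in check.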

@PhysicsGuy
Glad I found you something to read… :wink:

I do a lot of fast action with cars in video games and have found that post-pro motion-estimation blur techniques often can’t keep up and give terrible artefacts…

With ReelSmart it’s more comprehensive than just masking… ReelSmart offers lots of control to fix shots that go wrong:

* For all hosts: Support for foreground and background separation using a specified matte. ReelSmart Motion Blur then uses proprietary filling and tracking techniques when working on the background layer, even when it is obscured by the foreground!
* For all hosts: Up to 12 tracking points can be specified to help guide RSMB’s motion estimation. By using the tracking points you can explicitly tell RSMB where a pixel moves from one frame to the next in order to guide RSMB’s calculation of motion vectors. You can set the position of each point at each frame by hand, but more importantly, these points can often be positioned from frame to frame using the host application’s point-tracking features.
* For After Effects and Combustion: A plugin is included that allows you to blur with motion vectors supplied by you… which, most likely, will come from your 3D animation system.
* For After Effects and Combustion: When RSMB exhibits tracking problems, you can guide RSMB by simply creating and animating shapes to show RSMB where objects are actually moving. Interactive feature registration is directed through the host program’s drawing and roto tools (splines and polylines), not through a grid of mesh points! As such, there is no new interface to learn.

When these tracking algos go wrong it can be really hard to fix with most automatic methods… allowing foreground and background hinting, and being able to correct areas that go wrong by providing hints with masks, makes a big difference…

As for the depth-of-field stuff I was mentioning, it’s a similar technique to object removal or Photoshop/GIMP’s content-aware fill…