3D pointcloud deshaking a GoPro movie.

Interesting article, where some Microsoft guys deshake a GoPro video by creating a 3D scene out of the video footage. I'd never have thought an instant point-cloud deshaking application would enter the video world.

It seems they plan it for their phones. I wonder, with a little bit of Python… and Blender… :wink:

Here is the link:
http://www.theregister.co.uk/2014/08/11/microsoft_fixes_all_those_shaky_gopro_vids_nobody_wants_to_watch/

The spatialising is killer; I doubt Blender will ever approach this result, although I hold out hope for mesh deform and retargeted motion. Like this https://www.youtube.com/watch?v=3TlCGh5Pc90 but you couldn't really expect to add 3D elements to it, as there is too much distortion. Have you seen the recent patch for Blender that improves upon the basic tools?
https://github.com/Ichthyostega/blender/wiki
and the recent results:

This reminds me of Microsoft's Photosynth, in the way they reconstruct the image (and the artifacts that are generated from it).
With Photosynth you can make 3D panoramas or 3D “walkways” from images:
http://photosynth.net/preview/view/8c30b58b-7949-470e-85fa-5701a6ac1e9e

Seems like they took the same approach/tech and applied it to timelapses, with additional stabilisation.

@3pointedit, wow, I never knew Blender could do that; it keeps amazing me.
Have you made that?

I mostly use VirtualDub + Deshaker; it does a good job, but you can't set what to keep as the stabilized subject.
In some situations I think this approach would have been better.

Looking at the YouTube movie, I wonder whether the method used would always need a fairly large zoom into the frame.
With VirtualDub, Deshaker can use past and future frames to fill in the borders: it first runs an analysis pass, then tries to find an averaged deshake path that minimizes the zoom. …if this could somehow draw on a larger canvas, with some border checking, it would benefit from it.
Most small hand shakes are within 50 frames (25p); it's just a tip.
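To make that averaged-path idea concrete, here is a rough Python sketch (not Deshaker's actual algorithm, just the general technique): smooth the per-frame camera translation measured in the analysis pass with a ~50-frame moving average; the difference between the raw and smoothed paths is the per-frame correction, and its peak size dictates how much zoom is needed to hide the borders.

```python
import numpy as np

def smooth_camera_path(raw_xy, window=50):
    """Moving-average smoothing of a measured camera path.

    raw_xy: (N, 2) array of per-frame x/y translation from an analysis pass.
    window: ~50 frames at 25p covers most small hand shakes.
    """
    pad = window // 2
    # Extend the path at both ends so the average stays well-defined
    padded = np.pad(raw_xy, ((pad, pad), (0, 0)), mode='edge')
    kernel = np.ones(window) / window
    smoothed = np.column_stack(
        [np.convolve(padded[:, k], kernel, mode='same')[pad:-pad] for k in (0, 1)]
    )
    # Per-frame correction to apply to each frame; the smaller its peak,
    # the less zoom is needed to hide the shifting borders.
    correction = smoothed - raw_xy
    return smoothed, correction
```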

@3pointedit Very interesting link, I hope some of those tools make their way into Blender's code eventually. I just want to point out that most of the operations this person is doing (except for the multiple keyframes for rotation) are indeed possible in current versions of Blender, by adding a Transform node after the stabilizer node to reframe, rotate and resize, and by animating those parameters.
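For anyone who wants to script that node setup, here is a minimal sketch in Blender Python (the clip name and the values are placeholders; it assumes the clip is already loaded and has 2D stabilization tracks set up):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Movie Clip -> Stabilize 2D -> Transform -> Composite
clip = bpy.data.movieclips['shaky.mp4']      # placeholder: your loaded clip
clip_node = tree.nodes.new('CompositorNodeMovieClip')
clip_node.clip = clip

stab = tree.nodes.new('CompositorNodeStabilize')
stab.clip = clip                             # uses the clip's 2D stabilization data

xform = tree.nodes.new('CompositorNodeTransform')
xform.inputs['Scale'].default_value = 1.1    # slight zoom to hide moving borders

comp = tree.nodes.new('CompositorNodeComposite')

tree.links.new(clip_node.outputs['Image'], stab.inputs['Image'])
tree.links.new(stab.outputs['Image'], xform.inputs['Image'])
tree.links.new(xform.outputs['Image'], comp.inputs['Image'])

# The Transform inputs (X, Y, Angle, Scale) can be keyframed to animate the reframe:
xform.inputs['Angle'].default_value = 0.0
xform.inputs['Angle'].keyframe_insert('default_value', frame=1)
```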

Sadly, Blender is not temporally aware, so it cannot sample frames other than the current one, unless you force it to by offsetting input nodes.
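For example, one way to do that offsetting: render the shot out as an image sequence, then read it with two Image nodes whose frame offsets differ, so the compositor sees the current and the next frame at once. A sketch with placeholder paths:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

# Placeholder path: the shot rendered out as an image sequence
img = bpy.data.images.load('/tmp/shot/frame_0001.png')
img.source = 'SEQUENCE'

def seq_node(offset):
    """Image node reading the sequence, shifted by `offset` frames."""
    n = tree.nodes.new('CompositorNodeImage')
    n.image = img
    n.frame_duration = 250
    n.frame_offset = offset
    return n

current = seq_node(0)   # the current frame
ahead = seq_node(1)     # one frame into the future

# Naive two-frame average, just to show both frames reach the compositor
mix = tree.nodes.new('CompositorNodeMixRGB')
mix.inputs['Fac'].default_value = 0.5

comp = tree.nodes.new('CompositorNodeComposite')
tree.links.new(current.outputs['Image'], mix.inputs[1])
tree.links.new(ahead.outputs['Image'], mix.inputs[2])
tree.links.new(mix.outputs['Image'], comp.inputs['Image'])
```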

Is it possible with Blender Python to directly access image result pixels…?
(I guess it would be slow, as such things are usually done in C/C++.)
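Yes, via Image.pixels. A minimal sketch (it assumes a Viewer node in the compositor, which exposes the result as the 'Viewer Node' image; and as guessed, copying the whole buffer through Python is slow):

```python
import bpy

# Assumes a Viewer node in the compositor has been evaluated at least once,
# which exposes the result as the 'Viewer Node' image datablock.
img = bpy.data.images['Viewer Node']
width, height = img.size

pixels = list(img.pixels)          # flat RGBA floats, copied into Python (slow)

# Read the red channel of the pixel at (x, y), origin at bottom-left:
x, y = 10, 20
i = (y * width + x) * 4
red = pixels[i]

# Writing back works too, but is equally slow in pure Python:
pixels[i] = 1.0
img.pixels = pixels
```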

At first glance, I took that sentence to mean that Blender is, at this moment, not self-aware. I think I have watched too many Terminator movies…

Works for that too; that is a series of movies about temporal relationships :slight_smile: