Motion Retouching by Tracking Image Layers?

Hi all,

I have been working with Blender for many years, yet for some reason I have never really been interested in the tracking features. Only recently did it occur to me that this could actually be pretty useful for me. So, I’d like to ask the opinion of artists who have some experience with the tracking features…

This may (or may not) seem like an obvious thing to many of you. Personally, I have always thought of motion tracking as a technique that allows me to insert objects into, or replace objects in, a movie scene, so I really don’t know if what I’m thinking of is possible at all.

To make a long story short: Do you think it would be possible to use Blender’s tracking features for motion retouching? That includes, e.g., dodging and burning as well as cloning and healing (as known from Photoshop) in moving images. My first thought would be to perform the retouching process on a still image as usual (in Photoshop), then export the final edit as a layered image. In Blender, I would then like to use the tracking process to map the retouched layer onto the original scene, so that all corrections are applied to the whole clip.

Is this possible at all? If so, can anyone direct me to good resources that would help me get started?

Thanks in advance.
Marc

That sounds possible, but it would probably take a fair bit of research and development.

If you built a 3D model and matched it to the video clip, you could paint a mask onto that model. Using the rendered animation of the painted mask, you then have a moving mask that matches the video’s movement.
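If you want to script the camera hookup for that, a minimal bpy sketch might look like this (untested; it assumes the clip is already tracked and solved, and “shot.mov” is just a placeholder name):

```python
import bpy

# Minimal sketch, assuming the clip is already tracked and solved:
# constrain the scene camera to the solve so that a mask model placed
# in 3D sticks to the footage.
scene = bpy.context.scene
clip = bpy.data.movieclips["shot.mov"]  # placeholder clip name

solver = scene.camera.constraints.new(type='CAMERA_SOLVER')
solver.use_active_clip = False  # use the clip assigned below, not the active one
solver.clip = clip

# Rendering just the painted mask model through this camera then gives
# the moving mask that follows the plate.
```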

I’d be interested to see how a healing pass would work over an animated sequence; I feel like there would be too much temporal noise in the dynamic patching, but I bet it would be a cool effect!

That’s also what I was thinking: the temporal noise could really be a problem, as could slight tracking mismatches, etc. However, people are already doing this. See, e.g.: https://vimeo.com/88155643

Not sure what software they were using. I was just thinking that it might also be possible with Blender somehow…

That’s basically what a lot of VFX is about: wire removal, matte painting, sky replacement, retouching of little things in a scene… Tracking is not always used to add 3D elements to a scene; sometimes it’s needed just to replace a phone screen, or to remove a person in the background…
The process wouldn’t be exactly like making the retouches in Photoshop and then using them in the movie clip, since you have to take motion, deformation, and perspective into account, but it can be done.

I don’t know of any Blender-specific tutorial for this kind of use, but Videocopilot.net has a lot of examples you can follow and apply to Blender, or you can search YouTube and Vimeo for Mocha tutorials.

Here are some examples:

As I said, the process may not be the same for Blender, but it can be done with the current tools it offers.

Edit: here’s a breakdown of a video retouch made with Blender:

Thank you, @julperado, for the explanation. That second video is very interesting indeed, and it seems to me that this should basically work the way I was expecting it to, i.e., track a part of the skin, create a clean version of one of the frames in Photoshop, and map it back.

However, that video also shows that lighting can sometimes be a problem: in Mocha Pro they fake the lighting change (while she turns her head) by performing some kind of linear interpolation while tracking. I don’t think this is possible in Blender, or is it?

Well, I’m speaking from my understanding, since I’ve never done this myself, but I believe it would be easier in Blender because it is a 3D app, while Mocha and After Effects work only in 2D, so you have to “cheat” with the lighting and the movement… But I believe that by using a 3D shape with simple lighting that matches the lighting of the original scene, you can get away with it.

That’s basically what I did with this marker removal technique. It works best on surfaces like asphalt or grass.

Basically, the video is split into two noodles. One goes directly to an Alpha Over node. The other goes through a Translate node before going to the Alpha Over node. By translating the video to the side a bit, there’s now some “clean” asphalt on top of where the marker is.

For the mask, I created a new scene and placed a small sphere at each of the marker bundles in the 3D View. I used the Alpha output from that scene’s Render Layers node (the black image with white dots) as a mask, which got plugged into the Fac socket of the Alpha Over node. This mask samples a small circle of asphalt from the “clean” (translated) image and places it on top of the marker in the untranslated image. I blurred the mask a little to help hide the edges of the little circles.
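If anyone wants to rebuild this tree from a script, here is a rough bpy sketch of the setup (untested; the clip name, mask scene name, and translate offsets are just illustrative):

```python
import bpy

# Rough sketch of the node tree described above: the clip is split into an
# untranslated plate and a shifted "clean" plate, and a blurred mask drives
# the Alpha Over mix between them.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

clip_node = tree.nodes.new('CompositorNodeMovieClip')
clip_node.clip = bpy.data.movieclips["shot.mov"]  # placeholder clip name

translate = tree.nodes.new('CompositorNodeTranslate')
translate.inputs['X'].default_value = 40.0  # shift far enough to reach clean asphalt
translate.inputs['Y'].default_value = 0.0

# Render Layers from the scene with the little spheres at the bundles;
# its Alpha output is the black image with white dots.
mask_layer = tree.nodes.new('CompositorNodeRLayers')
mask_layer.scene = bpy.data.scenes["MaskScene"]  # placeholder scene name

blur = tree.nodes.new('CompositorNodeBlur')
blur.size_x = blur.size_y = 4  # soften the edges of the circles

alpha_over = tree.nodes.new('CompositorNodeAlphaOver')
composite = tree.nodes.new('CompositorNodeComposite')

links = tree.links
links.new(clip_node.outputs['Image'], alpha_over.inputs[1])       # untranslated plate
links.new(clip_node.outputs['Image'], translate.inputs['Image'])
links.new(translate.outputs['Image'], alpha_over.inputs[2])       # shifted "clean" plate
links.new(mask_layer.outputs['Alpha'], blur.inputs['Image'])
links.new(blur.outputs['Image'], alpha_over.inputs['Fac'])        # mask drives the mix
links.new(alpha_over.outputs['Image'], composite.inputs['Image'])
```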

Steve S

This is a widely used process for doing different paint fixes, adding set extensions, etc. Basically, you can divide it into two categories: the 2D and the 3D approach.

With 2D, you use tracking information to matchmove patches, clone offsets, etc. I mostly use it to do live offset cloning, so that clone patches are also “live” with grain. When you use a static patch, you must manage the grain yourself.
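In Blender terms, a rough equivalent of that matchmoved patch might be keyframing a compositor Translate node from a single 2D tracker. A sketch (untested; “shot.mov”, “Track”, and “PatchTranslate” are placeholder names):

```python
import bpy

# Keyframe a Translate node from a tracker so a static clone patch
# follows the footage.
scene = bpy.context.scene
clip = bpy.data.movieclips["shot.mov"]
track = clip.tracking.tracks["Track"]

scene.use_nodes = True
translate = scene.node_tree.nodes["PatchTranslate"]  # node moving the patch image

width, height = clip.size
ref_x, ref_y = track.markers.find_frame(scene.frame_start).co  # reference position

for frame in range(scene.frame_start, scene.frame_end + 1):
    marker = track.markers.find_frame(frame)
    if marker is None:
        continue
    # Marker coordinates are normalized (0..1); offset the patch by the
    # tracker's movement relative to the reference frame, in pixels.
    translate.inputs['X'].default_value = (marker.co[0] - ref_x) * width
    translate.inputs['Y'].default_value = (marker.co[1] - ref_y) * height
    translate.inputs['X'].keyframe_insert('default_value', frame=frame)
    translate.inputs['Y'].keyframe_insert('default_value', frame=frame)
```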

The 3D approach uses projections onto proxy geometry. You then do your painting work in UV space (which basically locks the movement when the camera solve and geometry are accurate) and finally re-render your proxy objects with the texture fixes, so your final patches move exactly as they should, with perspective changes etc.
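In Blender, I’d guess that projection step maps to something like the UV Project modifier; a rough, untested sketch with placeholder object and camera names:

```python
import bpy

# Project the plate from the solved camera onto proxy geometry, so a fix
# painted in UV space stays locked to the surface.
proxy = bpy.data.objects["ProxyGeo"]       # geometry matched to the scene
camera = bpy.data.objects["SolvedCamera"]  # camera from the motion solve

mod = proxy.modifiers.new(name="PlateProjection", type='UV_PROJECT')
mod.uv_layer = "UVMap"
mod.projectors[0].object = camera
mod.aspect_x = 16.0  # match the plate's aspect ratio
mod.aspect_y = 9.0

# Applying the modifier on a reference frame freezes the projected UVs;
# the painted fix then re-renders correctly through the moving camera.
```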

How easy is it to do in Blender? Hard to say, as I personally use other software that is more suitable for this kind of work.

This is very useful for scene relighting (tracking in shadows as required) or dodging in a burnt-out window. I have also removed talent, light fixtures, and boom mics this way.