Depth in Camera Tracking

I've been using Blender for a few months now and am getting into camera tracking, which is my main interest. Tracking indoors, or anything close to the camera, is not so difficult, but when tracking a street or a football field from a static position (no walking, no sideways or forward motion), tracking becomes a problem. One marker is close to the camera (about 1 meter away) and another is far away (for example, 1000 meters, give or take), both at ground level. After solving, the far-away constraint ends up right next to the one closest to the camera. Horizontal tracks are no problem; it's only depth (far away) that gives me trouble, and I need a tutorial that covers these cases.

This is my problem:

1.) Highlighting markers: one close to the camera, and the other far away.


2.) After solving and then applying constraints, I get this. Lots of constraints that should be far away end up beside each other, or even on top of each other. I have trouble giving depth to these.


If the camera is not moving at all, or very little (i.e. you're not walking with it but just panning and tilting), you're not going to get a good track with the normal methods. When your camera is stationary you need to use the tripod solver. The issue with the tripod solver is that it projects all the trackers onto a semicircle around the camera (like the image you show), so distance becomes a "guesstimation". It still works, but it requires a bit more user interaction to get it right. It helps a lot if you happen to know the actual dimensions of the place, the camera lens, the sensor size and all that. But for stationary (or almost fixed) shots the best approach is to do a 2D camera stabilization and then use BLAM to solve the camera.
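If you prefer to do it from Python instead of clicking through the UI, here's a minimal sketch using Blender's bpy API. The clip name "shot.mov" is just a placeholder, and this assumes the footage is already loaded and has 2D tracks:

```python
import bpy

# Placeholder name -- use whatever your footage is called in the Movie Clip Editor.
clip = bpy.data.movieclips["shot.mov"]

# Enable the tripod solver so only the camera rotation is reconstructed.
clip.tracking.settings.use_tripod_solver = True

# Solve the camera motion (same as pressing "Solve Camera Motion" in the UI;
# this operator needs a Movie Clip Editor context, e.g. its Python console).
bpy.ops.clip.solve_camera()
```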

I do have tripod selected already. Kewl, thanks cegaton. Will look at BLAM (wherever that is). :slight_smile:

Just checked the wiki: (http://wiki.blender.org/index.php/Dev:Ref/Release_Notes/2.64/Motion_Tracker)
Tripod Motion: It can be used for footage where the camera does not move and only rotates. Such footage can't be tracked with the generic solver approach, and it's impossible to determine the actual feature points in space due to a lack of information. So this solver will solve only the relative camera rotation and then reproject the feature points onto a sphere, with the same distance between feature and camera for all feature points.
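To make that "reproject onto a sphere" idea concrete, here's a rough illustration (not Blender's actual code, just a sketch of the concept): every feature keeps only its viewing direction from the camera and is placed at the same fixed distance, so all depth information is lost.

```python
import math

def reproject_to_sphere(directions, radius=10.0):
    """Sketch of the tripod-solver idea: normalize each viewing direction
    and place the point at a single fixed distance from the camera."""
    points = []
    for dx, dy, dz in directions:
        length = math.sqrt(dx * dx + dy * dy + dz * dz)
        points.append((radius * dx / length,
                       radius * dy / length,
                       radius * dz / length))
    return points

# One feature that was really 1 m away and one that was 1000 m away:
# after reprojection both sit 10 units from the camera, which is why near
# and far bundles pile up next to each other after a tripod solve.
print(reproject_to_sphere([(0.0, 1.0, -0.1), (0.0, 1000.0, -0.1)]))
```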

Also, have you applied the scale of the scene? You need an accurate reference for the distance between two objects to match your scene to the clip. In the clip you show, you only really have the goal posts available to determine the scene size, so what are they? About 3 meters wide? Under the Solve tab, select the goal post markers, then under Orientation set the Distance value to 3.0 (or whatever the distance between the posts is) and press the Set Scale button. Don't forget to set the X/Y direction and the origin point for your scene too. You may need more markers.
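For reference, the same orientation steps can be scripted. This is only a sketch: the 3.0 m distance is the goal-post example from above, and these clip operators need to run in a Movie Clip Editor context after the camera has been solved, with the relevant tracks selected for each step.

```python
import bpy

# With the track you want as the scene origin selected:
bpy.ops.clip.set_origin()

# With one track selected along the direction you want as X (or Y):
bpy.ops.clip.set_axis(axis='X')

# With the two goal-post tracks selected: set the real-world distance
# between them (3.0 m in this example) to fix the scene scale.
bpy.ops.clip.set_scale(distance=3.0)
```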

Yes, I applied the scale. I've decided to only do camera tracking on near scenes. Anything like a field (football field, street, lake) while standing in one spot and panning left and right will not work for me in Blender. At least I know how to track and understand the features now. That BLAM plug-in is awesome.

I have done many shots like this successfully… all you need to do is move the camera a couple of feet at some point during the shot and have your keyframes bracket that movement. For instance, in this shot, if you were to squat down and start shooting while slowly standing up at the beginning (keeping as many of the 'markers' in frame as possible), that will give you enough perspective change to get a good solve for the whole shot. If I don't want the movement in my final shot, I do it at the beginning like this or at the very end, and I just don't render out that part. Of course, it helps if you already have accurate camera data…