Camera Tracking: My 3D object slides around

Hello everyone,

I completed a camera track, but when I scrub my video, my 3D object slides around.

I did a solve and got a solve error of about 0.5, set the floor, origin, and scale, and put the solver constraint on the 3D camera. I’m using Blender 2.67.

I saw a post on this subject which said to remove all trackers with a high error. I did this, but still the same thing. Most errors were between 0.1 and 0.9; I removed three trackers that were above 1.0. By the way, I had to do this manually because the “clean tracks” button did not work.
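In case anyone else’s Clean Tracks button misbehaves, here is a minimal sketch of doing the same cleanup from Blender’s Python console. It assumes the tracked clip is the first one in the file, and that a solve has already run (average_error is only meaningful after a solve):

    import bpy

    clip = bpy.data.movieclips[0]      # the tracked clip
    threshold = 1.0                    # reprojection error cutoff, in pixels

    # Select every track whose average solve error exceeds the cutoff
    for track in clip.tracking.tracks:
        track.select = track.average_error > threshold

    # With the bad tracks selected, delete them in the Movie Clip Editor
    # (Track > Delete Track, or X), which avoids fiddling with operator
    # context from the console.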

The camera pans from one side of the deck to the other. The 3D object is my friend’s new deck that will replace the one you see. However, the new 3D deck looks like it is in a race with the camera to see who can reach the other side first!

To get an accurate solve you must have some lateral movement, that is, some parallax. Unfortunately, a pan does not really give Blender that sort of information to figure out your scene.

Try solving as a tripod instead.
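If it helps, that checkbox can also be flipped from the Python console; a tiny sketch, assuming the clip is the first one in the file:

    import bpy

    # Same as ticking “Tripod Motion” in the Solve panel:
    # a rotation-only solve that doesn’t need parallax
    bpy.data.movieclips[0].tracking.settings.use_tripod_solver = True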

It may be just the scrub. I had the same problem myself; then, when I rendered in the compositor, I got a solid track. Blender doesn’t work as well with video as it does with image strips. If not, it’s probably better to use the tripod feature.

You could also add the movie clip as a strip in the VSE then add the 3D scene as a strip and mix 50% to get a quick preview in sync.
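If you want to script that preview instead of clicking it together, here’s a rough sketch. It assumes your tracking work lives in a scene named “Scene”, that you build the mix in a separate editing scene, and that the footage path is a placeholder for whatever is in your file:

    import bpy

    edit = bpy.context.scene                 # the editing scene
    edit.sequence_editor_create()
    seq = edit.sequence_editor.sequences

    # Channel 1: the tracked footage (placeholder path)
    seq.new_movie("footage", "//footage.mov", channel=1, frame_start=1)

    # Channel 2: the tracking scene as a live scene strip, mixed at 50%
    # so the footage stays visible underneath for the sync check
    cg = seq.new_scene("cg", bpy.data.scenes["Scene"], channel=2, frame_start=1)
    cg.blend_type = 'ALPHA_OVER'
    cg.blend_alpha = 0.5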

Hello 3pointEdit,

I thought that a pan would work because there is a camera tracking tutorial on youtube by Oliver Villar that uses the same camera motion. Here’s a link -> https://www.youtube.com/watch?v=5HvLTJ4Zgo0.

When you say “lateral changes”, do you mean that I have to move towards (or away from) the deck?

I tried the tripod solve by checking the “tripod motion” checkbox, but my solve error went up to 35.

Koumis, 3pointEdit,

Actually my “video” is an image sequence.

I tried going into the VSE, adding the image sequence, the 3D scene, and an “alpha over” strip. However, same result.

I’m not sure what you mean by “rendering in the compositor”. When I did the solve, I clicked on the “Setup Tracking Scene” button. This automatically put in some “mix” nodes, some “render layer” nodes, and an “alpha over” node in the node editor. So I am already using the node editor for the render.

What I mean is that once it was rendered out, it worked.

The following video is what I mean. The dinosaur at the fence kept slipping. Ignore the feet slippage; that’s the walk cycle. But once I did the final render, it worked. When the camera panned to introduce the second dinosaur, that was the tripod feature. The solve was not great, but it still worked once rendered out.

Not sure why yours is having so many problems; with tripod motion the track should be a breeze!

Or side to side, which would probably be better. You can always edit that part of the video out after you’ve done your track.
You need parallax to determine the distance between things. The reason your brain has a sense of depth perception is that each eye is viewing the world from a different position. The camera, however, has only one eye, so you need some lateral movement to get parallax. If all you do is rotate, the camera doesn’t know how far away something is. It’s basically seeing the world as a flat 2D image projected onto a sphere that surrounds the camera.
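To put rough numbers on that, here’s a toy pinhole sketch (the positions and units are made up) comparing a 1-unit lateral move against a pure rotation:

    import math

    def bearing(px, pz):
        # Angle of a point as seen from a camera at the origin
        return math.degrees(math.atan2(px, pz))

    near = (1.0, 5.0)     # a point close to the camera
    far  = (1.0, 50.0)    # a point ten times further away

    # Lateral (trucking) move of 1 unit: the near point's bearing shifts
    # about ten times more than the far point's. That difference is
    # parallax, and it's what lets the solver recover depth.
    for x, z in (near, far):
        print(bearing(x, z) - bearing(x - 1.0, z))   # ~11.3 vs ~1.1 degrees

    # A pure 5-degree rotation, by contrast, shifts EVERY bearing by
    # exactly 5 degrees no matter the depth: zero parallax.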

Steve S

Steve S.,

… so with the lateral (side-to-side) movement, the distant trackers move at a slower speed than the trackers close to the camera. So Blender can calculate “the distance between things” from the difference in the speeds of the distant and close-by trackers. That difference is what is known as parallax.

Did I get this right?

Great video Koumis,

You did a great job fixing the slippage.

Regarding the “tripod feature”: my understanding is that you check the “tripod motion” checkbox when you are only rotating the camera on a tripod, or when the camera is on some kind of sled on a track, the kind professional studios use to eliminate camera shake.

Yeah, that’s basically it. If one object moves more than another, it must be closer to the camera. And by providing the Focal Length of the lens, it can determine the relative distance between everything. That’s why accurate camera data is so important.

Put your camcorder on a tripod and pivot it around. You’ll notice that everything moves in unison. So the tracker thinks that everything is the same distance.

Steve S

My camera is doing a lateral (side-to-side / panning) movement, with distant trackers moving slower than the close-by trackers. But for some reason, Blender can’t calculate the parallax correctly. I’m guessing I put in the right camera data (focal length, image sensor values) because of the 0.5 solve error.

I’m plain stumped.

Just for clarity, that is a trucking or dolly shot, not a panning shot. Panning is a pivot around a nodal point, e.g. tripod rotation.

Otherwise, perhaps you have the lens length wrong? You can force Blender to check that when performing the solve too.
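For reference, “forcing Blender to check” the lens is the Refine dropdown in the Solve panel; a sketch of the console equivalent, assuming the 2.6x-era API and that the clip is the first one in the file:

    import bpy

    settings = bpy.data.movieclips[0].tracking.settings
    # Re-estimate focal length plus K1/K2 radial distortion during the solve
    settings.refine_intrinsics = 'FOCAL_LENGTH_RADIAL_DISTORTION'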

Well, I don’t know why my tripod solve worked so well; after reading this, it shouldn’t have worked, as it was handheld and a little shaky?

3pointEdit,

This is from the specs for the camera I used:
“Image pickup device: 1/2.35″ CCD (primary color filter)”
“Lens: Olympus lens 6.3 to 31.5mm, f3.5 to f5.6. Equivalent to 36 to 180mm on a 35mm camera.”

I put in 35mm for my “image sensor size” and 36mm for my “focal length” (I did not zoom in at all). I had Blender guess a better focal length; it came up with 40.7mm. Blender also guessed a K1 and K2 value. With these new values, I was able to get a solve error of 0.57.
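As a quick sanity check on those numbers (just arithmetic from the spec sheet’s 35mm-equivalence, so treat it as a ballpark):

    # Actual focal length x crop factor = 35mm-equivalent focal length,
    # and a 35mm film frame is 36mm wide.
    actual_focal = 6.3                    # mm, wide end from the spec
    equiv_focal  = 36.0                   # mm, 35mm-equivalent from the spec

    crop_factor  = equiv_focal / actual_focal   # ~5.7
    sensor_width = 36.0 / crop_factor           # ~6.3mm, in the right range
                                                # for a 1/2.35" CCD (~6.2mm)

    print(crop_factor, sensor_width)

Entering the equivalent pair (a 36mm lens on a ~35mm-wide sensor) keeps the field of view roughly consistent with the real 6.3mm lens on the small CCD, which is presumably why the solve still converges.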

… a trucking or dolly shot ? I had a feeling that I was getting my terminology wrong. I hope I didn’t confuse everybody.

Good work on using the K1 and K2 to solve it better! Amazing how different Blender thought the lens was. The other thing to consider is the evaluation frame range: does it have the most parallax change possible? That is, markers both close to and far away from the camera.

The evaluation frame range was a pain in the backside. There was no particular range of frames that had the greatest parallax change. I was trucking/dollying the camera at a constant speed, so I just chose a random frame range.

The part of the shot with the most depth seems to be the right-hand frame, with the walkway coming towards the camera. Unless that’s the end of the shot, perhaps you could use close markers there for the eval range?

3pointEdit,

I changed “keyframe A” and “keyframe B” to be the interval of frames that show the walkway. Blender kept telling me that “at least 8 markers between both frames are needed for reconstruction”. But I have more than 8 trackers tracking! I’m guessing that some of the trackers are not being counted. How can I find any trackers that are not being picked up by the solve? The system console shows nothing, and my “clean tracks” button doesn’t seem to work.
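One way to see which trackers the solver can actually use is to count, from the Python console, the tracks that have an enabled marker on both keyframes; a sketch, assuming the clip is the first one in the file (swap in your own keyframe numbers):

    import bpy

    clip = bpy.data.movieclips[0]
    frame_a, frame_b = 50, 90          # placeholder keyframe A / B

    usable = []
    for track in clip.tracking.tracks:
        ma = track.markers.find_frame(frame_a)
        mb = track.markers.find_frame(frame_b)
        # A track only counts if it has a live (non-muted) marker on
        # both evaluation keyframes
        if ma and mb and not ma.mute and not mb.mute:
            usable.append(track.name)

    print(len(usable), "tracks cover both keyframes:", usable)

If that prints fewer than 8 names, the trackers missing from the list are the ones dropping out of the reconstruction.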

I did keep trying different frame intervals (around the walkway shot you mentioned). I got Blender to accept one interval, but the solve error jumped up to 2.

However,