How far can I push Blender's camera tracker?

I watched two long tutorials on Blender’s camera tracker, but I still can’t get a decent track. Perhaps I’m expecting too much from it, but it’s not such a difficult scene; it’s only a bit harder than the typical sideways-walking parallax shot.

Basically, I walk around a corner of my house towards the front yard; at that point I’m shooting video mostly forward but angled a bit to the left. Then I turn the camera around to face where I came from, and after that I slowly turn it back in the opposite direction.

I tried all kinds of approaches: just a few tracks, lots of tracks generated with the Detect Features function, big markers with a really large search area, and so on. The footage was exported from After Effects as a TIFF sequence.

Is there any advantage to using one of the motion models other than Location? Perspective, maybe? I’ve been trying since yesterday with another similar take, and I shot it again today with a less complicated motion to see if it would track better, but no luck.

Any suggestions on things I can try, or good tutorials? The ones I watched were Andrew Price’s hole-in-the-asphalt one (very good, but it covers a simpler motion) and one from Blendertuts where the camera travels sideways across a basketball court.

Thanks

I’m pretty sure you are mixing several types of solves in one shot. It might be better to cut your shot into forward and panning sequences and solve them separately; I don’t think the tracker can change the solve type in the middle of a sequence. This is not a simple shot, it’s very complex even if it doesn’t appear so at first. :slight_smile: You might have to change how you shoot it to get better results.

What you describe does indeed sound like it might be a bit difficult.
It sounds like a lot of the features you track will leave the frame because of the turning and the corner. That is always a problem.

Anyway, the different motion models only apply to the individual markers. They only matter when the 2D track itself is difficult; for the 3D solve they make no difference at all.
The most important thing for the solve is finding the right keyframes. You can try the “auto keyframe” button to let Blender figure out the best combination.
The other thing, of course, is getting the camera settings right. But even if you do not know your camera data at all, you can get a decent solve by enabling “Refine” and choosing Focal Length and K1/K2 from the menu.
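If you prefer to set those two options from a script, here is a minimal sketch via the Python API, assuming a 2.6x/2.7x-era build (later versions split the refine options into separate booleans) and a hypothetical clip name:

```python
import bpy

# Hypothetical clip name; use whatever your loaded footage is called.
clip = bpy.data.movieclips["AA2024"]
settings = clip.tracking.settings

settings.use_keyframe_selection = True  # the "auto keyframe" toggle
settings.refine_intrinsics = 'FOCAL_LENGTH_RADIAL_K1_K2'  # the Refine menu entry
```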

Hint: A good way to learn tracking is still the TrackMatchBlend DVD :wink:
Get it on the Blender Cloud!
https://cloud.blender.org/training/track-match-blend/

Could you post the video you’re trying to track? Might make it easier to figure out what might be going wrong.

+1 for TrackMatchBlend on the Blender Cloud. That Sebastian guy knows his stuff and rumour has it that there’s extended content in development that covers newer features.

Moved from “General Forums > Blender and CG Discussions” to “Support > Compositing and Post Processing”

Thanks, I might try to shoot it again today. However, if I segment my shot like you say and solve the pieces separately, then I wouldn’t be able to match the three solved cameras into one, or would I?

Well, the corner itself doesn’t seem to be a problem, but when I turn the camera around everything goes haywire. The camera tracker in After Effects does great up to that point, then goes crazy after the first turn. That seems to be the problem, so maybe I’ll shoot it in a different way. It’s not a big deal, I’m just doing this to learn, but I was trying to make it look as realistic as possible.

As for the camera settings, that is the tricky part. If I shoot with my Canon 60D I know the focal length exactly, but DSLRs have a soft look I’m not too crazy about, especially when shooting grass. I love the DSLR look in other respects, like colors and depth of field, but that softness doesn’t seem like it would track well. So I’m using my Canon XF100 with the shutter at 1/2000 to get as much detail as possible in each frame, even though I shoot at 23.98 fps. I could probably get better tracking shooting at 60 fps, but I like the look of 24 fps.

The problem is that the XF100 doesn’t include the focal length in the EXIF data. I checked the manual and it says the focal length goes from 4.25 to 42.5mm, but I think the 4.25 must be for macro mode, because with the zoom all the way back it doesn’t look that wide to me. It’s about the same focal length as when I shoot with the 18-200mm variable lens on my DSLR, so it can’t be 4.25; that would look like a fisheye. Below that, the manual says “35mm equivalent: 30.4-304mm”, but it doesn’t look like 30.4mm to me either with the zoom all the way back. Perhaps someone here can enlighten me on that.

Then, trying to find the sensor size, I’m a bit confused, because the specs say “1/3-inch CMOS, approx. 2,070,000 pixels”. Is that supposed to be 0.33 inches, and therefore 8.382mm? Since I don’t see any entry in the Blender camera presets, I don’t know for sure if that’s the case.

Anyway, thanks to you all for your replies. Oh, someone here asked me to post the video, so here it is, raw from the camera: http://youtu.be/saz-Wl0kYvs

Oh, one thing that nobody has answered yet: is there any sense in trying the other motion models, such as LocScale, Affine, or Perspective? And according to Andrew Price, the “previous frame” pattern match gives better results than “keyframe”; do you guys agree with that?

No, not really, but I will defer to Sebastian as he wrote the camera solver in Blender. To understand why, you need to delve in and learn how the solver does its job. One book that I really like is “Matchmoving: The Invisible Art of Camera Tracking” by Tim Dobbert. It explains the hows and whys of matchmoving and how to get a good match. Although the examples in the book use software other than Blender, you can still get a good idea of how it works.

The key is that for the solver to work you need some forward (translational) movement; this is the normal kind of solve. If you stand in place and spin (pan), you can’t recover 3D coordinates, but you can still track points; that is a different kind of solve. You can use both to place 3D geometry into the clip, but pan-type movements are more difficult to deal with, unless Sebastian has programmed in some new magic. :slight_smile: The other issue with pan-type camera moves is that if they are fast you get motion blur, which makes the solve even more difficult.
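For reference, Blender exposes that pan-only case as a separate tripod solver. A minimal sketch of switching it on, assuming a build that has the option (it appeared around 2.67) and taking the first loaded clip:

```python
import bpy

# Rotation-only solve for stand-and-pan footage; this mode does not
# recover 3D point positions with real depth.
clip = bpy.data.movieclips[0]
clip.tracking.settings.use_tripod_solver = True
```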

In the movie studios, if a clip is too difficult for the solver, the matchmovers have to rotoscope the clip to merge in the 3D elements. This is very tedious and time-consuming. That is why I say you might want to re-plan your shoot and make it more tracking-friendly.

Just to make a very long explanation very short: “35mm equivalent” means it gives the same FOV (field of view) as a 35mm camera would have with a 30.4mm lens. So just set the sensor width to 36mm and the focal length to 30.4mm.
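A quick sanity check of that equivalence in plain Python, using the usual horizontal FOV formula; the 5.03mm sensor width below is back-derived from the manual’s numbers, an assumption rather than a measured spec:

```python
import math

def hfov_deg(sensor_width_mm, focal_mm):
    # Horizontal field of view: 2 * atan(sensor_width / (2 * focal_length))
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

print(hfov_deg(36.0, 30.4))  # ~61.3 deg: 30.4mm lens on a 36mm-wide frame
print(hfov_deg(5.03, 4.25))  # ~61.3 deg: the real 4.25mm wide end, if the
                             # active sensor width is 36 * 4.25 / 30.4 mm
```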

I would try to track it for you, but YouTube video is very poor for tracking because it has been compressed too heavily.

If you have translation before and/or after the tripod motion, won’t it be treated correctly?

The problem is that the XF100 doesn’t include the focal length in the EXIF data.

My camcorder doesn’t record EXIF for video, but it does record it for still images. So I take a photo at the same focal length I’m shooting video at. See if yours does the same.
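If yours does record EXIF in stills, reading the focal length back takes only a few lines. A sketch with Pillow (my assumption for the library; the file name is hypothetical, and the value may come back as a rational such as 425/100 for 4.25mm):

```python
from PIL import Image, ExifTags

# Map numeric EXIF tag IDs to their names, then look up FocalLength.
exif = Image.open("still_at_video_zoom.jpg")._getexif() or {}
tags = {ExifTags.TAGS.get(k): v for k, v in exif.items()}
print("FocalLength:", tags.get("FocalLength"))
```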

Steve S

That is a Sebastian question; I don’t know offhand. All of the tracking I have done with Blender has just been simple linear stuff, and most of my tracking was done with Pfhoe. That was before Blender got tracking, and now that Pfhoe is no more, I will use Blender from now on.

I think you’re spending more time and energy trying to pin down your camera’s specs yourself than you would spend doing a simple matchmove of some footage and letting Blender’s refinement figure them out for you. (This is more of a side note, since I don’t think the camera data is your main problem, but it might still be contributing to it.)

I have refined multiple solves from two different cameras with completely different specs, and Blender has always figured out the focal length, optical center, and K1+K2 lens distortion pretty accurately. The way you can tell it’s doing a good job is to refine several different pieces of footage from the same camera and see that it comes up with the same data.

If you’re having trouble getting your camera’s data right, try a shorter, simpler matchmove shot: track around some easily trackable markers that come fairly close to the edges of the frame, solve that (NOT a tripod solve), then refine it. Then compare what Blender comes up with to the specs you already know, and if they’re close, you know what to use.
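A small sketch of that comparison step: print what Blender settled on for each loaded clip, so you can check that footage from the same camera refines to the same numbers (property names as in 2.6x/2.7x-era builds):

```python
import bpy

for clip in bpy.data.movieclips:
    cam = clip.tracking.camera           # refined intrinsics live here
    rec = clip.tracking.reconstruction   # solve quality lives here
    print(clip.name,
          "focal:", round(cam.focal_length, 2), "mm,",
          "k1/k2:", round(cam.k1, 4), round(cam.k2, 4),
          "avg error:", round(rec.average_error, 3))
```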

I did this from the start because both of my cameras are old, low-end consumer cameras, so I didn’t have much hope of finding a comprehensive spec list anywhere, the way higher-end professional cameras have.

Is the original problem of joining two solves, one from each side of the corner, still an issue? I wondered if you could attach an empty to each solved camera, then constrain a third camera to each of them, using the constraints’ influence to blend between them?
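Something like this is what I had in mind, as a sketch with hypothetical object names: two Copy Transforms constraints on a third camera, animating the second one’s influence to fade from one solve to the other.

```python
import bpy

# The third camera follows solve A by default, then fades over to
# solve B as the second constraint's influence ramps from 0 to 1.
blend_cam = bpy.data.objects["BlendCam"]
for name in ("SolvedCam_A", "SolvedCam_B"):
    con = blend_cam.constraints.new('COPY_TRANSFORMS')
    con.target = bpy.data.objects[name]

con_b = blend_cam.constraints[-1]              # targets solve B
con_b.influence = 0.0
con_b.keyframe_insert("influence", frame=100)  # pure solve A here
con_b.influence = 1.0
con_b.keyframe_insert("influence", frame=120)  # fully on solve B here
```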

Thanks for all your replies. I hadn’t received any email notifications that there were more replies, so I hadn’t seen them all; I will read them tomorrow. But one thing I came here to ask is this: what could be causing a solve to keep the camera at the origin for the duration of the clip?

I set all the tracks, plenty of them, tracked everything as usual, and pressed Solve. It gave me a few errors, but just that some tracks couldn’t be solved, nothing too big. But then the camera doesn’t move, and it is constrained to the solver. I did this before, and even when it didn’t give me the perfect track, at least the camera moved. Now it stays at the origin.

Good thought, but my camera, a Canon XF100, doesn’t have stills capability. Perhaps I’ll try this with my Panasonic AG40, which does.

Perhaps that would be an advantage of shooting this with my DSLR, since I would know it’s 18mm. But then it gets messy too, because it’s one of those EF-S lenses and there’s the sensor crop factor, so I don’t know whether entering 18mm in Blender would really match the 18mm I shot at.

Here’s the weirdest thing about this shot: the only camera tracker that has given me a correct solve so far is the one bundled with After Effects CS6. It gave me a good solve, except that it’s a basic tracker and you can’t set the origin. Then I downloaded the trials of The Foundry’s Camera Tracker and Imagineer’s Mocha Pro.

Camera Tracker lets me set the origin, a ground plane, and all three axes. But the solve was terrible: the composited object kept moving constantly, even though it stayed more or less in the same area, as if it were shaking a bit.

Mocha Pro was very disappointing, especially since they had a sale until yesterday where I could have bought it for $400. I tried it many different ways (I was already familiar with it from the AE-bundled version), but it gave me the worst solve of all, with the camera jumping like crazy. And I tried it more than once, setting planes in different places. It seems to me that its solver works great as long as you have walls and well-defined surfaces, but it can’t handle vegetation.

I really like the way Blender lets me place markers wherever I want, and all the reconstruction features such as setting the origin, axes, scale, etc., but it got confused when I turned the camera around.

Even more puzzling: I took my time placing a lot of markers in several places, made each search area bigger than normal to give it plenty of room to track, and even set the playback speed to a quarter, so it took me several hours to place all these markers correctly. But when I click Solve, it gives me the error I have seen many times before about not being able to reconstruct some tracks. However, it doesn’t mark red areas in the timeline like it did before. When I switch to the 3D view, the camera doesn’t move, even though it has the Camera Solver constraint applied. Even after I click “Constraint to F-Curve”, there is no curve; all the created keyframes are at the origin. So I’m puzzled about what the problem may be. I tried solving with different Refine options, still the same. Of course, given that the solve error is over 153, it’s obviously not a good solve.
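In case it’s useful for debugging, this is the kind of check I figure could be run in the Python console to see which tracks never received a 3D bundle (a sketch against the 2.6x-era API; I’d expect a camera stuck at the origin to go with is_valid being False or a huge average error):

```python
import bpy

clip = bpy.data.movieclips[0]
tracking = clip.tracking

print("solved:", tracking.reconstruction.is_valid,
      "avg error:", tracking.reconstruction.average_error)
for track in tracking.tracks:
    if not track.has_bundle:   # dropped from the 3D reconstruction
        print("no bundle:", track.name)
```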

If any of you has time to kill and wants to check out the footage, to figure out whether it can be tracked and what the best way would be, I zipped the original file and all the other extra files in the same folder (it’s an MXF-type format) and uploaded it here: http://www.mediafire.com/download/sosc6qwb4hciv4m/AA2024.zip
The idea behind this shot is to place a hoverbike on the grass that takes off and then goes down the driveway. Obviously it’s a project to learn from, not a movie :slight_smile:

Thanks again

Is it the same as the one you posted on YouTube?

Perhaps you could post your Blender scene itself so we could see what is going on with your tracks?

Sure, I didn’t think of that. Here it is:

Tracking 6-21_.blend (1.02 MB)

The footage is the one I posted the link for above, but here it is again: http://www.mediafire.com/download/sosc6qwb4hciv4m/AA2024.zip

One thing: the blend file doesn’t point to the MXF file that came from the camera; it points to a TIF sequence that I exported from After Effects at 24 fps, while the camera footage is 23.98 fps. Unfortunately the TIF sequence is 2.51 GB, and even if I uploaded it, nobody would want to download such a big file.