Rendering motion vectors

Suppose I generate tracking data using the motion tracker. What is the best way to render the velocity vector of each tracked point in 3D?

I don’t think that the vectors are directly available to render.

Motion vectors of tracked points are pretty useless, at least as a motion-vector pass of a raster image. What you probably want is the motion vectors of geometry as seen through the tracked camera. You get them by enabling the motion vector render pass.
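For reference, the pass can also be toggled from Python. A one-line sketch, assuming a recent Blender (2.8+), where the property lives on the view layer; in 2.7x the equivalent checkbox sits on the render layer instead:

```python
import bpy

# Enable the motion vector render pass for the active view layer (Blender 2.8+).
bpy.context.view_layer.use_pass_vector = True
```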

Actually, I’m working with this video. The idea is to overlay the velocity vectors of the detritus and then overlay the corresponding curl vectors. Unfortunately, although I can use the motion tracker to generate x, y, z position curves for each point, it’s difficult to derive velocity curves and hard to visualize any curve.
For the velocity vectors I just need duplicates of an arrow-shaped model with the “tails” held at each of the particle track positions and the “heads” set to (particle track position) + (particle track velocity). This creates an arrow that points in the direction of travel and grows in magnitude the faster the particle is traveling. Probably deep in Python territory at this point, though.
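For what it’s worth, here is a minimal bpy sketch of that construction, assuming the footage has already been tracked and lives in bpy.data.movieclips[0]. It draws each vector as a bare edge (tail at the track position, head at position plus one-frame velocity) and leaves the arrowhead geometry out; the names, frame choice, and pixel units are all placeholders:

```python
import bpy

clip = bpy.data.movieclips[0]
frame = bpy.context.scene.frame_current
w, h = clip.size  # clip resolution in pixels

verts, edges = [], []
for track in clip.tracking.tracks:
    m0 = track.markers.find_frame(frame - 1)
    m1 = track.markers.find_frame(frame)
    if m0 is None or m1 is None:
        continue
    # marker.co is normalized (0..1); convert to pixels, then take the
    # one-frame finite difference as the velocity: v = p(t) - p(t-1).
    p0 = (m0.co[0] * w, m0.co[1] * h)
    p1 = (m1.co[0] * w, m1.co[1] * h)
    v = (p1[0] - p0[0], p1[1] - p0[1])
    i = len(verts)
    verts.append((p1[0], p1[1], 0.0))                # "tail" at track position
    verts.append((p1[0] + v[0], p1[1] + v[1], 0.0))  # "head" at position + velocity
    edges.append((i, i + 1))

mesh = bpy.data.meshes.new("velocity_vectors")
mesh.from_pydata(verts, edges, [])
obj = bpy.data.objects.new("velocity_vectors", mesh)
bpy.context.collection.objects.link(obj)  # 2.8+; use scene.objects.link in 2.7x
```

Run it from Blender’s Text Editor on the frame you want to visualize; swapping the bare edges for instanced arrow meshes scaled by |v| would be the obvious next step.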

You could assign geometry with a particle emitter to each point, then turn off the particles’ initial velocity and gravity. But it won’t give you a predictive direction.

I don’t think that will do what I’m after, but maybe the Sverchok add-on will. I’ll investigate a little more in the next few days.

“Okay, let’s talk digital computer for a second, here.” :cool:

Strictly speaking, what we refer to as “a ‘layer’” is actually “a two-dimensional grid of ‘floating-point n-tuples.’” Such as:

  • (R, G, B, A) … colors
  • (dX, dY, dZ) … vectors

Some of those things have “a sensible visual representation,” but most of them don’t: “they’re just data.”

The “vector pass,” for instance, represents the “3D motion” of whatever object the renderer decided was “responsible for” a particular on-screen point. Although you very much need this information as an input if, say, you want to apply “motion blur,” it is quite nonsensical to ask, “what color is it?”

It is data. Nothing more or less. It is data that is associated with a particular (x,y) point on the output raster bitmap.

(And, of course, if “in a burst of creativity” you would like to “equate it with a color,” the Digital Computer is happy to oblige you: just tell it exactly how to do whatever you have in mind.)
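A concrete example of such a “burst of creativity,” in plain Python: map the vector’s direction to hue and its magnitude to brightness. (The max_mag normalization constant here is made up.)

```python
import math
import colorsys

def vector_to_rgb(dx, dy, max_mag=10.0):
    """Map a 2D motion vector to a color: direction -> hue, speed -> brightness."""
    hue = (math.atan2(dy, dx) / (2 * math.pi)) % 1.0
    value = min(math.hypot(dx, dy) / max_mag, 1.0)
    return colorsys.hsv_to_rgb(hue, 1.0, value)

print(vector_to_rgb(3.0, 4.0))  # magnitude-5 vector -> half-bright color
```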

If you can get the tracking data as an RNA struct, it should be possible to use it in Sverchok, or maybe the Animation Nodes add-on. With Sverchok you could create vertices at the tracker position on frame x and on frame x-1 and connect them with an edge, and also construct an arrowhead at the position of one of the vertices.

From this clip the most you can get is tracker positions in screen coordinates, not moving 3D points. For your application the screen coordinates should be enough, since the view is nearly perpendicular to the water surface. You can give them meaningful units by measuring a known distance in the image and scaling the values accordingly.
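A back-of-the-envelope sketch of that calibration (all numbers here are made up):

```python
# Hypothetical calibration: a feature of known real-world size, measured in
# pixels on the image, gives one scale factor for the whole (flat) scene.
known_length_m = 0.30        # e.g. a 30 cm object visible in the shot
measured_length_px = 120.0   # the same object's length measured in pixels
m_per_px = known_length_m / measured_length_px

fps = 30.0                   # clip frame rate
v_px_per_frame = 8.5         # a tracker's one-frame displacement in pixels
print(v_px_per_frame * m_per_px * fps, "m/s")
```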

Sverchok really can do this. We have two nodes to help: Calc Normals and Vector Normal, which work with edges and polygons (Calc Normals is the newer one and handles edges better).

Then you add the initial points to the vector normals, join them with the UV Connect node, and finish with the Skin Mesher, Tube, or Adaptive Edges node, whichever you prefer.

amoose136, please import this gist into Sverchok: 18e64e9104dde6e61505

Recently I used a Python script to export all the tracks and then analyzed them in Matlab. The curl of each track was independently approximated as the square root of the product of curvature and velocity. Particle position is in x and y, and curl is in z. The next step is importing this data into Blender to drive the size of arrows whose start points are at the empties connected to the tracks and whose endpoints are at the same point plus the curl in the z direction.
http://i.imgur.com/qMas0kt.gif
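In case anyone wants to reproduce that step without Matlab, here is a NumPy sketch under the same approximation, curl ≈ sqrt(curvature × speed); the track array and frame rate are placeholders:

```python
import numpy as np

def track_curl(xy, fps=30.0):
    """xy: (N, 2) array of one track's positions, one row per frame."""
    d1 = np.gradient(xy, axis=0) * fps   # first derivative (velocity)
    d2 = np.gradient(d1, axis=0) * fps   # second derivative (acceleration)
    speed = np.hypot(d1[:, 0], d1[:, 1])
    # signed curvature of a planar curve: (x'y'' - y'x'') / |v|^3
    kappa = (d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / np.maximum(speed, 1e-9) ** 3
    return np.sqrt(np.abs(kappa) * speed)  # the poster's curl approximation

xy = np.cumsum(np.random.randn(100, 2), axis=0)  # dummy track for testing
print(track_curl(xy)[:5])
```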