Functionality expansion: Custom motion vector export capability - willing to pay/donate

This is being posted in Discussion because it’s a discussion about further development of a feature already implemented in Blender. If I’ve posted in the wrong place, please accept my apologies and feel free to move it.

I’ve been working on an animation that I’d like to render with Octane; however, Octane currently lacks motion blur for animation. I intend to use RSMB from RE:Vision to create the motion blur in AE. RSMB uses a different type of motion vector than Blender puts out, which makes it hard for me to integrate into my production pipeline.

I am currently putting together a proposal for a broadcast motion graphics package for a new TV show, and I would like to see if someone can create this for Blender, or whether it can be done somehow with render nodes so that we could get a project file made. I’d be willing to pay a coder or, if preferred, donate to support further Blender development.

Below are some links to the RE:Vision pages where the RSMB motion vector information is written out. There is also a quote from RE:Vision support on what they think they are seeing in the Blender output, and some of what their After Effects plug-in needs to receive in order to generate the blur.

“This is definitely not XY raster space motion as RSMB expects.
I cannot find the explanation anymore about what these channels are. I see some vague reference about orientation and speed but looking at your example file, that does not look like that.
I cannot quite tell which of the 3 channels is speed and which is orientation, as if you look at it, they all have large areas of black in them; perhaps it’s being clipped by the 8-bit image. If you can render the vec image in floating point and the color image with an alpha channel, check in an app like AE in a 32-bit float project whether the 0 values go under 0.0… if 2 of them are speed x and y, there might be a way. Otherwise, feel free to play with SmoothKit Directional Per Pixel with Dir Source Interp set to Orientation or Direction (if spread over two channels), with Len Mode acting like Speed. But again, the fact that you have large full-on white or black areas is an indication that it’s not motion that is encoded here.”

And finally, this is what Blender outputs on the Vector pass based on a response to a thread in the forums here…

“The image (RGBA) is based on screen-space pixel movement of vertices: R being the x motion from the current frame to the previous frame, G the y motion from current to previous, B the x motion from current to next, and A the y motion from current to next.”
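In other words, a rough sketch of how you’d split that pass apart, assuming it’s loaded as a float array (e.g. from a multilayer EXR; the function name and shapes here are just for illustration):

```python
import numpy as np

def split_speed_pass(vec: np.ndarray):
    """Split Blender's 4-channel Vector/Speed pass into its two screen-space
    motion vectors, in pixels. `vec` is assumed to be a float32 array of
    shape [height, width, 4]."""
    to_prev = vec[..., 0:2]  # R, G: motion from the current frame back to the previous one
    to_next = vec[..., 2:4]  # B, A: motion from the current frame forward to the next one
    return to_prev, to_next
```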

I really like Blender, but it always seems that whenever I try to implement it into my production pipeline in any way but the most simplistic, it’s more of an island unto itself than a “team player”. I know the Mango project is working to make it more viable as a production pipeline tool, which makes me very happy. Maybe someone has already done this and there’s a tool out there somewhere, but either way, I’d love to hear back sooner rather than later from anyone who has thoughts on this or is interested in doing it.

First up, if you’re rendering in Octane, there is no way of changing the sensor size… which may mean that the images won’t align perfectly between Blender and Octane.

If you can upload some example RSMB images (preferably in EXR format), I’ll have a go at making a node network that converts Blender’s output into what you’re wanting.

From what I read, what is needed is adding the R and B channels together and feeding that into the R channel, and the G and A channels into the G channel… At the moment I am not sure whether RSMB accepts negative inputs, and not sure whether it’s add or subtract.
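For reference, here’s roughly that network scripted with Blender’s Python API — a minimal sketch, not the uploaded file itself. Socket names are the defaults, and it assumes the Vector/Speed pass is already enabled on the render layer:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
nodes, links = tree.nodes, tree.links
nodes.clear()

rl = nodes.new('CompositorNodeRLayers')     # provides the Vector pass output
sep = nodes.new('CompositorNodeSepRGBA')    # split the 4 vector channels
add_x = nodes.new('CompositorNodeMath')     # R + B -> combined x motion
add_x.operation = 'ADD'
add_y = nodes.new('CompositorNodeMath')     # G + A -> combined y motion
add_y.operation = 'ADD'
comb = nodes.new('CompositorNodeCombRGBA')  # rebuild an image from the sums
comb.inputs['A'].default_value = 1.0        # solid alpha for now
out = nodes.new('CompositorNodeComposite')

links.new(rl.outputs['Vector'], sep.inputs['Image'])
links.new(sep.outputs['R'], add_x.inputs[0])
links.new(sep.outputs['B'], add_x.inputs[1])
links.new(sep.outputs['G'], add_y.inputs[0])
links.new(sep.outputs['A'], add_y.inputs[1])
links.new(add_x.outputs['Value'], comb.inputs['R'])
links.new(add_y.outputs['Value'], comb.inputs['G'])
links.new(comb.outputs['Image'], out.inputs['Image'])
```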

I’ll help you out, but I don’t want payment… donate to Blender :wink: I don’t think any major programming needs to be done, as we can set up a simple compositing node network to get our output.

Reading this, it seems pretty simple – http://www.revisionfx.com/support/faqs/motion_vector_FAQs/motion_vector_math/
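As I read that page, the remap boils down to something like the sketch below. This is my interpretation, not RE:Vision’s code, and the 32-pixel max displacement is an assumption you’d tune per scene:

```python
def encode_channel(motion_px: float, max_disp: float = 32.0) -> float:
    """Remap a signed per-pixel motion value (in pixels) so that zero motion
    lands at 0.5 (127 in 8-bit). `max_disp` is a value you pick for the
    scene's largest expected movement, not something stored in the file."""
    v = motion_px / (2.0 * max_disp) + 0.5
    return min(max(v, 0.0), 1.0)  # clamp to the displayable 0..1 range
```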

I think this is correct… still test it to see if it’s working fine, but I think this is what is needed. I have uploaded the blend file as well. – http://www.pasteall.org/blend/16386

It seems like there should be large amounts of black where there is no motion… that’s OK. At the end of the node network, create a color Mix node with the type set to Multiply and the factor set to one. The first input should be the output of the Combine RGBA node, the second input should be the alpha output of the Render Layers node, and the output should go to the Viewer / Composite node.
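Tacked onto the end of the Python sketch from my earlier post (same variable names), that multiply step would look something like this:

```python
# Continues the earlier sketch; this replaces the direct Combine -> Composite
# link. Mix node sockets by index: 0 = Fac, 1 = first image, 2 = second image.
mix = nodes.new('CompositorNodeMixRGB')
mix.blend_type = 'MULTIPLY'
mix.inputs['Fac'].default_value = 1.0
links.new(comb.outputs['Image'], mix.inputs[1])  # combined motion image
links.new(rl.outputs['Alpha'], mix.inputs[2])    # render alpha as the mask
links.new(mix.outputs['Image'], out.inputs['Image'])
```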

Doublebishop, you’re literally an answer to my midnight prayers, thanks!

I think I should actually have a clear alpha channel on my image, so I’ve just brought the original alpha connector directly over. I’ll let you know how it goes, but this may well get submitted to the RSMB website as well, so that they can add it to their site. It would be a little tidier if someone wrote a tiny add-on that RSMB could have sitting there, available to download, that would do all this automatically. Even with my limited experience in Blender’s nodes, though, that diagram/project file let me make it work just fine.

Since I’ll be putting out a Z-depth pass as well, I don’t think the sensor size issue will be a problem going between Blender and Octane: I’ll make sure DOF is off everywhere, and that’s all the sensor size should affect so long as I have the correct dimensions.

My primary issue now is some sort of problem where colors show up and change on the flagpole, even though the only thing moving is the flag.

Any ideas?

I’ll have another crack at it tonight… I have a feeling my node network is incorrect…

I’m pretty sure the issue is being caused by the Normalize function; I’m going to see how it looks with that disabled.

Heh, that didn’t quite fix it either, though it did keep the flagpole from having any color on it.

I think we need to use a Subtract node instead of an Add node… I’ll take a look at the math in a bit.

Try this… All animated objects will have to be isolated somehow, so what I have enabled is the ‘Object Index’ pass, which you can set in the Object menu. – http://www.pasteall.org/blend/16388

Well, this should work, except that for whatever reason the Normalize function isn’t doing what we want it to do. We want the normalize to hold the RGB values of the no-motion areas at 127 on the 0–255 scale (I’m not sure how well I can output 16 bpc from Blender). But if you hook the Viewer up to the Normalize nodes coming off the two mixers, you’ll see that the whole background area where there is no motion is not at the same level for either one (those should both be 127 on every frame throughout). And if you hook it up to each of the RGBA outputs on the initial Separate node, with Normalize nodes on them, you’ll see that they are all a little different in the zero-movement areas. Not sure how to fix it really, but this is about the first time I’ve really dug into the nodes at all.
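One thing I want to try: since Normalize seems to rescale to each frame’s own min/max, a fixed remap might hold the zero level steady instead. A sketch of what I mean, using a Map Value node in place of each Normalize (the 32-pixel max displacement is just a guess to tune):

```python
import bpy

tree = bpy.context.scene.node_tree
remap = tree.nodes.new('CompositorNodeMapValue')
remap.offset[0] = 32.0        # assumed max displacement in pixels
remap.size[0] = 1.0 / 64.0    # (v + 32) / 64 == v/64 + 0.5, so zero motion -> 0.5
remap.use_min, remap.use_max = True, True
remap.min[0], remap.max[0] = 0.0, 1.0
# wire one of these in place of each Normalize node, per motion channel
```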

I also have Octane. The thing I like about it is that I have a valid license, which means I have the option to use it commercially. I know the Octane render team is planning to add motion blur in the future, as well as many other things like instancing, although there is already a beta for that. Bringing it into After Effects would mean I can’t use it commercially, because I do not have a commercial license, much less the plug-in. I think you should find a way to do it in Blender, because the bridge is smaller, meaning you are not using 20 different pieces of software for what you should ideally be able to accomplish in just Blender + Octane.

Larmannjan, while I appreciate the desire to do it all, or as much as possible, in Blender, and I’m more than open to something that could be quickly accomplished like that, I am trying to make Blender fit the production pipeline I already have in place, rather than changing my pipeline to fit Blender. That’s what I was talking about when I said that every time I try to use Blender in my production pipeline in anything but the simplest way, it seems like an island rather than a team player. If there’s an easy way to accomplish this push through Octane and back through Blender to give the blur, that’s great, but honestly, I’m going to be putting it through After Effects regardless, because that’s in my production pipeline and there’s no getting around it.

What format are you saving these images out as? I suggest OpenEXR.

Ya, EXR or PNG; combining using Add, Subtract, or Mix, with either the color node or the math node, I always get shifting of the median color.

Near Start:
http://www.day-vidsproductions.com/Vector-pass-RSMB_With_Static_Box-near-start.png
Near End:
http://www.day-vidsproductions.com/Vector-pass-RSMB_With_Static_Box-near-end.png

The Models:
http://www.day-vidsproductions.com/Image-content.PNG

Do you think this is an error in Blender, or an issue with the way we’re trying to mix the movement-channel information into the color channels?

Are you able to throw this blend file up on the internet somewhere? It doesn’t need to have textures or anything… I’ll check it out tonight.

I’ve got the Blend here:
(you’ll need to use “save link as”)
http://www.dayvids.com/flag2-RSMB-share.blend

Just updated it; I’ll explain what it’s doing now – http://www.pasteall.org/blend/16404
What it’s doing is checking whether each pixel is moving. We need to do it on a per-channel basis… I couldn’t figure out a way of doing it over the entire image with the alpha channel. Anyway, the math behind it is that raising to the power of two guarantees a positive result (-1 × -1 = 1… -2 × -2 = 4… the actual value doesn’t matter, we just want a positive number). That result is then piped into a Greater Than 0, which returns a 1 or a 0; add them all together, clamp it off again, then use that to drive a color Mix node between black and white.
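If it helps to see the logic outside the nodes, here’s the same thing in NumPy form (assuming `vec` is the 4-channel Speed pass loaded as a float array; this is just to illustrate the math, the blend file does it with nodes):

```python
import numpy as np

def motion_mask(vec: np.ndarray) -> np.ndarray:
    """0/1 matte: 1 where any of the four vector channels is non-zero."""
    moving = (vec ** 2 > 0.0).astype(np.float32)   # squaring forces positive
    return np.clip(moving.sum(axis=-1), 0.0, 1.0)  # any moving channel -> white
```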

This also removes the need for an Object Index pass.

Let me know if this works!

Carlo

EDIT: The reason the colour was changing is the normalization… Are you able to render out a second’s worth of footage through the pipeline to see if it is working? I don’t have RSMB so I can’t check.

We’re very close. I’m getting a slightly strange look in the direction of my blur, but it’s getting so close that it might not make a difference if I don’t blur too much. Also, do you think it’s possible we might be seeing some sort of mapping issue? Does the fact that Blender treats Z as up, rather than Y, make any difference?

Black output, however, still results in blur information. RSMB is looking for -1 to +1 values, and its 0 point would be 127 in an 8 bpc image, so there’s still a bit of trouble with what we’re doing. I’m going to see what I can come up with.
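To pin down what I think it expects (my assumption based on their description above, not confirmed by RE:Vision):

```python
def to_8bpc(v: float) -> int:
    """Quantize a signed motion value in -1..+1 so zero motion lands at
    mid-grey (127/128) in an 8-bit-per-channel image. Purely illustrative."""
    v = min(max(v, -1.0), 1.0)
    return round((v * 0.5 + 0.5) * 255)  # -1 -> 0, 0 -> ~127, +1 -> 255
```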