Blender and integration of its features (or lack thereof)

Caution, wall of text ahoy.

Blender is great. Its goals seem to be set high, with its feature set encompassing most of the CG/video functions usually found in separate commercial applications.

In Blender you can:

  1. Create/Edit and animate 3D content
  2. Paint and sculpt in 3D
  3. Composite
  4. Video edit
  5. Track cameras/Plane track
  6. Make games

So we are already aware (or should be) that Blender can’t be the best in all of those fields if development resources don’t cover the refinement needed. It is my opinion, however, that with small tweaks Blender can be an application where a project can be done from start to finish with little or no need for other applications.

You may ask, why would one want or need that? Well, as you will see, there are times when an import/export process demands so much computer time and artist supervision that an integrated application is something one can really appreciate.

So where does Blender fall short? I will try to explain through real-world commercial projects I’ve done in Blender, and where I got stuck. Feel free to correct me if there are any inaccuracies regarding Blender’s capabilities.

Case 1. Closing titles for a film project

The task
I’ve been tasked with creating some animated titles for a movie that has just been released at festivals, so I can’t really show any images/video since it’s not widely released yet.

The idea was that it’s supposed to feature photographs of the main actors on a dirty background, with names floating in 3D above them. The camera was supposed to emulate a handheld feel, with quick jumps between different photographs. The duration was about 3.5 minutes at 2K/24p. So I went to After Effects for this, as it seemed like a fit for such a project.

The setup was fast and easy right up until I needed to animate the camera, and then the crash fest/slowness started. After Effects isn’t good with high-res images, especially if motion blur is involved. And this was a 16K background with photographs at 4-5K pixels each.

Render times were about 5-7 hours for the 3.5 minutes at 2K. There was also the finicky graph editor for the camera animation, which has given me some grey hairs in the past, so I tried to use mostly expressions, but it soon became clear that it needed some tweaking by hand, and the deadline was approaching.

So I fired up Blender. At first it was just for kicks, to see whether it would crash with all the images loaded, and to see how it would handle importing the vector art (the titles). An hour later I had the sequence completed and playing in real time in Blender. A week later the project was finished, rendered out of After Effects and edited in Premiere for the final deliverable.

Wait, what?

Why go back to After Effects, you say? Go back with about 5500 2K frames x 3 layers (bg, photos, titles) that need to be rendered out of Blender, then again from After Effects, and then again as ProRes 4444 from Premiere…

Well…
http://www.blenderartists.org/forum/showthread.php?341612-Big-thanks-to-Bartek-Skorupa-who-made-the-AFX-exporter!

So what happened? I needed one small effect that required the Blender compositor, with the output then mapped back onto a distorted plane (the photographs). I needed to highlight faces on the photographs through a mask and blur the rest of the photo when the camera was over them, and I needed that to work in 2D, with the output mapped back. But you can’t do that: there is no texture output from the compositor.
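To be clear about what the effect itself looks like, here is a minimal sketch of the 2D part in the compositor; the file paths and blur radius are placeholders of mine, not from the actual project. The graph is trivial; the missing piece is the other end of it.

```python
import bpy

# Build a mask-driven blur in the scene's compositor node tree.
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

photo = tree.nodes.new("CompositorNodeImage")
photo.image = bpy.data.images.load("/path/to/photo.png")      # assumed path

mask = tree.nodes.new("CompositorNodeImage")
mask.image = bpy.data.images.load("/path/to/face_mask.png")   # white over the faces

blur = tree.nodes.new("CompositorNodeBlur")
blur.size_x = blur.size_y = 40                                 # arbitrary blur radius

mix = tree.nodes.new("CompositorNodeMixRGB")
out = tree.nodes.new("CompositorNodeComposite")

tree.links.new(photo.outputs["Image"], blur.inputs["Image"])
tree.links.new(blur.outputs["Image"], mix.inputs[1])           # blurred everywhere
tree.links.new(photo.outputs["Image"], mix.inputs[2])          # sharp original
tree.links.new(mask.outputs["Image"], mix.inputs["Fac"])       # white mask = keep sharp
tree.links.new(mix.outputs["Image"], out.inputs["Image"])
```

What doesn’t exist is a node at the end of this graph that writes the result back into an image datablock that the photo plane’s material could pick up.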

The only way is to render the whole sequence out to disk and map it back. I’m not doing that, not with 5500 5K frames (remember, the photos were 4-5K). I don’t need that; it’s not the final result. And then there is the issue of sync and speed tweaks. I need all of that live and in context to be able to judge the final result.

So I did what hurt the least. I rendered everything out without the effect, imported it into After Effects, and thanks to the great After Effects exporter by Bartek Skorupa I got the Blender camera and location nulls (Empties, whatever) back into After Effects, parented the layer with the effects to them, and finished the title sequence that way. Now that After Effects only had to deal with 2K footage, it worked well.

So how much time was wasted by the Blender compositor not being able to render an intermediate result back to a texture without going to disk first? About 3-4 days.

And it’s silly, because the compositor already has the result in memory; it updates on frame/value changes. It’s a freaking bitmap, dammit. Get the pointer to the result and assign it to an image texture the same way as a disk-based image, jeez.
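To illustrate that the data really is just sitting there: once a Viewer node is attached, the compositor’s result is readable from Python as the “Viewer Node” image. A rough sketch like this (the target image name and float-buffer choice are my assumptions) copies it into a regular image datablock with no disk round trip, though it still has to be triggered by hand or from a handler:

```python
import bpy

# The compositor writes its Viewer-node result into the 'Viewer Node' image
# datablock; copy those pixels into a normal image that a material can use.
viewer = bpy.data.images.get("Viewer Node")
if viewer and viewer.size[0] > 0:
    w, h = viewer.size
    tex = bpy.data.images.get("comp_texture")
    if tex is None:
        tex = bpy.data.images.new("comp_texture", width=w, height=h, float_buffer=True)
    elif tuple(tex.size) != (w, h):
        tex.scale(w, h)
    tex.pixels[:] = viewer.pixels[:]   # straight memory copy, no disk round trip
    tex.update()
```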

It’s not only for this limited use case I had. Think of the things you can create and animate in the compositor, and then think how they could be used as textures. Sculpting or texture-painting stencils, anyone? Fully procedural textures using masks and every other compositor feature, with or without finite resolution (with an adaptive rasterizer; see the SVG project). Someone was talking about rigging with displacement (hey, I did this 10 years ago with the XSI compositor), or controlling wrinkles from the compositor with drivers. Motion graphics, hell yeah. Add baked input layers from the render, like curvature/AO, and you essentially have an integrated Substance Designer. BTW, that’s also something I did in XSI 10 years ago: rendermap the procedurals/AO on data-change events so I could use them in other materials/effects. Here is the proof:

Anyway, this is part 1 of Blender and integration of features: my take on how to improve Blender with small steps that bring big results.

I’ve been ranting about allowing the compositor to do textures for years. Nobody seems to care or understand the potential, or how little it would take.

Good luck getting the devs interested though.

I think the problem is that there are already enough issues with the compositor, like the missing cache, missing playback, missing canvas, missing transform widgets, etc.
Also, what you describe sounds a lot like it would lead to cyclic dependencies and the like. Maybe the ongoing work on the dependency graph will solve some of that, though…

None of those missing things are really necessary for this to work. Just allow multiple graphs per scene, add an internal image output and run the compositor before render…

Yes, and it’s somehow solved in Softimage/Houdini with their integrated compositors. Magic, I tell ya!

Just joking; it’s basic evaluation logic, nothing complex or out of the ordinary.

BTW, all of this is kinda possible with Blender as it is. With a script rendering the compositing on a render-start event and reloading the file texture, I was halfway there on the aforementioned project. Lack of free time killed that experiment, though.
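For the curious, the experiment looked roughly like the sketch below. This is only an outline of the idea; the scene name, the image name and the choice of a frame-change handler instead of a render-start event are my assumptions, not the actual production script.

```python
import bpy
from bpy.app.handlers import persistent

COMP_SCENE = "CompFX"        # assumed: second scene holding the 2D comp graph
FX_IMAGE = "comp_fx.png"     # assumed: file-backed image the photo material uses

@persistent
def refresh_comp_texture(scene, depsgraph=None):
    if scene.name == COMP_SCENE:
        return
    comp = bpy.data.scenes.get(COMP_SCENE)
    if comp is None:
        return
    # Keep the comp scene on the same frame so the effect stays in sync.
    comp.frame_current = scene.frame_current
    # Render the comp scene (its node tree writes to comp.render.filepath)...
    bpy.ops.render.render(write_still=True, scene=COMP_SCENE)
    # ...then reload the file-backed texture so the main scene picks it up.
    img = bpy.data.images.get(FX_IMAGE)
    if img is not None:
        img.reload()

# A frame-change handler sidesteps a render-within-render call; the original
# experiment hooked a render-start event instead.
bpy.app.handlers.frame_change_pre.append(refresh_comp_texture)
```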

Yeah, not saying it’s impossible. I don’t know enough about the code anyway… :)
But you’re right, it might be totally scriptable in a hackish way. Maybe by using two scenes in one file, where you first render scene1, which writes to a texture, and then render scene2.

Yeah, it annoys me too. Allowing compositing or alpha manipulation on planes would be amazing, almost like deep comp. I have been hacking it the other way around, taking UV mapping coordinates from 3D space into the compositor to do the manipulation there instead.
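For anyone who hasn’t tried that direction: it relies on the UV render pass plus the compositor’s Map UV node. A minimal sketch, assuming a current Blender where the UV pass is enabled on the view layer (the 2.7x-era API differed) and a placeholder image path:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
# Enable the UV pass so the render layer exposes per-pixel UVs.
scene.view_layers[0].use_pass_uv = True

tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new("CompositorNodeRLayers")
map_uv = tree.nodes.new("CompositorNodeMapUV")
new_tex = tree.nodes.new("CompositorNodeImage")
new_tex.image = bpy.data.images.load("/path/to/replacement_photo.png")  # assumed path
out = tree.nodes.new("CompositorNodeComposite")

# Wrap the 2D image back onto the rendered geometry using the UV pass.
tree.links.new(rl.outputs["UV"], map_uv.inputs["UV"])
tree.links.new(new_tex.outputs["Image"], map_uv.inputs["Image"])
tree.links.new(map_uv.outputs["Image"], out.inputs["Image"])
```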

Yes, it seems that would be the problem. I’ve eventually run into other limitations or bugs that seem easy to solve from a user’s POV, only to find it’s a limitation of the current dependency graph and the bug report or feature request gets archived. But the depsgraph is top priority now, and compositor updates (like canvas awareness) are on the roadmap, so we will surely see improvements in a lot of areas.

It’s easy to get frustrated when your software can’t do what you want, but in these cases I always do the same thing. If you take a step back and see how other FOSS projects (GIMP, Inkscape) are doing, with very slow development, you realize what great work the devs are doing, and most importantly, how well the BF is driving the project. Blender is becoming one of the fastest-growing FOSS projects in the graphics area.

So patience, we will get there.

Seriously? We’re talking about concrete features here. Touchy-feely things like distance and other projects are irrelevant.

Hi,

There were talks in the Gooseberry project about that.

The problem is, essentially, the dependency graph (again!). You need a way to register and update users of an image when a change to the image happens. This already happens for some cases, such as materials updating when an image changes, but to allow complex chains of dependencies in a compositor system, we would need to do full dependency graph calculations to avoid cyclic dependencies and ensure the proper order of execution. We want to fix the dependency graph of course, but it’s a big project.

Well, I was hoping for the recent dependency graph refactor to take such data relations into account. So it’s a limited refactor, right, or just still unfinished?

Hopefully it at least takes care of click in-out and temporal changes, because that’s been killing my other semi-recent project in Blender, which I had to transfer to Softimage to finish.

http://www.blenderartists.org/forum/showthread.php?334116-Depsgraph-refactor