Feature request: a texture render mode with alpha, no lights, no shadows, etc.

For video editing it would be nice if rendering could be “downgraded” to just meshes + alpha textures in BI.

So no advanced options at all: no need for lights, no need for shadows.
Titles and backgrounds for a lot of movies are just 2D, so if the advanced render options could be disabled, the rendering could be done faster.
Blender could then display a background in a millisecond (like it does in texture edit mode), and the next millisecond add animated text.
Having some control of alpha on mesh textures (via PNG files), or of a whole texture without an alpha channel, would allow for simple 2D movies and 2D titles.
Given that Blender already has a lot of animation options, I think it would be a nice feature.
Near-realtime editing would become possible for 2D animation on low-end PCs.
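
For concreteness, here is a rough, untested sketch of the kind of setup meant here (a shadeless plane whose transparency comes from a PNG's alpha channel), scripted against the Blender 2.7x / Blender Internal Python API; the file path and names are made up:

```python
import bpy

# Hypothetical file path: swap in your own PNG with an alpha channel.
img = bpy.data.images.load(bpy.path.abspath("//title_card.png"))

tex = bpy.data.textures.new("TitleTex", type='IMAGE')
tex.image = img

mat = bpy.data.materials.new("TitleMat")
mat.use_shadeless = True                    # ignore lights and shadows entirely
mat.use_transparency = True
mat.transparency_method = 'Z_TRANSPARENCY'  # no raytracing needed
mat.alpha = 0.0                             # transparency driven by the texture

slot = mat.texture_slots.add()
slot.texture = tex
slot.texture_coords = 'ORCO'
slot.use_map_alpha = True                   # the PNG's alpha controls the material alpha

bpy.ops.mesh.primitive_plane_add()
bpy.context.object.data.materials.append(mat)
```

That plane renders flat and fast in BI, but it still goes through the full render pipeline, which is exactly what this request is about avoiding.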

I know BI is kind of at the end of its development, but it’s the fastest shader we have for such things.

I don’t know where to file requests; if there is an official forum/email, please let me know.

Maybe I misunderstood this post, but that’s actually something doable with the sequencer.

Suzanne is just a PNG image with alpha.

So basically remove all the 3D integration other than shadeless planes? That’s the most stupid idea in the history of feature requests.
Besides, you can do everything you just asked for in the compositor anyway, no need even for planes.

I’m sympathetic to this request as a long-in-the-tooth motion graphics director; what this take is suggesting is something that I’ve been doing for years by turning off as much as possible in the renderer. In fact I just did visuals for Nitin Sawhney at the Royal Albert Hall with shadeless planes, and to rebuff your suggestion, small troll, I wouldn’t be able to animate so effectively to music using compositor features. I suggest that in future you try not to say anything unless it’s positive or you have something useful to add.

Razor, I’d support something along these lines with you, maybe we can cook up a plan and present it to the community…

Moved from “Latest News” to “Blender and CG Discussions”

Though this sounds a bit like a support request. How does this differ from the scene proxies feature that already exists?

The main difference is speed, by disabling unwanted options; it’s so simple in C++ to just show an image without shadows.
Just for a second, realize that Blender has great stuff that works for 2D animations just as well…
Although it has always been focused on 3D, the tools apply to 2D just as well. And yes, there is OpenGL, but it’s slow; the real speed of what is possible here is already visible in Blender’s texture mode. But Blender can’t use that as a final render mode, and in that mode it can’t manage alphas that well.

If it could, we would have a 2D animation app better than Vegas (Blender is about to overtake Vegas), and even Shockwave Flash.
Blender already understands MP4 and H.264, which are quite popular in cameras these days. Combined, this allows for creating titles, text and 2D animations. …It’s already possible, but with a “huge” CPU “waste” on rendering features not required for such stuff.

Maybe you think I’m making a strange request. Put yourself in the role of a movie editor: you have to do titles, text, and clarifying images in your new debut movie. It’s possible… but it’s quite complex to do in Blender; just compare it to Windows Movie Maker or any other free movie editor. For now I try to stick with Blender (as Sony Vegas couldn’t handle the load), but I notice where Blender has its troubles with such “simple” tasks, which are part of everyday video editing work, and that isn’t far away from general 3D work.

What makes the OpenGL render “slow” is not the extra features, but writing frames to disk. Even Blender Internal works basically in real time if you use it in the “Rendered” viewport mode, especially if you don’t use any raytraced features.

You can already do all of these things in Blender, as fast as you need to. If you need a little more convenience, write a script for it (or have it written for you). You can’t expect Blender devs to create an extra mode for this just so “Joe Director” doesn’t have to learn Blender. Blender is a general tool; if you want a specialized tool, look for something else.
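
As a minimal, untested sketch of such a script (assuming the Blender 2.7x / Blender Internal Python API): switch off the scene-level features being discussed here and make every material shadeless.

```python
import bpy

r = bpy.context.scene.render

# Skip the expensive Blender Internal features globally.
r.use_raytrace = False        # no raytraced shadows/reflections/refractions
r.use_shadows = False         # no buffer shadows either
r.use_envmaps = False
r.use_sss = False
r.alpha_mode = 'TRANSPARENT'  # keep alpha in the output instead of rendering the sky

# Flat, texture-only shading for every material in the file.
for mat in bpy.data.materials:
    mat.use_shadeless = True
```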

You underestimate what it takes to develop something. You may believe this is “easy to do in C++”, but guess what: just firing up the stupid C++ compiler and re-compiling a single change takes forever. Integrating an entire specialized mode into Blender for something that is already possible is not going to happen.

The Blender Foundation generally does not take feature requests. You may do a feature proposal, but like 99.9% of all feature proposals, it will simply be ignored, because nobody cares.

No, that’s not the case. I have one of the fastest SSDs available; it is rendering where Blender takes its time.
And basically it doesn’t need to render at all for 2D animations.
Not much new code is required for this, just promoting texture mode to a render mode.

People who care about making 2D animations would care…

Why so negative?

Your “performance” assumptions are simply wrong.

The fact that you have an SSD is completely irrelevant, it’s still orders of magnitude slower to actually write to than system RAM. For your use-case (a few transparent planes), it’s also completely irrelevant how complex your shading is. To convince yourself, just try rendering 100 completely empty frames, then render 100 frames of whatever you wanted to render. The difference between those two times is how long the actual rendering took; the rest is the overhead from writing to disk.
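
If you’d rather script that comparison than time it by hand, here is a rough, untested sketch (assuming the Blender 2.7x Python API; the 100-frame range and output path are made up):

```python
import time
import bpy

scene = bpy.context.scene
scene.frame_start, scene.frame_end = 1, 100
scene.render.filepath = "//benchmark/"   # hypothetical output folder, used by both passes

def render_animation():
    start = time.time()
    bpy.ops.render.render(animation=True)
    return time.time() - start

# Pass 1: "empty" frames, with everything hidden from the render.
visible = [ob for ob in scene.objects if not ob.hide_render]
for ob in visible:
    ob.hide_render = True
empty_time = render_animation()

# Pass 2: the actual content.
for ob in visible:
    ob.hide_render = False
full_time = render_animation()

print("writing/overhead: %.1fs  actual rendering: %.1fs"
      % (empty_time, full_time - empty_time))
```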

Ironically, stuff like proper alpha-blending is likely much more expensive than the 3D transforms or shading you’d wish to disable. Still, for your use-case the overhead is completely irrelevant. Proper alpha blending has been a request for quite a while. I can see a use for that, but I don’t think you need it. BI can do everything you want, fast enough.

If you believe people who do “2D animations” care about that, talk to them about hiring someone to do it. Otherwise you’re competing with everyone else here for developer attention, which I believe you don’t deserve.

Addendum:
On practically all modern computers, “true” 2D rendering is slower. That’s because all GPUs implement hardware-acceleration for 3D graphics, not 2D. You get “2D” for free by just rendering 3D planes.

@razor,
i get where you are coming from. BI wastes a lot of time and memory doing useless computations on relatively simple scenes. one of the main objectives of the BEER project is to tackle exactly that. it does exactly what you need; no more, no less.

@beerbaron,
openGL does work with 2d too, but i’m not sure there’s any significant performance difference.

Modern OpenGL generally does not do anything in only 2D, unless you’re talking about blit operations on framebuffers. In hardware, this is likely implemented as drawing layered 3D planes, because that’s what a GPU is designed to do fast, already. There’s nothing to optimize here.

About the claim that BI “wastes a lot of time doing useless computations on relatively simple scenes”: What exactly is your source for that?

(Edited: Technically if you disable the Z-Buffer and only pass two-dimensional coordinates then that’s “2D rendering”, but don’t expect performance gains from that. You’d now have to draw all your layers sorted, not just the transparent ones)

Hey Beer, thanks for the alpha nerd routine, really good for alienating people from the topic… Users are users: they are people who use tools to get a job done, and sometimes they like to choose how those tools work. It sounds from what you are saying that you are opposed to what is being discussed here because you believe that your needs should be served before others’. I am a long-time supporter of the foundation and have helped many coders realise useful features over the last ten years, and I feel it is important that my voice is heard as loudly as the technocrats’.

Apparently it takes an Alpha Nerd to explain to you guys that your performance considerations are completely pointless. No need to thank me for that, I actually like to do that, as futile as it may be!

If you believe that developer effort should be spent on making things easier for non-Blender users, to do things that are already possible, then you can say so. It’s not that my personal needs should be served first, I’m not the one asking for features. If you’re asking for a feature to be implemented, you better have some good arguments for it. You should be able to defend it against the 99% of people who don’t care about it, but who might benefit if that development time is spent elsewhere.

Really, all that talk is kind of pointless though, because completely vague proposals like this never get anywhere anyway. That includes you BEER people.

So you have taken precious time from your day to try and shut down our pointless discussion?

Cheers…

@beerbaron, yeah i just had a quick look. yep, it appears opengl fakes its 2d.

as for the BI thing, i can’t think of a really good example right now… but here goes

  1. when mixing materials with nodes, BI renders each one individually even including channels that don’t contribute to the final result
  2. even with ambient occlusion and buffer shadows turned off, i still get noticeable occlusion shadows in my scenes. check out the image attached (the region around the axilla)
  3. if you’ve got geometry with transparency enabled but alpha set to a value of 1, BI still renders the objects behind it

granted, those are little issues that one could overcome with some effort… but from the perspective of a toon artist, it gets frustrating and unproductive to have to tweak a hundred dials on each material and light you add to the scene, or to composite out the unwanted effects that blender keeps adding to your render.
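
for what it’s worth, those dials can at least be flipped in bulk from a script. a rough, untested sketch, assuming the 2.7x Blender Internal Python API:

```python
import bpy

# flip the per-material and per-lamp dials in one go
for mat in bpy.data.materials:
    mat.use_shadeless = True       # ignore lights entirely
    mat.use_shadows = False        # don't receive shadows
    mat.use_raytrace = False       # keep the material out of raytracing

for lamp in bpy.data.lamps:
    if hasattr(lamp, "shadow_method"):   # hemi lamps have no shadow settings
        lamp.shadow_method = 'NOSHADOW'
    lamp.use_specular = False            # flatter, toon-friendlier lighting
```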

there was a point in time where i started considering rendering my scenes in unity and even BGE, just to avoid the tiresome work involved in setting up toon scenes.

In my view, the changes needed to help toon artists would benefit everyone: material presets, world presets, light presets, etc.
since the framework for presets already exists within blender, it’ll probably take a single dev less than an hour to make those changes
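
for illustration, a rough, untested sketch of what a toon material preset could look like using blender’s existing preset framework (following the AddPresetBase pattern from 2.7x; the names and stored properties are made up):

```python
import bpy
from bpy.types import Menu, Operator
from bl_operators.presets import AddPresetBase

class MATERIAL_MT_toon_presets(Menu):
    bl_label = "Toon Material Presets"
    preset_subdir = "toon_materials"            # saved under scripts/presets/toon_materials/
    preset_operator = "script.execute_preset"
    draw = Menu.draw_preset

class AddToonMaterialPreset(AddPresetBase, Operator):
    """save the active material's toon-relevant settings as a reusable preset"""
    bl_idname = "material.toon_preset_add"
    bl_label = "Add Toon Material Preset"
    preset_menu = "MATERIAL_MT_toon_presets"
    preset_subdir = "toon_materials"

    # which data the preset reads from, and which properties it stores
    preset_defines = ["mat = bpy.context.object.active_material"]
    preset_values = [
        "mat.use_shadeless",
        "mat.use_shadows",
        "mat.use_raytrace",
        "mat.use_transparency",
        "mat.alpha",
        "mat.diffuse_color",
    ]

def register():
    bpy.utils.register_class(MATERIAL_MT_toon_presets)
    bpy.utils.register_class(AddToonMaterialPreset)

if __name__ == "__main__":
    register()
```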

this will do at least until BEER comes along. the level of control and performance it promises is just unbelievable.

Attachments


I’m not trying to shut down any discussion. What is pointless is debating whether or not these features should get implemented, because I can guarantee with high confidence that they won’t.

Otherwise, feel free to write more bullshit about performance and C++ development, I might chime in. And if you feel such features would be useful, even if it wouldn’t make things render faster, then say so. I won’t argue with that. Though what you really want to do is complain about me being unwelcoming and “technocratic”. You yourself don’t care about these features either.

  1. If accurate, this is most likely not a performance concern. Have you ever tested the impact of this, by comparing a simple node setup to a more complex one?
  2. BI lets you turn off all occlusion features. If you’re still seeing occlusion, you’re either seeing something that isn’t occlusion, or you failed to turn it off.
  3. If you use raytracing, that is not the case. If you use rasterization, culling the objects by occlusion might bring a benefit, but calculating the occlusion information has a cost in itself. Generally, if you have a really complex scene, use raytracing; it will be faster than rasterization. And if you don’t have a really complex scene, then occlusion culling probably won’t make a big difference either. That’s the case for a software renderer; on the GPU (with hardware rasterization), occlusion culling makes more sense, but it’s not always a must-have either.

I’m not sure I understand where you’re coming from here. Have you tried using scene proxies in the VSE?

Also, Blender’s textured viewport is OpenGL (as is everything that Blender draws to the screen… even its buttons). There’s a good argument for having better alpha channel support in textured view, but writing those frames to disk is going to be your bottleneck. It takes more time to write full-screen frames to disk than it does to write them to a framebuffer (even on an SSD… the only possible exception being devices where solid state memory is strapped to the PCI Express bus, such as those made by FusionIO [now owned by SanDisk]).

@beerbaron,

  1. nothing conclusive. but i noticed a significant difference in memory usage
  2. i wouldn’t be surprised if you’re actually right. blender has some well hidden settings
  3. BI uses rasterization, right? there’s a huge difference especially if the occluded geometry has reflections and such

i’m not particularly bothered by the performance. i’m more than satisfied with my render times (usually about a minute). these little things however could pile up on certain scenes and cause some serious sluggishness. when aiming for photorealism, that’s usually not a bother, but sometimes you just want speed rather than accuracy. fake sss, fake ao, etc. that’s when you start to pay attention to all those little things. like when you suddenly realize that blender is rendering your character’s teeth despite his mouth being closed.

in unity/BGE for example, one of my scenes renders at around 30fps with all sorts of post processing effects. the same scene takes around a minute to render in BI with almost the same results. even with all the possible factors i can think of, i still can’t explain why it takes that long. that tells me that blender is doing something i don’t even care about. and i’m sure writing to disk doesn’t take a minute

now if the opengl renderer was a bit more polished, it would be perfect for most of my needs, but it lacks support for even the most basic stuff such as anti-aliasing and environment lighting. it’s so primitive that it even shows your selection outline in the render!

One option that has been raised a few times would be the ability to use OpenGL renders in the compositor. I’ve just had a try and the OpenGL render includes alpha; does anybody know a way to allow an OpenGL render to be used as an input?
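
One workaround, as a rough, untested sketch (assuming the Blender 2.7x Python API, run from inside a normal Blender session; the paths, frame name and node names are illustrative): write the OpenGL render of the animation to disk first, then pull those frames back into the compositor through an ordinary Image node.

```python
import bpy

scene = bpy.context.scene

# 1) Write the OpenGL render of the animation to disk (uses the normal output settings).
scene.render.filepath = "//ogl_frames/"            # hypothetical output folder
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_mode = 'RGBA'    # keep the alpha channel
bpy.ops.render.opengl(animation=True, view_context=False)  # render from the scene camera

# 2) Feed the resulting frames into the compositor through an ordinary Image node.
scene.use_nodes = True
tree = scene.node_tree

img = bpy.data.images.load(bpy.path.abspath("//ogl_frames/0001.png"))
img.source = 'SEQUENCE'

img_node = tree.nodes.new('CompositorNodeImage')
img_node.image = img
img_node.frame_duration = scene.frame_end - scene.frame_start + 1

comp = tree.nodes.get('Composite') or tree.nodes.new('CompositorNodeComposite')
tree.links.new(img_node.outputs['Image'], comp.inputs['Image'])
```

It’s not a true live input, since the OpenGL frames have to exist on disk before the compositor can read them.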