Viewport FX

Just wondering, what's stopping us from supporting older OpenGL code as well as OpenGL 3+?

Uncle Entity: I wasn’t aware the Intel driver situation was so bad, but that doesn’t really change anything. By the time the project is finished and merged into trunk, OpenGL 3.3 will be positively ancient as well. While it’s nice that Blender sort of kinda works on a netbook, it really isn’t the right hardware for it. Netbooks are designed as secondary or even tertiary computers for mobile web browsing, content consumption, showing presentation slides and light text work, not as primary CG workstations. I don’t think it’s unreasonable to designate ancient or really inappropriate hardware unsupported and tell the affected people they’ll need to get together a few hundred dollars to upgrade within a year or so. The absolute worst-case scenario is that they’re stuck with a version of Blender a few version bumps behind for as long as it takes them to get the money together and upgrade. Is that really such a horrible price to pay for a reasonably modern foundation for Blender’s future viewport drawing?

+1 viewport rotation locking
+1 pre-selection highlighting
+1 wire color

The main point is they still make laptops with these chipsets.

Doesn’t much matter to me though, I can always find something to do with my free time other than producing the occasional patch/bug fix…

The “performance” vs. “eye-candy” debate is a false dilemma. I guess I’ve done a bad job of trying to dispel the notion; people seem to believe I’m trying to have my cake and eat it too. The goals of my proposal are the moist cake and the delicious frosting!

There is, I think, some confusion about which version of OpenGL is used versus the available features and performance. The vast majority of Blender’s features are covered by non-deprecated features of OpenGL 1.1 (and I did not just make that up; I have a script I’ve written that tells me).

If I were to remove the troublesome deprecated functionality and rewrite things to avoid other performance problems, then any features from newer versions of OpenGL could be optionally enabled, with fallbacks to version 1.1 (where the fallback is not too slow and the code not too complex to maintain). I do not think OpenGL 3 or 4 contains some magical elixir that makes drawing faster, because vertex buffer objects are available almost everywhere and that is the pipeline all versions would be using.

It is worth eventually moving towards a code path that uses only the “core” functionality of OpenGL 3 or 4, and that is the goal of removing the deprecated functionality. The hope is that by asking for an OpenGL 3 or 4 context you get a driver that has been given more love. However, this code path would be just like any other OpenGL 1.1+ or extension path.

To make a long story short, if we remove deprecated functionality, OpenGL 3 and 4 can be treated just like any other extension. Extensions enable additional performance and features, but not so much performance and so many features that we have to dump older versions entirely.
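To make the “treat OpenGL 3/4 like any other extension” idea concrete, here is a minimal sketch of the kind of startup decision I mean. The version and extension strings would come from `glGetString(GL_VERSION)` and `glGetString(GL_EXTENSIONS)`; the path names and the function itself are hypothetical, not anything actually in Blender.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical draw-path selection: pick the best available pipeline at
 * startup, falling back gracefully on old contexts. */
typedef enum { PATH_IMMEDIATE_11, PATH_VBO, PATH_CORE_GL3 } DrawPath;

DrawPath choose_draw_path(const char *version, const char *extensions)
{
    int major = 0, minor = 0;
    sscanf(version, "%d.%d", &major, &minor);

    if (major >= 3)
        return PATH_CORE_GL3;  /* hopefully a driver that has been given more love */

    /* VBOs are core since OpenGL 1.5, and exposed as an ARB extension before that */
    if (major > 1 || minor >= 5 || strstr(extensions, "GL_ARB_vertex_buffer_object"))
        return PATH_VBO;

    return PATH_IMMEDIATE_11;  /* last-ditch OpenGL 1.1 fallback */
}
```

The point is that the GL3 core path is just one more branch in a table like this, not a separate program.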

Whatever they are doing, it would have much more to do with a carefully managed memory hierarchy and level of detail than with OpenGL. Achieving those kinds of numbers means you have done a lot of work to make sure you are giving OpenGL far fewer than 40 billion triangles.

This is good news, while I don’t use older hardware really, I do enjoy occasionally blending on a netbook :slight_smile:

What you are doing sounds really great. I hope your efforts make it into a Blender release not too far from now :slight_smile:

Jason, thanks for your reply. You did an excellent job on the sculpt part of Blender; I know you’re going to surprise us.

The vast majority of Blender’s features are covered by non-deprecated features of OpenGL 1.1

I don’t quite understand how you arrived at this conclusion. It’s true that the major bottleneck is the use of immediate-mode in place of VBOs (which may be available from a 1.1 context if the relevant extension is supported) but as far as the OpenGL 3 spec is concerned, all of the fixed-function transform&lighting pipeline is deprecated in favour of shaders, too - for that reason it is not possible to remove deprecated functionality without breaking compatibility with older/crappier hardware. The major point of the OpenGL 3 spec was to define this deprecation, not add features. And while it is trivial to add support for VBOs, maintaining two different lighting pipelines is a bit more involved, and that’s exactly the situation we have right now.

I may have misspoken slightly when I said “remove” all deprecated functionality. In many cases the right word would be “duplicate”. For a context that has no GLSL, there would have to be some way to transform and light a scene. To support my argument: the presence of that legacy code in Blender does not affect the performance of contexts that use GLSL.

That is a question of whether we want to support multiple code paths. In the grand scheme of things, having two transform and lighting paths is really not that difficult. We would need to duplicate some subset (maybe the complete set) of OpenGL’s lighting anyway. If we did this by, say, creating a function called vfxLightfv, then the legacy OpenGL implementation is trivial: it calls glLightfv.
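A sketch of what a vfxLightfv-style wrapper could look like, assuming the vfx* names from the post (they are hypothetical). The idea is pure state management: keep a shadow copy of the light parameters that either back end can consume. On a legacy context the function would additionally forward to the real glLightfv; on a GLSL context the shader path would upload the shadow copy as uniforms.

```c
#include <string.h>

#define VFX_MAX_LIGHTS 8

/* Hypothetical parameter names standing in for GL_POSITION, GL_DIFFUSE, ... */
enum { VFX_POSITION, VFX_DIFFUSE, VFX_SPECULAR, VFX_PARAM_COUNT };

/* Shadow copy of light state, readable by any drawing path. */
static float light_state[VFX_MAX_LIGHTS][VFX_PARAM_COUNT][4];

void vfxLightfv(int light, int pname, const float *params)
{
    memcpy(light_state[light][pname], params, 4 * sizeof(float));
    /* legacy path only: forward to glLightfv(GL_LIGHT0 + light, ..., params); */
}

const float *vfxGetLightfv(int light, int pname)
{
    return light_state[light][pname];
}
```

Because the wrapper is just bookkeeping, it costs essentially nothing on either path.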

glLightfv was not removed from the spec because it has bad performance! It was removed because it was decided that carrying around a single old way of doing lighting was extra cruft. I find that a terrible thing, because nobody seems to have published a GLSL shader that perfectly duplicates it. It was not removed because it was somehow hard to do GL lighting fast. If we like the way OpenGL did lighting before, then we will have to implement it ourselves. There really is not any reason not to re-implement OpenGL’s fixed-function pipeline when doing Phong-style lighting, especially if you want to provide a fixed-function fallback. A creative person could even extend it so that it has additional power when used inside GLSL.

Most of what was done with OpenGL 3 deprecation had to do with aesthetics and the comfort of driver writers. They basically just said “You can implement all this stuff yourself now, so get to it!” I’ve been thinking about writing an OpenGL 4 library that provides a lot of the old functionality built on top of the new core.

In summary, we need to duplicate most or all of OpenGL’s fixed-function lighting and transformation pipeline anyway, short of redesigning how Blender works, so why not make it as compatible as possible with the deprecated functions we have to keep if we want to provide backwards compatibility? It has no effect on performance (it is just state management) and there would be no need to design a new API.

Here is a concrete example of what I was talking about in my previous post.

OpenGL 1.0 specified three matrix stacks and an API for managing them. These functions are now deprecated. There are a couple of options for dealing with this when writing new code.

One is to slavishly copy the old functionality and hook it into the new core API by defining three uniform matrices and some functions to manage them in exactly the same way. This would be a great option if one wanted to add many lines like “#define glMultMatrixf myMultMatrixf” to their header files and have it work in OpenGL 4. This would probably perform exactly the same as it would in older OpenGL.

If you do not have legacy code, or you are free to rewrite, then you will recognize that the OpenGL 1.1 interface is really crufty. You will find a library online or roll your own which lets you have as many matrix stacks as you like as well as additional functionality like a separate camera matrix and matrix inversion. You might even make it source compatible like the first example by making the API similar enough to do a search and replace.
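The first option above can be sketched in a few lines. This is a minimal user-side 4x4 column-major matrix stack mimicking glPushMatrix/glPopMatrix/glMultMatrixf; the vfx* names are hypothetical, and a real version would add depth checking and the rest of the matrix calls.

```c
#include <string.h>

#define STACK_DEPTH 32

/* A user-code replacement for one of OpenGL's deprecated matrix stacks. */
typedef struct {
    float m[STACK_DEPTH][16];  /* column-major, like OpenGL */
    int top;
} MatrixStack;

static const float IDENTITY[16] = {
    1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,0,1
};

void vfxLoadIdentity(MatrixStack *s) { memcpy(s->m[s->top], IDENTITY, sizeof IDENTITY); }
void vfxPushMatrix(MatrixStack *s) { memcpy(s->m[s->top + 1], s->m[s->top], sizeof IDENTITY); s->top++; }
void vfxPopMatrix(MatrixStack *s)  { s->top--; }

/* Post-multiply the top of the stack by n, matching glMultMatrixf semantics. */
void vfxMultMatrixf(MatrixStack *s, const float *n)
{
    float r[16];
    const float *t = s->m[s->top];
    for (int c = 0; c < 4; c++)
        for (int row = 0; row < 4; row++) {
            r[c*4 + row] = 0.0f;
            for (int k = 0; k < 4; k++)
                r[c*4 + row] += t[k*4 + row] * n[c*4 + k];
        }
    memcpy(s->m[s->top], r, sizeof r);
}
```

Instead of living in driver state, the top matrix would then be uploaded with something like glUniformMatrix4fv before drawing, which is the whole difference between the old way and the new.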

So many of the deprecated API functions are solely for managing state. By getting rid of the fixed-function pipeline, a lot of this state became obsolete. But what was defined by that state is still stuff you need to know in order to render a scene. The main thing they did was stop telling you exactly what you needed to specify and just let you do it your own way. It was not because the old way was particularly slow; it’s just some matrix transforms and Phong lighting, after all. The simplest shader almost always has to do those things, and it will need about the same state in order to know how.

So it is correct that OpenGL 1.1 without deprecated functionality has no way to say where a light is or how to transform the scene, but technically neither does OpenGL 4 :slight_smile:

Thanks for clearing that up. My concern was primarily about the developers having to support and debug separate code paths. In my limited experience that always causes headaches when things don’t work quite as expected. You say it’s not so hard, but it’s still trading effort in exchange for being able to run Blender on hardware that is inappropriate anyway. Forgetting about compatibility with the legacy stuff and concentrating on making the new stuff the best it can be would probably be easier and yield fewer bugs and a better codebase. But in the end it’s your decision, so just ignore my rambling :slight_smile:

Another question that would have to be asked is whether it is worth the up-front cost to design an API so different from the original OpenGL API and then port all the old code. I happen to think that the old API is a good starting point. In my view, the main problem from a software engineering standpoint is that, although graphics APIs tend to use lots of global state, there is no reason why a library built on top of them cannot be more object oriented.

I also do not get to decide these things. The fixed function pipeline is not going anywhere this summer, but I’m going to make sure that it has started to pack its bags :slight_smile:

Hi Jason,

This all sounds very cool. Personally, I put things like pre-selection highlighting, gradient background and wire colours more in the eye-candy category.

Whereas I feel that if I were able to see things like procedural textures in the viewport, blends and cloud textures for example, in a sort of approximation of a ‘high quality shaded’ viewport mode, that would be far more useful.

This would reduce my work time massively since, as you point out, I wouldn’t have to hit render time and time again.

All of the above would be nice though :wink:

Thanks
Aidy.

If I had not had to trudge through the texturing code in the past I would probably foolishly say that procedural textures in the viewport might not be that hard, but I know better :slight_smile:

“Procedural Textures in Viewport” is its own Google Summer of Code proposal waiting to happen (probably in combination with a bit of refactoring of Sculpt/Paint textures and a general caching system for textures).

A real trick would be to figure out how to pull it off without creating two (three?) separate but very complex code bases that have to behave reasonably the same. My head is already starting to hurt.

I could have misjudged the difficulty of that potential project, but that is my first impression.

EDIT: Now that you got me thinking about it, I’m imagining how this may be much easier once we have PTEX integration. But I’m not sure I even want to put that on the back burner. If I do SoC next year I definitely want to return to Paint/Sculpt.

Unified texture workflow with PTex support for viewport rendering and painting-sculpting would be great!

The thing that annoys me at the moment about textures in the viewport is that I have to manually choose a texture in the UV/image editor for it to appear in the viewport. It would be much more logical if the viewport showed the (first) diffuse texture from the assigned material (should be possible with a simple change in the code?). Diffuse + alpha would be even better, but that is probably not so easy with the current viewport code…

Anyway, good luck with viewport fx project!

Well, I hope, for my sake, that you do… lol. Thank you for your efforts in general, and ty Google for sponsoring.

I can see a bit more clearly now where you intend to go with this. I don’t believe the complication lies in recreating the old API, but in dealing with the limitations of the old one - e.g. fixed-function only supports a limited number of lights and is inherently less flexible than the shader architecture. Sometimes you’re dealing with eye-candy, but sometimes it is relevant to functionality. One thing that comes to mind: right now a huge bottleneck is the drawing of outlines for selected objects. It could instead be done very cheaply in a post-process, but that requires shader support, and that arguably is a completely different way of going about things. Anyway, you must’ve given this more thought than I have, so I wish you the best of luck with your work.

http://dl.dropbox.com/u/1482587/ViewportFX.png

To push Blender to the next level, you should seriously consider adding viewport lens flare. The current look is so 90s and deters many users from trying out Blender, simply because of the lack of that extra bit of polish :slight_smile: Here’s hoping for the best!

P.s.: On a serious note, three things I’d personally like to see are improved viewport performance, a wireframe view where only the front-facing wires are shown:
http://dl.dropbox.com/u/1482587/Occluded%20wireframe.png

And the option to have BGE-like post-process filters in the 3d viewport without the BGE running. Of course, “merely” improving the performance will be cool enough on its own. Good luck with the project :slight_smile:

Don’t even tempt me with the lens flare dude, because as soon as I can do it, I will do it.