Any News about Viewport FX II ?

I don’t have a real opinion on how good the Viewport FX approach is, since I haven’t looked too closely at it yet.
However:

This would mean that I (and many other developers) wouldn’t be able to use Blender anymore :wink:

Also, the GL code needs to be rewritten with or without the extra layer. The extra layer will just ensure compatibility.

The thing is, I’ve heard several times that something needs to be rewritten, yet those who say it don’t step up.
It’s a pity, since some of them actually seem to have some knowledge in the area.

http://www.blender.org/blenderorg/blender-foundation/development-fund/

Well, I am a poor student, yet I can use GL 3.3 and am planning to get a better GPU (if only to play with compute shaders).
That won’t change the fact that my laptop (which I do a fair bit of development on) only supports 3.1 atm.

Another thing, and it might sound strange to some, is that I believe developers should use slightly older hardware most of the time, to force them to write more efficient code (I’ve even heard there are companies that do this on purpose).

Leave the immediate-mode stuff as it is and optimize what needs to be optimized. The immediate-mode stuff isn’t a performance problem, as long as it isn’t used for heavy mesh drawing.
However, viewport quality (and to some degree performance) could be improved just by starting to require more OpenGL features. If you hide them behind another abstraction layer, then whenever you want to use a new feature, you need to either:

  • Implement the feature in the abstraction layer (possibly with a fallback) and then again on top of the abstraction layer (extra work)
    or
  • sidestep the abstraction layer and use OpenGL directly (inelegant)

If you don’t really need the abstraction layer (which is my point, because Blender shouldn’t support backends other than the OpenGL compatibility profile), then you shouldn’t have it.

This won’t work.
If you want to use any features that require OpenGL > 3.0, you can’t use the compatibility profile, since some vendors like Apple and Intel don’t support it.

Another thing, and it might sound strange to some, is that I believe developers should use slightly older hardware most of the time, to force them to write more efficient code (I’ve even heard there are companies that do this on purpose).

I’ve heard it before, but it doesn’t make sense to me. I’ve heard the same about artists. You’re supposed to give them deliberately outdated hardware so they create more efficient scenes.
Anyway, there’s a difference between slower hardware and hardware that is incapable of supporting certain operations.

If you want to use any features that require OpenGL > 3.0, you can’t use the compatibility profile, since some vendors like Apple and Intel don’t support it.

That’s not quite accurate; >3.0 extensions are available from older legacy contexts.
However, as I’ve said, I wouldn’t support any current version of Mac OS. Their OpenGL support is pathetic. As for Intel GPUs, I wouldn’t support those, either.

In Sergof’s case, I don’t think the hardware he has right now is preventing him from advancing the physics engine in a way that can utilize modern hardware (even some low-end machines these days have some kind of multi-core processing). It’s not like Cycles, where you would need a late-generation GPU to make sure an implementation using the latest instructions in the CUDA API actually works (unless, of course, you develop a CPU-only feature and Brecht adapts it for the GPU).

Anyway, I wouldn’t call out any developer for using what appears to be outdated hardware. All I would make sure of is that his hardware is the right type for producing a good implementation that’s optimized, stable, and works as advertised (which is precisely what he is doing).

Of course you should work on recent hardware if you want to exploit some new feature.
That comment wasn’t meant to be some general guideline :slight_smile:

That’s not quite accurate; >3.0 extensions are available from older legacy contexts.
However, as I’ve said, I wouldn’t support any current version of Mac OS. Their OpenGL support is pathetic. As for Intel GPUs, I wouldn’t support those, either.

The standard allows vendors not to implement them; even if some do, you cannot rely on it.
Also, recent Intel GPUs (and drivers) aren’t all that bad.

I thought Viewport FX was supposed to have some performance significance for multires.

Does anyone actually know if he was successful and met his goals? The students didn’t do a great job of reporting this year; they did at the beginning, but then it seemed like a lot of them stopped reporting.

Zalamander, that “Double Sided off” tip is the BEST suggestion ever!

A huge improvement, 4 mil polys without slowdown!

A BIG THANK YOU!

Reports are all available here:
http://lists.blender.org/pipermail/soc-2013-dev/

Last report:

These last two weeks have been rather busy (since for all practical
purposes I have 3 jobs), but I will be able to bring everything to what I
feel is a satisfactory conclusion.

I always underestimate how challenging and time consuming writing is, so my
plans to do any detailed testing will have to wait.

I spent week 13 reviewing the patch for Viewport FX. I came up with a
to-do list of about 300 items. Fortunately most of those are more of a
“wish list” and the urgent issues are mainly missing features when running
on OpenGL core profile or ES.

Doing anything quickly with such a broad patch is difficult! Today I
started to act on a refactor I planned when writing the documentation
(essentially I was writing the documentation based on the refactor, not the
code as it stood at the time). It took several hours just to get
everything to compile again due to new header files and renaming and my
hands and wrists hurt afterward.

It is like trying to change the overall shape of a model after adding too
many vertices :slight_smile:

I’ll be crunching this weekend to have as much done as possible by the firm
pencils down date.

We know the reports are there, but there were two elements supposed to be there: reports and a wiki. Reports were filed; the wiki, on the other hand, wasn’t in the list with all the other wikis for this year’s projects. As far as I can tell, this was the only project that didn’t add a wiki to that page, which is what I found odd. Not a show stopper, not “owed”, but it would be nice if all students followed the guidelines. The “proposal” is only valid before the project begins and might change as time goes on, so it is not a reliable indicator of what the final project might look like.

I thought, for instance, the Paint Tools project was handled very well: the student posted regular videos over the summer and was very active in the forum thread, where he responded to requests, posted screenshots, etc.

I want to make clear that none of this is supposed to be a direct criticism of the students or their actual coding work; as you can imagine, I appreciate it all a great deal. I’m just discussing this because I care about this kind of information about Blender development being more accessible to casual users.

As far as the viewport in general, I use it for rendering a lot, so obviously I would like it to be as cutting edge as possible. I understand the idea of making Blender usable by people with outdated hardware, but there comes a point when this kind of thing drags development down.

If it were a case of having Blender in top shape and just maintaining a sharp tool without the need to make it sharper, I could understand, but when we discuss OpenGL and general realtime rendering, we touch an area where Blender lags behind a lot. Look at viewport performance, look at things like alpha transparency issues, realtime shadows, etc. These aren’t cutting-edge features; they are things that should really work better right now. This stuff should be center stage.

Also, on the topic of Blender being accessible to users with old hardware, doesn’t Cycles require a very specific GPU brand and only current models?
You see, then, that the message is confused: one day we read about Ton having dinner with all the major players in CG and Blender being considered by studios as a serious tool, but then we still have part of the community arguing that we need to keep it 1996-style so that users with old laptops can render teapots. Couldn’t you have a “learning” version that links to some old build of Blender guaranteed to work on outdated hardware, and then just keep marching on with the more current build? Because you can’t have both.

Reports were filed, the wiki on the other hand wasn’t in the list with all the other wikis for this year’s project.

True, the reports aren’t even on his wiki. But I actually think that’s fine, as I consider wiki docs end-user docs, and there’s clearly nothing to write down for end-users (yet). From the reports, I’d consider Viewport FX II successful, but unfinished. Not sure if there’s a measurable performance improvement by now; I guess you need to try out a branch build (or build it yourself, as there are just a few GSoC builds @graphicall).

I thought, for instance, the Paint Tools project was handled very well: the student posted regular videos over the summer

That was great for sure! But how would you apply that to jwilkins’ project? He didn’t do anything to the UI or add obvious features. He could have done videos showing and explaining code and implementation decisions - tech talk for techies (not trekkies) - with little use to the regular user and not much of a gain in information compared to the written reports.

I would like it to be as cutting edge as possible.

We all do :slight_smile:

doesn’t Cycles require a very specific GPU brand and only current models?

Not that I’m aware of. It can always run on the CPU and should work OK with most GPUs. If you want it to be really fast, however, you need a recent Nvidia card with CUDA support (OpenCL support lags behind in terms of performance, AFAIK).

so that users with old laptops can render teapots?

The plan is to make OpenGL 2.1 the minimum requirement in 2.7x, and possibly 3.0 in 2.8x.

And there was a discussion about 32bit / Windows XP support, here’s a quote:

One example of an API not present in XP is InterlockedCompareExchange64 ( http://msdn.microsoft.com/en-us/library/windows/desktop/ms683562(v=vs.85).aspx)
But that is actually not the problem! MS drops XP support next year, so XP users will become rare. Why should we support XP in Blender 2.7/2.8?
32/64-bit portability is just more work to do. You lose speed in some places, but you might be right: just for portability this might be good. But on the other hand, who uses 32-bit applications in 3D?
It makes absolutely no sense to stay on 32 bits when you can have more memory in a 3D application.

Actually, if you were looking for the Viewport FX documentation

Now, that’s not very sexy, let’s look at what viewport FX actually is:

Viewport FX I proposal: The work proposed here has two main goals. The first goal is to rewrite all the code in Blender that draws to the screen so that it uses a single higher level library that manages geometry and state efficiently. The second goal is to take advantage of the new layer of abstraction and greatly increase flexibility by making the drawing and compositing of the viewport programmable through some kind of textual description.

Viewport FX II proposal: This is a proposal to continue work on updating the way Blender draws to the screen. Blender relies on an older version of the OpenGL graphics library that is no longer fully supported by hardware providers. For this reason, when new generations of graphics cards are released, Blender will not fully benefit from the increases in capability and performance. In addition, new mobile devices show every indication of becoming the primary means by which most people use computers. For that reason, Blender and its game engine should be made to work with OpenGL ES, which was designed for those kinds of systems.

In the short term the benefit to Blender should be increased performance for those with modern graphics cards. However, the primary benefit of this project is that Blender will be prepared for a future on faster and more mobile hardware. Additionally, future projects, such as refactoring the viewport, depend on having a solid foundation (this project itself originated from an effort to refactor viewport draw that was stymied by legacy OpenGL code).

So basically, while Viewport FX is eventually going to lead to an improved viewport (some of the Viewport FX I proposal sounds really amazing, performance notwithstanding), Wilkins’ current GSoC projects are ‘cleaning out Blender’s basement and reorganising everything that isn’t garbage’.

Everything else can be found here.

Okay, so in terms of performance boost, what numbers are we talking about? Is it possible to get close to a 50% boost?

Not even Mr. Wilkins knows yet, I suppose. He’ll need to finish up the ANGLE integration to see.

In the meantime, you could google for general benchmarks comparing OpenGL 1.x and OpenGL 2.x / ANGLE.

Are we going to be able to run directly in OpenGL, or must Windows users use ANGLE? Because in my case OpenGL apps run a lot faster than the ANGLE implementation (both DX9 and DX11 backends), and I’m running on an AMD card that is not too OpenGL friendly.

I’m not sure why ANGLE even exists. Why translate OpenGL calls to DirectX?

Major vendors should have drivers with enough OpenGL compatibility to run applications without this stupid conversion; if you give them this way out, they’re not going to put much effort into fixing their real OpenGL implementations.

:S

Not sure, but I guess this is going to be optional, and you can pick whatever is the fastest.

I would love if the Blender viewport incorporated some concepts behind Creation RTR.

https://vimeo.com/52547255
http://fabricengine.com/creation/rendering/

Not only the deferred shading and overall speed, but the rendering pipeline specified in XML and other flexibility enhancements it brings to the table.

That is awesome stuff indeed.

Does anybody know if Cycles will support mobile GPUs? My laptop never worked well with Cycles; it was an Intel i3 with an ATI GPU, and an average render with Cycles could take more than an hour.

Now I have an ARM tablet. The last Blender port worked “not so bad” on it, but I can only render using the internal Blender renderer.

I think Blender should not keep compatibility with old systems. It would be enough to keep old versions on servers, so every user can use the most powerful version that works on their hardware. For example, if I have Windows XP, I can use Blender 2.4, and that’s enough for that computer. Or if I have a tablet and just want to render things and make game engine scenes with some simple shaders (which is what I want), I keep the OpenGL ES 2.0 version.

By the way, is there any version of Blender that works on an old Pentium 233 with no 3D GPU?

Thanks.