Viewport FX III

Google Summer of Code 2014 – Viewport FX III

OK, so reworking the graphics code for Blender turned out to be an absolutely huge project, and this is going to be months 7-9 (I hate to say "year 3" because I haven't actually gotten to put three man-years of work into it).

Here is the proposal:
http://wiki.blender.org/index.php/User:Jwilkins/GSoC2014/Viewport_FX_III_Proposal
My main goal this summer is to integrate as much of this work as possible into the master branch. This requires a significant amount of code review, profiling, and testing. The profiling and testing are why I'm posting here.

Last year, after Viewport FX II was done, I had a version of Blender that could draw using both OpenGL ES 2 (think Android) and OpenGL 3 Core (think modern graphics cards). It also ran on Direct3D through Google's ANGLE library. In the coming weeks I'm going to build a system that will let myself and volunteers test this code to make sure it outputs the same pixels as the legacy OpenGL code. In the meantime I'd like to get volunteers and test files. I'd also like to see if anybody has suggestions for refining my test plan and making sure this is efficient and not overwhelming.
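
To give a concrete idea of what "the same pixels" means in practice (the real test harness will be more involved than this), a check could be as simple as diffing two screenshots of the same scene, one from a legacy-GL build and one from the new build. A minimal sketch, assuming both screenshots were saved at the same resolution (e.g. with bpy.ops.screen.screenshot) and that Pillow and NumPy are available outside Blender:

```python
# Hypothetical helper, not part of the actual test system: compare two
# screenshots of the same scene taken with the legacy and the new build.
import sys

import numpy as np
from PIL import Image  # Pillow; run this outside Blender

def diff_images(path_a, path_b, tolerance=0):
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.int16)
    if a.shape != b.shape:
        return None  # different resolutions, nothing to compare
    per_pixel = np.abs(a - b).max(axis=2)  # worst channel delta per pixel
    return int((per_pixel > tolerance).sum()), int(per_pixel.max())

if __name__ == "__main__":
    result = diff_images(sys.argv[1], sys.argv[2])
    if result is None:
        print("screenshots have different sizes")
    else:
        print("%d pixels differ (max channel delta %d)" % result)
```

A non-zero count would flag a file for a closer look; a small tolerance leaves room for harmless driver rounding differences.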

Besides testing for regressions (misdrawn screens), I'd also like to gather files that demonstrate the problems with viewport performance in Blender, so that I can compare the performance of the new and old code and remove any bottlenecks. This will come later, since I'm pretty sure the new code is actually slower right now, but it really needs to be correct before it gets optimized.
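
For volunteers who want to put a rough number on "this scene feels slow", one possible sketch is to time a batch of full viewport redraws from the Python console inside the problem .blend, using the redraw_timer operator (the iteration count here is arbitrary):

```python
# Rough viewport benchmark: time N full redraws of the active window.
import time
import bpy

iterations = 100
start = time.time()
bpy.ops.wm.redraw_timer(type='DRAW_WIN_SWAP', iterations=iterations)
elapsed = time.time() - start
print("%.1f ms per redraw (%.1f fps)" % (1000.0 * elapsed / iterations,
                                         iterations / elapsed))
```

Running the same snippet on the same file in an old build and a new build would give comparable before/after numbers.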

I’ll add more details to this post as my plan gets revised.

You have me :stuck_out_tongue:

The link that was provided doesn't allow us to view it… I remember reading your proposal here though – http://wiki.blender.org/index.php/User:Jwilkins/GSoC2014/Viewport_FX_III_Proposal

I'm looking forward to this… and would like to volunteer what I can for testing if needed… We tend to push Blender quite far (think hundreds of thousands of objects, dupligroups, instances, millions upon millions of polygons, etc.), mainly for archviz work. So if any stress testing is needed for large scenes, we can help out there.

And what about the "custom modes" for the viewport, were they dropped? It makes sense that it's time to get the last two GSoC projects into master, but it was a really cool (and useful) idea. Anyway, count me in for some tests.

Good luck :slight_smile:

Optimizing the viewport for Blender is probably one of the most desired and useful projects ever.

OK, now linking to the proposal on the Wiki. Not sure right now how to make the one on Google’s page viewable.

I am definitely looking for scenes that users know to perform slowly. It would be easy to produce a scene that simply pushes the limits of a computer, so I guess it would be more interesting if the scene is unreasonably slow for what it is (which, unfortunately, is a difficult thing for a normal user to know).

What I want to do first is more boring: run through a set of tests looking for visual differences. I can do this on my own machines, but that doesn’t mean much.

That is covered by what I call the "extended proposal", the second part of my proposal. The prototype hooks are in the code I'll be testing and optimizing.

So basically I can give you a model with a high polycount and subsurf modifiers everywhere? For example

With the work currently being done with OpenSubDiv, I probably shouldn’t be worrying too much about the subsurf drawing code since it is being optimized by Sergey.

Scenes where selecting an object takes a long time would be helpful though. So would cases where GLSL materials slow things down.

So basically scenes with a lot of grass.

I'll try to organise getting one to you shortly… it won't be one of our production files, but it will be just as slow.

http://www.pasteall.org/blend/29585 it takes about 4 seconds for me to select anything in this file

windows 7
GTX 560 ti
FX 8350
1600x900 monitor

Testing with vanilla Blender:

Takes a little longer for me. It took about 3 minutes to load up the file though.

This is a good file for slow rendering on my machine too. There are only 360k tris, but it renders super slowly. If it were one object with 360k tris it would be fast. There really shouldn't be a difference between 25,000 cubes and a single large model.
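
If anyone wants to build a comparable stress case from scratch instead of downloading the file, something along these lines in the Python console produces lots of separate cube objects (the count here is arbitrary and much smaller than the 25,000 in the file, since the operator gets slow as the scene grows):

```python
# Quick-and-dirty stress scene: many separate objects, each with its own mesh.
import bpy

count = 2000  # raise towards 25,000 for a heavier test, it takes a while
for i in range(count):
    bpy.ops.mesh.primitive_cube_add(location=((i % 50) * 3.0, (i // 50) * 3.0, 0.0))
```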

Thanks for the test case.

Yeah, it's the biggest problem when you're trying to simulate a lot of rigid bodies… You can get to about 8k objects before Blender starts slowing down. It takes longer to navigate the viewport than it does to bake the simulation lol

Yeah, in our usual scenes we have about 10,000-20,000 selectable objects… and about 150,000-500,000 total objects (including dupligroups / particle systems, etc.)…

How do you tell if slow selection is a Blender limitation or just crossing the threshold into running out of memory?

I have a couple of scenes that are mysteriously slow for selection that I could share, but I'm not sure which it is… personally I don't think they're that complex, so I don't *think* it's a RAM issue…

RAM is fine… we usually have about 6-8 GB of system RAM free… and everything is rendered on the GPU.

EDIT: I should explain that we make significant use of dupligroups / instancing / particles to generate huge amounts of detail without much RAM being used.
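
For anyone who wants to reproduce that kind of setup for testing, the data-level idea is just many objects sharing one mesh datablock (what Alt+D / linked duplicates give you). A rough sketch using the 2.7x Python API, with arbitrary numbers:

```python
# Many objects, one shared mesh datablock (Blender 2.7x API).
import bpy

bpy.ops.mesh.primitive_monkey_add()
mesh = bpy.context.object.data  # this one mesh is reused by every object below

for i in range(2000):
    ob = bpy.data.objects.new("Instance.%04d" % i, mesh)
    ob.location = ((i % 50) * 3.0, (i // 50) * 3.0, 0.0)
    bpy.context.scene.objects.link(ob)

bpy.context.scene.update()
```

RAM stays low because the mesh data exists once, but the object count (and the per-object selection and draw overhead) is the same as with real duplicates.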

Yep, I found that instancing may help RAM, but it didn't change the selection issue one bit… (my slow scenes all have lots of instances / particle objects, come to think of it)

Any news about this project? When can we test some experimental builds of it?

Monio: there have been some commits.
http://lists.blender.org/pipermail/bf-blender-cvs/2014-June/065507.html
http://lists.blender.org/pipermail/bf-blender-cvs/2014-June/065459.html

There's a nice bunch of other ones too, but they mostly consist of boring compiler work to make sure that he has something to present to the core devs.