why is gl_select still used to select objects in Blender?

Yes, most probably a memory issue then. I could not reproduce the behavior where it constantly selects just one object though; here it properly cycles through every object under the cursor. I thought it had to do with not computing depth information like glSelect does, but I haven’t checked in more depth (no pun intended) whether these depth values actually get used. Bottom line: it looks like this could work relatively well after all, so I’d better start implementing it properly :slight_smile:

My apologies for the late response, I was away for a day.

First impressions: I jumped up from my chair, and did a little dance. Truly, Psy-Fi: brilliant job so far. Selection is instant, direct, and suddenly Blender is completely usable for high-poly/object scenes! Rejoice!

For comparison I am using the ‘fastest Blender’ build by Alb3530.

For testing I included a model that closely resembles my typical scenes (though I use many more objects than this).

http://i39.tinypic.com/34ss60i_th.jpg

The model can be downloaded from:
http://www.spinquad.com/forums/showthread.php?26686-The-ufo-challenge-part-2

I cannot publicly share my version, since I did not ask for Deichmann’s permission. I converted the object with Accutrans, applied a one-level sub-division modifier (collapsed the modifier), and re-organized the file. Psy-Fi, I can share my version with you for testing, if you like. Just pm me.

Solid view mode

Total vertices: about 920,000. The scene itself runs absolutely smoothly on my system, but selecting objects takes approx. 5-7 seconds and locks up Blender.

Your patch: instant selection. No lag at all.

http://i41.tinypic.com/2wf60si_th.jpg

Same object duplicated five times. Approximately 4.5 million verts. Orbiting is still relatively smooth (approx. 20fps).
Selection of an object can take from 14 to 22 seconds, depending on the spot clicked. Blender ‘whites out’ after about 4 seconds in Windows 7.

Your patch: again, instant selection.

The only caveat I encountered is that depending on the number of objects stacked under the spot clicked, it can take a couple of clicks before the correct one is selected.

Example:
http://i41.tinypic.com/1zcz6gh_th.jpg

Not a biggie, though, because <alt>-clicking finally WORKS as advertised in these bigger scenes:
http://i40.tinypic.com/rvbr6x_th.jpg

I was never able to use this feature before, because the lag would not allow for it. Just imagine not moving the mouse for 20 or more seconds - with a mouse it’s hard, with a Wacom tablet nigh on impossible. Even ‘just’ 5 seconds was almost impossible.

This solves the selection issues completely for me. I do not mind the extra clicking too much, because <alt>-clicking becomes available now.

Excellent work.

I also tested vertex selection in edit mode on a moderately complex mesh.

http://i44.tinypic.com/33tgdww_th.jpg

Both versions of Blender, yours and the ‘fastest build’, were slow with vertex/polygon selection in edit mode: about 3-4 seconds.

Wireframe performance

Scene: subdivided section of the model, increased to approx. 2.62 million vertices.
Animated to rotate and show fps in viewport.

http://i42.tinypic.com/2q82bg7_th.jpg

Some interesting results follow!

Animation fps:

  • Your patch: 39.4 fps
  • fastest build: 40 fps

With your build I can SELECT IN REALTIME while it is rotating at 2 rotations a second!!! Blender does not care. Rock solid selection.

The fastest build simply locked up when I tried to select any object in wireframe mode, then crashed.

To make certain this is not a problem with that specific build, I tested the official build, with the same result. Both non-patched versions (the fastest build and the official build) stop working. I waited 5 minutes, and both still did not respond.

Ergo: it seems your patch actually resolves a serious glitch with object selection in wireframe mode in Blender.

Switching to edit mode, even in wireframe view mode, causes the viewport performance to grind to a halt, though. No more than 1-2 fps, and selection is very slow and laggy.

All in all, count me a happy camper. I hope the armature selection issue can be resolved as well.

Again, THANK YOU! After almost two years of frustration this is finally being ironed out, and it seems I can continue my ‘age of sail’ project soon. :slight_smile: Thanks, guys. Thanks Psy-Fi.

The following comments are just added out of curiosity:

Other misc. observations about opengl performance in Blender

  • the viewport performance in the above scene with one object in edit mode is quite bad - just a couple of frames per second. In sculpt mode it is completely smooth (as expected).

Question: can sculpt mode’s opengl performance enhancements be added to solid mode? (Edit mode probably not, but I can’t see why object mode in solid view wouldn’t function with the same enhancements)

  • The original Deichmann saucer object with a subdivision modifier applied at 1 subd level causes huge drops in viewport performance. This does not make sense to me at all - in object mode it is merely required to display the static model, and performance should (in theory) be identical to a collapsed modifier. Only in edit mode would I expect such a drop in performance.
  • What is worse, even linking the same object with the subdivision modifier applied at 1 subd level causes almost the same reduction in viewport performance. It is marginally faster, but still nothing compared to a collapsed static version.
    This seems even stranger to me.

Question: what causes this strange drop in performance when dealing with objects that have a sub-d modifier applied? If anything, other applications seem to use a more optimized approach for this type of object. I have only seen this viewport performance issue in Blender in relation to sub-divided objects. Character animation is especially affected.

I made a quick scene with just over 1.4 million polys across 9 objects, one of which had a SubD modifier, but I could definitely notice around a 2-second difference between the official and patched builds. Sweet!

While selecting with my Wacom, the non-patched build often automatically started a translation action, as if I had click-dragged rather than just clicked. That wasn’t the case with the patched build, so another added win! :slight_smile:

Agreed, same for me - I hardly experienced those accidental wacom translations with the patched build. Good news for wacom users.

Tested with a GeForce GTX 560 Ti on Windows 7; it seems to work really well, thanks Psy-Fi.
Anyway, it’s a bit odd to note that on my system selecting an object (for example, in a scene with 121 objects and 11 million faces) is not as slow as in Herbert123’s experience, but more or less a 1-second wait (the build with Psy-Fi’s patch is realtime, no delay at all).
So, as I understand it, ATI cards have poor selection and NVIDIA cards poor double-sided shading?
Not a great thing for Blender users. I hope the developers can find a solution to make Blender work smoothly with modern hardware (it’s clear that NVIDIA and AMD don’t have any interest in making Blender work smoothly with their cards).

I don’t believe that they “don’t have an interest”… Having closely watched the debate over OpenGL evolution in recent years, the opinion I have formed is that the resources required to maintain an OpenGL compatible with every old feature are definitely prohibitive. That’s why, understandably, a lot more effort goes into optimizing modern hardware features like occlusion queries, and the recent OpenGL versions have a “core” profile in which a lot of the deprecated features, like selection, are not even there. And having seen the difference yourselves, would you want them to be there? :slight_smile:

OpenGL purists like myself would love for Blender to be a modern OpenGL application but, unfortunately, this would require a lot of rewriting. Plus there are always issues with drivers. I remember when ATI users couldn’t sculpt because of issues with OpenGL. It would break Blender badly if something as basic as selection couldn’t work due to glitchy drivers, so an independent ray-casting-based fallback would be nice. By the way, sculpting already has the necessary acceleration structures to make selection faster, but for objects we would have to regenerate or maintain them (memory hungry!). Maybe it won’t be as prohibitive though? Edit mode uses a different scheme for display. Unfortunately, making this fast is hard because we are dealing with alterable geometry here, so essentially the challenge is memory management, which becomes very complex when vertex buffer objects (the reason for fast drawing in other modes) come into play. I haven’t looked into this personally, though Campbell did a number of optimizations there these last few months.

Still, occlusion queries are, in my opinion, a good way to make object selection work in a hardware accelerated way. It’s a “hack” but a nice one. As stated in this article by Iain Cantlay:

http://developer.nvidia.com/node/47

“Fortunately, the GPU excels at analyzing a 100,000-triangle mesh. It does not have an explicit function that reports which mip levels will be used to draw a mesh, but it does implicitly perform this analysis every time that it draws an object. [1] The result is not explicitly available in the form that we need, but it is implicitly there in the pixels that appear in the frame buffer. The trick is to somehow analyze those pixels and convert them into a result of the form ‘mipmap levels 0 and 1 are not required.’”

Substitute mip-level for selection and we have essentially the same thing. (Note, his wish about a mipmap-measuring function was fulfilled later :slight_smile: )
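To make the analogy concrete, here is a tiny pure-Python simulation of the idea - not Blender code; in real OpenGL the “samples passed” count would come from an occlusion query (glBeginQuery with GL_SAMPLES_PASSED) wrapped around a redraw of each object, with the depth buffer already filled by a scene pass. Objects here are just axis-aligned screen rectangles at constant depth; an object becomes the pick result when its fragment survives the depth test at the cursor pixel:

```python
# Software sketch of occlusion-query-style picking (hypothetical, simplified).
# Pass 1 rasterizes the whole scene into a depth buffer; pass 2 replays each
# object at the cursor pixel with a GL_LEQUAL-style depth test, mimicking what
# GL_SAMPLES_PASSED would report for a scissored redraw of that object.

W, H = 8, 8

def rasterize(rect):
    """Yield (x, y) pixels covered by rect = (x0, y0, x1, y1, depth)."""
    x0, y0, x1, y1, _ = rect
    for y in range(y0, y1):
        for x in range(x0, x1):
            yield x, y

def pick(objects, px, py):
    """Return the name of the frontmost object under pixel (px, py), or None."""
    # Pass 1: fill the depth buffer with the whole scene.
    depth = [[float("inf")] * W for _ in range(H)]
    for name, rect in objects.items():
        for x, y in rasterize(rect):
            depth[y][x] = min(depth[y][x], rect[4])
    # Pass 2: "query" each object at the cursor pixel; the object whose
    # fragment passes the depth test there is the visible one.
    for name, rect in objects.items():
        x0, y0, x1, y1, d = rect
        if x0 <= px < x1 and y0 <= py < y1 and d <= depth[py][px]:
            return name
    return None

objects = {
    "near_cube": (1, 1, 5, 5, 0.2),   # closer to the camera
    "far_plane": (0, 0, 8, 8, 0.9),   # covers the whole view, behind
}
print(pick(objects, 2, 2))  # near_cube: it occludes far_plane here
print(pick(objects, 6, 6))  # far_plane: the only object under this pixel
```

The real patch of course draws the actual meshes and lets the GPU count the passed samples; the point of the sketch is only that depth-tested redraws implicitly answer “which object is visible here”, just as Cantlay’s mipmap trick implicitly answers “which mip levels were used”.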

Oh, and one last thing: about the modifier performance. I am not sure about the specifics here but indeed this sounds as if it could be optimized somehow, though I am not too sure about exactly how.

This is pretty great news. Any reason that this couldn’t be polished up and merged into trunk?

Sorry if this sounds stupid (I don’t know OpenGL one bit), but if drivers are the reason the old OpenGL functions are still used, why is one of the GSoC ideas this:

Increase Viewport Speed and Eliminate OpenGL cruft - right now we use some older opengl stuff that is quite slow, (opengl version 1 even!) so eliminating these instances and switching to more modern apis could give large viewport increases. (Suggestion: more compliancy to OpenGL ES for future tablet ports). Particularly interested in sculpt for matcap.

There’s no controversy there, though writing these two statements in the same paragraph may make it seem like one implies the other.
“This would require a lot of rewriting” is why it is proposed for a GSoC project rather than as a weekend patch.
I’m just pointing out that for basic operations like selection, things could go very wrong. Actually, there’s a fundamental difference between drawing stuff and selecting stuff: when drawing fails it’s annoying, but it’s not always fatal. Bad selection, however, can really make things unusable.

Oh, and another thing, just back from a meeting: it looks like glRenderMode(GL_SELECT) may be hardware accelerated after all, according to some of the devs. I haven’t managed to confirm it officially, however. Still, the OpenGL 4.2 core spec omits glRenderMode completely. So it may be a driver bug after all.

For the record, I do hope everyone here uses recent drivers?

I agree with you, Psy-Fi. My only complaint is that, compared to the past, it seems graphics cards play a big role in making the (Blender) user experience good or poor, and it’s not easy to avoid this: you can’t simply choose one model and hope it will work best for everything.
For example, look at sculpting: for a lot of people sculpting performance is not good, but for me it is (more than workable at 100 million if I disable double-sided shading), and for others it is even better.
Or look at rendering (Cycles): people with new NVIDIA graphics cards have a good experience, but the same people have a depressing experience with double-sided shading in the viewport (for example, in my tests the GTX 560 Ti gives the same viewport speed as my old 8600 GT if I keep double-sided shading on, and is almost 2 times slower than my old 9600 GT).
Or look at ATI: fast OpenGL viewport but slow selection.
There are professional cards, but I don’t think Blender gets much vendor support there, so I’m not sure it pays to buy these products if you plan to use Blender only (and of course it’s not a good thing for the Blender Foundation to ask the user base to pay a lot for a graphics card).

Newest drivers are installed. And this slow selection problem not only affects Blender, but also Maya and other software:
http://area.autodesk.com/forum/autodesk-maya/installation---hardware---os/slow-component-selection-in-maya

After reading through a multitude of threads the last couple of years, I have come to the conclusions that:

  • behaviour is erratic, depending on the drivers and hardware used;
  • ATI cards experience this selection lag issue more than NVIDIA;
  • however, it is also experienced by NVIDIA users - the threshold seems to be higher, though, and the lags are not as extreme;
  • ATI’s ‘professional’ workstation cards (FireGL, FirePro) experience the same issues (my own experience as well);
  • gl_select hardware support may or may not be present, depending on workstation drivers, consumer driver version, etc. It is trial and error, and a source of much confusion, even among OpenGL devs;
  • gl_select as a method for selection should be avoided, because a) it is deprecated, b) (hardware) support is erratic, and c) it is very slow.

I performed some additional tests, this time with an older mobile nvidia workstation card.

Machine: Dell XPS Core Duo, 4GB, Quadro FX 2500M 512MB, Windows XP 32bit, Quadro driver 180.84 (old driver, and the only one I was able to install on this older hardware).

Compared to the 5870 this video hardware is OLD. However, thanks to the workstation OpenGL drivers, although solid view mode is quite a bit slower, wireframe mode is still comparable to the 5870. It is difficult to make direct comparisons due to the distinct difference in hardware, however.

Blender versions compared:

  • 32bit official build
  • Blender 32bit occlusion build

I used the same Deichmann UFO scene - almost 1 million vertices. No modifiers.

Observations:

  • the selection lag still exists in the official build in solid view mode: ~0.9 second. Compare this to the 5-6 second lag on my much faster desktop;
  • selection lag is reduced to ~0.3 second in the occlusion build;
  • in wireframe mode the patched version selected objects within ~0.1 second (as fast as the framerate);
  • in wireframe mode the official build took ~0.9 second to select objects;
  • gl_select seems to be hardware accelerated on the 2500M;
  • no crashes when selecting objects in wireframe mode (an identical scene causes a crash in both non-patched Blender versions I tried on my desktop). This is probably an ATI driver bug (possibly caused by the use of the deprecated gl_select in this mode).

So, even on an older Quadro FX2500M system with hardware supported gl_select the occlusion patch speeds up selection. Especially in wireframe mode the selection felt immediate in the patched version, and felt slow in the official build.

A marginal speed boost for the direct user experience, but still quite noticeable.

Resolution was identical to desktop version: 1920*1200.

A four-times-duplicated saucer caused these selection time differences between the two Blender versions to blur, though. As you stated earlier, Psy-Fi, your occlusion method is fps dependent, so the selection performance difference became hard to detect.

I don’t think it’s a great idea to use any GPU dependent selection method.

If you’re going to work on this, please consider using ray casting instead. With it, you can select even larger meshes (using acceleration trees) without noticeable slowdowns.

I tried this approach a few years ago and results were not only better (objects with > 5 million polygons would be selected instantly), but the code was a lot nicer and more elegant as well…

And most importantly, it would make selecting objects quick for all Blender users, not only those with the right gpu inside their systems.

(unfortunately I don’t have the code anymore as it depended heavily on BMesh, which was far from finished at the time - it’s gone now, sorry)
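For the curious, the core of such a ray pick is small. This is a minimal, self-contained sketch (not the lost patch - names like `pick_nearest` are made up here): cast a ray from the camera through the clicked pixel and keep the nearest triangle hit, using the Möller–Trumbore intersection test. For large meshes, the linear loop over triangles is exactly what an acceleration tree (BVH) would replace:

```python
# Minimal CPU ray-pick sketch (illustrative, not the original patch).
# A BVH would replace the linear scan in pick_nearest for big meshes.

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def cross(a, b): return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_tri(orig, dirn, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: return hit distance t along the ray, or None on miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(dirn, e2)
    det = dot(e1, p)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv     # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(dirn, q) * inv      # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv        # distance along the ray
    return t if t > eps else None

def pick_nearest(orig, dirn, triangles):
    """triangles: list of (name, v0, v1, v2). Return the name of the nearest hit."""
    best = (float("inf"), None)
    for name, v0, v1, v2 in triangles:
        t = ray_tri(orig, dirn, v0, v1, v2)
        if t is not None and t < best[0]:
            best = (t, name)
    return best[1]

tris = [
    ("far",  (-1, -1, 10), (1, -1, 10), (0, 1, 10)),
    ("near", (-1, -1, 5),  (1, -1, 5),  (0, 1, 5)),
]
print(pick_nearest((0, 0, 0), (0, 0, 1), tris))  # near
```

One nice property of this approach, as noted above, is that it is entirely GPU-independent: the cost depends only on the tree traversal, not on drivers or deprecated GL paths.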

Nevermind…

previously in this thread I proposed ray selection, main problems with ray cast…

  1. it assumes a ray tree is built and in memory.
  2. it doesn’t work for border/lasso select; or, put differently, lasso and border select would need to be specially supported, which is possible but quite a bit more work than a simple ray cast. Maybe I’m missing something and it’s in fact easy with some trick, but until someone explains how, I’ll assume it’s not trivial.

If there’s no ray tree, falling back to some other method can work OK, but then you don’t get the benefits of ray-tree select.

Just thinking out loud, but it seems to me that it’d have to be more like a cone or box intersection test rather than a ray, since you want to be able to click “near” something and have it select. (And if in a perspective view, the box’s planes would need to be angled to include a larger volume as it moves away from the camera, since you really want to select in screen-space I think? Sculpt redraw does something like this.) So if my thinking is correct, this gives you border select, and lasso select could maybe be done by decomposing it into multiple volumes.

Looking forward to trying the GL occlusion patch though, since I’ve definitely had GL_SELECT issues. Thanks Psy-Fi :slight_smile:

New version, supporting armatures and a bit more cleaned up. It exposes an interface that switches between the occlusion and glSelect methods. There is a hard limit of 200 objects for now, but I will work on it some more.

http://www.pasteall.org/27042/diff

Todos:
-remove limit
-support for manipulator widget too (though it contains so little geometry that it should be inconsequential which method is used. Still…)

For me, selection works a lot faster than before. Selecting objects at around 1 million verts doesn’t lag as it used to and is almost instantaneous. I’m on Win7, using a Mobility Radeon 5650 with Catalyst driver 11.9 and using this build http://graphicall.org/750

I didn’t encounter any problems during my rather short testing.

  1. In my patch I just loaded / unloaded the tree on entering or exiting mesh edit mode. Seemed to work pretty well and should work even better with partial (un)loading, depending on view states and what not. That would probably be the hardest part to work out for this selection method.

As for global object selection, I had simplified “cage” meshes for each selectable object that didn’t take much memory and could be loaded / unloaded for large scenes as well (depending on their visibility inside the view).

  2. that’s actually quite easy:

Your thinking is correct. It’s real quick with a few simple plane tests. Especially when selecting from an object with tens of millions of verts, it’s (with an acceleration tree) insanely fast compared to GPU-based selection.

There were some small issues with my patch (like planes larger than the screen not being handled correctly), but I’m fairly sure that could have been fixed one way or another…
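The “few simple plane tests” can be sketched like this (a hypothetical minimal version, not the original patch): treat the click or border rectangle as a pick frustum whose four side planes pass through the camera, and test each object’s bounding sphere against those half-spaces. Because the planes fan out from the camera, the volume automatically widens with distance, which is exactly the perspective screen-space behaviour described above:

```python
# Pick-frustum sketch (illustrative names and conventions): camera at the
# origin looking down +z, screen rectangle given on the z=1 plane.
import math

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def normalize(n):
    length = math.sqrt(dot(n, n))
    return tuple(c / length for c in n)

def pick_frustum_planes(x0, x1, y0, y1):
    """Inward-pointing unit normals of the four side planes (all through the
    camera at the origin) bounding the volume swept by the rectangle
    x in [x0, x1], y in [y0, y1] at z=1. A point p is inside the pick volume
    when dot(n, p) >= 0 for every plane."""
    return [
        normalize((1, 0, -x0)),    # left
        normalize((-1, 0, x1)),    # right
        normalize((0, 1, -y0)),    # bottom
        normalize((0, -1, y1)),    # top
    ]

def sphere_in_frustum(center, radius, planes):
    """Conservative overlap test: reject only if the sphere lies entirely
    outside one plane (the standard BVH-node rejection test)."""
    return all(dot(n, center) >= -radius for n in planes)

planes = pick_frustum_planes(-0.1, 0.1, -0.1, 0.1)   # small box around cursor
print(sphere_in_frustum((0, 0, 50), 1.0, planes))    # True: in front of cursor
print(sphere_in_frustum((30, 0, 50), 1.0, planes))   # False: far off to the side
```

Border select uses the rectangle directly; a lasso could be decomposed into several such volumes, as suggested earlier in the thread.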

OK, done - the occlusion-based patch now works for everything (the manipulator works and the limit on objects is removed). BUT (and this is a big but) we can’t get depth information from this method, meaning that the first visible object does not always get selected. In the manipulator’s case this may be annoying, since the widgets tend to overlap frequently. Selecting objects this way can be annoying as well (though we can usually bypass it by alt-selecting), so I will toy with a glReadPixels approach to get the depth back, or maybe enforce the use of glSelect, at least for the manipulator widget. Note, though, that I expect glReadPixels will decrease performance; it remains to be tested just how much. (Maybe it will still be better than the previous, extremely laggy behaviour.)

For people experiencing big lags though I expect this patch will help. And of course using it is fully controllable through user preferences.

http://www.pasteall.org/27077/diff
build on graphicall: graphicall.org/747/

Next target: test glReadPixels for depth. If all else fails maybe depth information can be extracted by plain old camera-to-object or to-bounding-box distance. Not as exact but a minor improvement nevertheless.
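The distance fallback mentioned here is straightforward. A sketch, with made-up names and data layout (Blender’s actual structures differ): among the objects that passed the occlusion test, order them by camera-to-bounding-box-center distance and pick the closest. As noted, this is only approximate - a large object whose center is far away can still have geometry closer to the camera than a small nearby object:

```python
# Hypothetical depth-ordering fallback: no per-pixel depth available, so
# approximate with the distance from the camera to each bounding-box center.
import math

def bbox_center(bbox_min, bbox_max):
    return tuple((a + b) / 2.0 for a, b in zip(bbox_min, bbox_max))

def closest_hit(camera_pos, hits):
    """hits: list of (name, bbox_min, bbox_max) that passed the occlusion
    test. Return the name of the object whose bounding-box center is
    nearest to the camera."""
    def dist(hit):
        return math.dist(camera_pos, bbox_center(hit[1], hit[2]))
    return min(hits, key=dist)[0]

hits = [
    ("lamp", (4, 4, 4), (6, 6, 6)),    # center (5, 5, 5)
    ("cube", (0, 0, 1), (2, 2, 3)),    # center (1, 1, 2)
]
print(closest_hit((0, 0, 0), hits))    # cube
```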

I like the ideas on ray intersection posted here. StukFruit, too bad about the lost code… Ray intersection is a new thing for me; I don’t know if I can be of any help there. It sounds like quite a bit of extra memory use though; that is my only objection. Since we are talking about heavy scenes anyway, won’t this affect memory a lot?

<edit just found out there’s a problem with the manipulator (needs -d on command line to work correctly), investigating>
<edit2 problem found, patch should be correct now, uploading correct build>

<edit3> There’s another way to get the closest object in the scene: draw twice, once with the normal depth test enabled to get the scene depth into the framebuffer, and a second time with the depth test set to ‘equal’. This way, only the closest object will be accepted for selection. Drawing the scene twice may seem like a compromise, but I think the alternative of waiting 30 seconds to get that info is much, much worse :slight_smile: .
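A toy simulation of that two-pass idea, with illustrative object names and depth values standing in for GL_LESS and GL_EQUAL depth-test passes:

```python
# Pass 1 fills the depth buffer at the cursor pixel with a less-than test;
# pass 2 redraws with an "equal" test, so only the fragment that survived
# pass 1 (the closest object's) registers, and that object gets selected.

def select_two_pass(frag_depth):
    """frag_depth: per-object fragment depth at the cursor pixel
    (None = object not under the cursor)."""
    # Pass 1: depth test LESS -> the buffer ends up holding the minimum depth.
    buf = min((d for d in frag_depth.values() if d is not None), default=None)
    # Pass 2: depth test EQUAL -> only the closest object's fragment passes.
    return [n for n, d in frag_depth.items() if d is not None and d == buf]

frag_depth = {"suzanne": 0.35, "cube": 0.75, "plane": None}
print(select_two_pass(frag_depth))  # ['suzanne']
```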

Removed because I was not understanding the problem.