Blender and Quadro: viewport FPS benchmark

Okay, so as we know, Quadro cards are made for viewport work. However, for a while now GTX cards have offered so much raw power that buying a Quadro hasn't really been a good idea. I know that GTX cards outperform Quadros when it comes to rendering the final image.

However, a few weeks ago someone on Facebook wrote a comment saying he can handle 60 million verts in the Blender viewport with a Quadro without a problem… and that got me interested, because my job is mostly modeling, and rendering goes on the CPU because of big, complex environments and models. So here is my question:

If you have a Quadro, at what poly count does your viewport start to lag? Or if you have some other good graphics card, the same question applies: what we are looking for is the best viewport performance setup.

So basically the idea of this topic is: add a cube, subdivide it with a Subdivision Surface modifier set to Catmull-Clark, and tell us how far you can push it before it starts to lag. A small setup script is below so we're all testing the same thing.
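Here's a rough sketch of the setup in Python, assuming a Blender 2.7x-era bpy API (run it from the Text Editor or the Python console inside Blender; the subdiv level is just an example value to bump up until things lag):

```python
# Sketch of the benchmark setup, assuming a Blender 2.7x-era Python API.
import bpy

SUBDIV_LEVEL = 9  # raise this until the viewport starts to lag

# add the default test cube
bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))
cube = bpy.context.object

# add a Subdivision Surface modifier set to Catmull-Clark
mod = cube.modifiers.new(name="Subsurf", type='SUBSURF')
mod.subdivision_type = 'CATMULL_CLARK'
mod.levels = SUBDIV_LEVEL          # viewport subdivision level
mod.render_levels = SUBDIV_LEVEL   # keep the render level in sync (optional)

print("Viewport subdiv level:", mod.levels)
```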

Moved from “General Forums > Blender and CG Discussions” to “Support > Technical Support”

But dude, it is not a support problem. It is a discussion about what the best setup for viewport performance is… a similar topic to the Cycles benchmark thread…
By moving this to Support you are actually killing it.

Hi, on my Quadro FX 3700 (512 MB VRAM), with the standard cube (not subdivided in edit mode) and a Subdivision Surface modifier set to view level 9 (approx. 3.2 million triangles), with VBOs enabled, double sided disabled, and solid view mode, the viewport is very relaxed.

With the Subsurf modifier set to 10 in view mode (approx. 12.5 million triangles) the viewport becomes very laggy.
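By the way, those numbers line up with the math: Catmull-Clark turns each quad into 4 quads per level, so the default 6-quad cube ends up with 6 × 4^n quads, i.e. 12 × 4^n triangles at level n. A quick sanity check in plain Python (nothing Blender-specific):

```python
# Triangles produced by Catmull-Clark subdivision of the default cube:
# each quad splits into 4 quads per level, and each quad is 2 triangles.
for level in (8, 9, 10):
    quads = 6 * 4 ** level
    tris = quads * 2
    print("level %2d: %10d tris" % (level, tris))

# level  8:     786432 tris  (~0.8 M)
# level  9:    3145728 tris  (~3.1 M)
# level 10:   12582912 tris  (~12.6 M)
```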

A Quadro K2000 would be much better, I think.

Yeah, the FX-series Quadro cards now have about the same performance as GTX cards… but some people say the new K-series ones are awesome.

Hi, I thought I'd drop by and share my viewport experience with my new GTX 750 Ti Windforce 2GB OC edition.

The card is much quieter and more power efficient than my old Quadro FX 3700 with 512 MB VRAM.

The downside is that the GTX 750 Ti does suffer significantly in the viewport compared to the Quadro card. Maybe it's the drivers; that might get better over time.

With the GTX 750 Ti, viewport performance on the standard cube is comparable to the Quadro FX 3700 only up to subdiv level 8. That's around 800k triangles.

So roughly a 4x viewport degradation for the GeForce vs. the Quadro FX.

Depending on what you do, it might be a better idea to get a Quadro if you absolutely have to go beyond a 1 million triangle count on your models. It looks like the 256-bit memory interface on the Quadro does make a difference versus the 128-bit one on the GeForce. Besides, these cards are optimized for working with complex geometry.

Playing with the antialiasing settings doesn’t seem to influence the viewport performance much.

I would be curious whether someone with a GeForce GTX 700-series card with a 256-bit memory interface could run a short test and report back here. Just say how high you can go on the triangle count with a single cube.

Did a quick test with the GTX Titan: subdivided a cube to 50 million triangles.

With VBOs enabled, Double Sided shading deactivated, the Subsurf modifier applied, and the Outliner closed: buttery smooth performance! No lag.

But as soon as any one of those settings was changed, the viewport became laggy.
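For anyone who wants to reproduce that combination, this is roughly what it looks like in Python. I'm assuming the 2.7x property names here (the VBO preference and the double-sided flag in particular, so check the tooltips in your build if they differ); closing the Outliner you just do by hand in the UI:

```python
# Sketch of the settings used in the Titan test, assuming Blender 2.7x
# property names (treat the exact RNA paths as an assumption).
import bpy

obj = bpy.context.object

# enable VBOs (User Preferences > System in 2.7x)
bpy.context.user_preferences.system.use_vertex_buffer_objects = True

# turn off Double Sided shading on the mesh (Object Data > Normals)
obj.data.show_double_sided = False

# apply the Subsurf modifier so the viewport draws plain static geometry;
# "Subsurf" is the modifier name from the setup sketch earlier in the thread
bpy.ops.object.modifier_apply(modifier="Subsurf")
```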

Hey, thanks for doing the test! Very useful!

The only Quadro card comparable in price and specs to the GTX 750 Ti is the Quadro K600, but it has only 1 GB of VRAM!

I would be curious to see how it compares to the GTX 750 Ti!

It most likely would be slower.

The benefits of Quadros are stability (through lower clocks and ECC VRAM), extended support, and additional features such as quad-buffered stereo, 10-bit color, and screen mosaic. "Features" such as double sided shading have been deactivated on GTX cards, so it can seem as if the Quadro were faster. But, as in Blender, disabling double sided boosts the performance of GTX cards significantly.

Keep in mind that the Quadros are basically the "same" hardware as the GTX cards; usually they just have a slower clock and more memory. It is mainly the drivers that make the difference.

I have a Quadro K4000 at work, so if you are still interested in a comparison, let me know what I should test and which settings I should change.
I have to mention, though, that the Titan will perform almost the same in most areas because it is not crippled like the GTX models.
I have some benchmarks lying around to compare… ah yes, here they are…

Attachments

benchmark.zip (67.5 KB)

I did the test with a GTX 750 Ti: same as reC, it's perfectly fluid at subdiv level 8 (800k tris) and becomes a little laggy at 9 (about 3 million tris).