Cycles GPU render speed drops dramatically when increasing render size

Hi!

EDIT: Solution: The T-panel in the UV/Image Editor takes up a lot of resources and can freeze the GPU. Close the T-panel when rendering.

I have a less-than-state-of-the-art GPU, an Asus NVIDIA GeForce GTX 560 Ti (I think it has 1 GB of memory).
When I render my fairly complex scene in HD (1920x1080) with a 256x256 tile size, I get approximately 20 samples per second.
If I increase the size to 170% I still get almost 20 samples per second, but at 175% I only get about 1 sample per second. My goal was 200%.

My guess is that the GPU is running out of memory, but I thought that would simply cause a crash with an out-of-memory error. Is there a way to monitor GPU memory, or to see how much memory a scene requires while Blender is rendering?
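For reference, here's a minimal way to watch VRAM from Python, assuming an NVIDIA card and the pynvml package (NVML bindings); GPU-Z or running `nvidia-smi -l 1` in a terminal does the same job:

```python
# Minimal VRAM monitor; assumes an NVIDIA GPU and the pynvml package.
import time
from pynvml import (nvmlInit, nvmlShutdown,
                    nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo)

nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
try:
    while True:
        mem = nvmlDeviceGetMemoryInfo(handle)  # sizes reported in bytes
        print("VRAM used: %4d / %4d MB" % (mem.used // 2**20, mem.total // 2**20))
        time.sleep(1)
finally:
    nvmlShutdown()
```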

Please advise.

Also, if there are any up-to-date guides to buying a new GPU for Blender rendering, please link me to them. :smiley:

For instance, is the Titan Z a good choice, or is it just madness to pay that much for a GPU?

Best regards, Erik

Thanks LK84!

This sounds really interesting. I'll try installing an older version and see what happens, and I'll also install GPU-Z to monitor my VRAM.

Regarding an upgrade: it's always a matter of performance per buck. Would you consider the GTX 970 a good choice? I'm just an enthusiastic hobbyist, and its 3.5-4 GB of VRAM should keep me happy for a while. :slight_smile:
The old Titan with 6 GB seems pretty pricey for just 2 more GB, and I can't really justify the cost of a 12 GB card…

BTW, you mean quad GTX 590s? Pretty sweet setup, but I guess this performance bug really annoys you. Hope they can fix it in a later release.

Br, Erik

Hi, the GTX 970 is the best bang for the buck; the 980 is not much faster, and the Titans are, well, expensive.
Amartin linked to my benchmark; take a look.

Cheers, mib

Hi again!

Thanks for all your assistance in this strange issue.

The card I have has 2 GB of memory according to GPU-Z, and it never seems to max out: at about 1650 MB of used memory, the GPU load drops from 99% to almost nothing.
I read somewhere that part of a GPU's memory can sit on a slower memory bus. (Example: the GTX 970 apparently has only 3.5 GB of fast memory; the last 500 MB are really slow.) Could this be the case for my 560 as well, with the last 350 MB being really slow? That might explain why the GPU load drops, since memory becomes the bottleneck.
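If anyone wants to test this theory, here's a rough probe, assuming the cupy package and an NVIDIA GPU: allocate VRAM in chunks and time a write to the newest chunk; a slow segment should show up as a jump in the timings:

```python
# Rough VRAM-speed probe; assumes cupy and an NVIDIA GPU. The chunk
# size is arbitrary; a slow memory segment shows up as a timing jump.
import cupy as cp

chunk_mb = 128
chunks = []
for i in range(32):  # up to 4 GB in 128 MB steps
    try:
        chunks.append(cp.zeros(chunk_mb * 2**20, dtype=cp.uint8))
    except cp.cuda.memory.OutOfMemoryError:
        break
    start, end = cp.cuda.Event(), cp.cuda.Event()
    start.record()
    chunks[-1] += 1           # touch the newest allocation on the GPU
    end.record()
    end.synchronize()
    ms = cp.cuda.get_elapsed_time(start, end)
    print("%5d MB allocated: %.2f ms to touch last chunk" % ((i + 1) * chunk_mb, ms))
```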

I have also tried earlier versions of Blender.
2.71: Little or no difference.
2.64: Couldn’t get my scene to work properly (I’m using a lot of anisotropic shaders).

Still planning to go for the GTX 970 at ~€300. I don't really need the marginal extra speed of the 980, and if (and when) I hit the VRAM limit I'm sure there will be cheaper cards out there with 6+ GB of VRAM.

Any cards apart from these I should consider?
Titan Z: 12 GB, €1450
Titan X: 12 GB, €1100
Titan Black: 6 GB, €800
GTX 980: 4 GB, €550
GTX 970: 4 GB, €300

Br, Erik

More about the 3.5/4 GB VRAM of the GTX 970:

No, I meant dual GTX 590s: each GPU is counted separately, so 2x GTX 590 cards = 4 GPUs.

When it comes to performance per dollar, I think the GTX 590 is among the best: a single GTX 590 card does the Blenchmark in 63 s while a 970 takes 82 s.
I just recently purchased another one from eBay for only $137.50 + shipping (one of my cards had stopped working). However, they have only 1536 MB of RAM and are difficult to keep cool.

But if this issue could be solved, then they would be far more usable.

At second glance it doesn't seem to be a VRAM issue: when I render at only 50% it also goes above 1600 MB, and there are no problems…

But… it might be a tile-size issue.
If I change the tile size to 64x64 it seems to work fine. I'm 25% into a 200% render and the time estimate is at a comfortable 2 hours.
Large tile sizes seem to make my graphics card "freeze" somehow. Has anyone heard of this phenomenon?
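In case anyone wants to reproduce this: the tile size can also be set from Blender's Python console (2.7x API, just a sketch):

```python
# Set square render tiles from Blender's Python console (2.7x API).
import bpy

scene = bpy.context.scene
scene.render.tile_x = 64   # 64x64 works fine here...
scene.render.tile_y = 64
# ...while 256x256 "freezes" my card above ~170% resolution.
```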

/Erik

I have not experienced slower rendering with larger tile sizes, only slower display updates - that is, when rendering with all available GPUs.

BTW, did you test VRAM consumption when rendering just the default cube? I'm really curious what result you get.

I run 2.73a now.

Background usage with GPU-Z running: 134 MB
Idle usage with Blender running default startup file: 202 MB
Cycles render of default cube: 632 MB

/Erik

Hmm, so it’s:

430 MB for just the Cycles render of the default cube on your system (632 MB minus the 202 MB idle usage)
857 MB with my desktop (GTX 590)
135 MB with my laptop (GT 750M)

Could someone please try rendering the attached file (as is) and tell me the results - render time, etc.?

I have now deleted 99% of my scene and I still have issues rendering at 200%. :eek::eek::eek:
Is there something completely wrong with my settings?

The settings are (reproduced as a script below):

Cycles
Resolution: 1920x1080, 200%
Tile size: 256x256
Samples: 500
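
Or, as a script, in case that's easier to copy (Blender 2.7x API; just a sketch of the same settings):

```python
# The same render settings, set via bpy (Blender 2.7x API).
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.resolution_percentage = 200
scene.render.tile_x = 256
scene.render.tile_y = 256
scene.cycles.samples = 500
```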

On my machine the render works normally for the first ~200 samples of the first tile (about 3 seconds), then it slows down to one sample per second.

I feel I need to get to the bottom of this before I invest in a new GPU.

Thanks!

/Erik

Attachments

strange.blend (793 KB)

EDIT: eppo’s suggestion of closing the T-panel in the UV editor solves this completely.

I did some testing with the file you provided; here’s what I found.

at 100% it renders in 26 s
at 170% it renders in 218 s
at 180% it renders in 480 s
at 200% it just takes too long for me to wait

For some reason the scene renders disproportionately slower with each increase in render size, and the Blender GUI also becomes increasingly unresponsive - even though I have a dedicated display device (a GTX 550 Ti).

I even tried deleting every single object from the scene and adding some new basic objects - but with the same result.

Finally, I opened a new scene, appended all the objects from the file you provided, and set the samples, tile size and render resolution to be the same; now it renders in 17 s at 100% and 51 s at 200%.

With this new file I can even render at 1000% with no problem, and the Blender GUI is as smooth as it should be (with a dedicated display device, that is).

Attachments

new_strange.blend (144 KB)

http://www.pasteall.org/pic/show.php?id=85976

Since there were no textures, I added one here, plus an HDRI in World.
I have an NVIDIA GTX 560 on 64-bit Linux. The sweet spot for tiles is either 128 or 196 squares. I tested this by dropping copies of the cube into the scene until there were some 8000 cubes. While they were separate objects, the tile setting made no difference - I went up to 1k by 1k. Rendering goes much faster if I join all the cubes into one mesh; then tile size starts to count, and a 1k tile takes ages to render compared to 196.
Another thing: do not keep the T-panel open in the UV editor where the render result shows up while rendering. All the Scopes and similar calculations are redone on each render update, and they are darn costly.
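
If you want to find your own card’s sweet spot, here is a quick sketch that times the current scene at a few square tile sizes (Blender 2.7x API assumed):

```python
# Time renders of the current scene at several square tile sizes
# (Blender 2.7x API; run from the text editor or Python console).
import time
import bpy

scene = bpy.context.scene
for size in (64, 128, 196, 256, 512):
    scene.render.tile_x = size
    scene.render.tile_y = size
    t0 = time.time()
    bpy.ops.render.render()   # renders into the Render Result image
    print("%3dx%3d tiles: %.1f s" % (size, size, time.time() - t0))
```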

Yeah… Sorry about the textures. Didn’t think about that.

What do you mean by a sweet spot of 128 or 196? Do you mean tile sizes of 128x128/196x196, or should I recalculate the tile size so I get exactly 128 or 196 tiles in total for the entire image? :confused:
I was under the impression that 256x256 was a pretty good size for GPU rendering.
I don’t quite understand what you did with the 8000 cubes.

But never mind, eppo. You quickly solved my problem by tipping me off about the T-panel. I had no idea that was a big no-no; I never look at it anyway. And now that I’ve closed it, everything runs without hiccups. :D:D:D

Now I need to reassess whether I’m still going to indulge myself with a new GPU. :wink:

Thanks, eppo!

/Erik

Yep, I did mean that 128x128 or 196x196 is best for my card (yours has more power, though). I usually leave Threads on Auto - if I’m right, this relates to CPU rendering, and if there is some CPU involvement during a GPU render I’m still limited to only 2 cores :D. Blender manages these fine.
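
For reference, the thread settings are also exposed in the Python API (2.7x, a minimal sketch):

```python
# Thread settings (used for CPU rendering) via bpy, Blender 2.7x API.
import bpy

bpy.context.scene.render.threads_mode = 'AUTO'   # let Blender decide
# or pin them explicitly:
# bpy.context.scene.render.threads_mode = 'FIXED'
# bpy.context.scene.render.threads = 2
```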