Two GPUs slower than one with Cycles

Hey there,

I am on Linux (Ubuntu 64-bit), latest Blender, CUDA.

I was running on my 780 Ti and remembered that there’s an old 560 Ti on the shelf, so I plugged it in expecting to get some speedup (even a small one would not hurt) when baking and rendering with Cycles. In reality it turned out that when running on both GPUs, render and bake times are even slower than on my 780 Ti alone.

Am I wrong in thinking that the old 560 Ti should add at least a little bit of GPU power rather than take it away? Puzzled.

Thoughts?

What are your tile sizes? If you are mixing cards of significantly different speeds, the tile sizes need to be smaller… otherwise the 780 will finish whilst the 560 is still computing that last tile.

Hi, there was a bug report about this problem, but it is only for preview render; F12 should work.
I think the 780 Ti is about 3-4 times faster than the 560 Ti, so you should have at least 4 tiles.
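
Not from the posts above, just a rough sketch of why the tile count matters when the cards are mismatched; the ~3:1 speed ratio and the simple "grab the next tile when you are free" scheduling are assumptions:

```python
# Toy model: one frame of fixed total work split into n equal tiles,
# rendered by a fast card (~3x) and a slow card pulling tiles greedily.
def frame_time(num_tiles, fast=3.0, slow=1.0):
    tile_work = 1.0 / num_tiles
    fast_done = slow_done = 0.0          # moment each card becomes free
    for _ in range(num_tiles):
        if fast_done <= slow_done:       # fast card grabs the next tile
            fast_done += tile_work / fast
        else:                            # slow card grabs it
            slow_done += tile_work / slow
    return max(fast_done, slow_done)     # frame ends with the last tile

alone = 1.0 / 3.0                        # the 780 Ti by itself
for n in (2, 4, 16):
    print(n, "tiles: both cards", round(frame_time(n), 3),
          "vs 780 alone", round(alone, 3))
# With 2 tiles both cards are *slower* than the 780 alone (you wait on the
# slow card); from about 4 tiles up the second card starts to pay off.
```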

Cheers, mib

My tile size is set to 256x256, which gave the best results for the 780 Ti. Might be too big for the 560 Ti though. Will play with different tile sizes then and see whether it gives any speedup.
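
If you end up scripting those tests, tile size is settable from Python as well; a minimal sketch using the 2.7x-era properties (the 128x128 value is just something to experiment with, not a recommendation):

```python
import bpy

scene = bpy.context.scene
scene.cycles.device = 'GPU'    # render/bake with Cycles on the GPU(s)
scene.render.tile_x = 128      # smaller tiles balance mismatched cards better
scene.render.tile_y = 128
```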

Interesting, however I am not using F12 to render, I use Cycles bake. Wondering whether this is affected by the same bug or not.

Hi, I made some tests with baking and my two cards reach ~200% bake performance, as I have a GTX 560 Ti 448 and a GTX 760.
They have nearly the same performance.
Here is my test file: http://www.pasteall.org/blend/31683
The bug report may be a different problem, I am not sure.
Bake should be similar to F12.
https://developer.blender.org/T41830
I can try to speak with the Bake developer, but IIRC he has no Nvidia cards. :slight_smile:

Cheers, mib

Hmm, alright, thanks a lot - I will plug my 560 Ti back in and re-do the tests. How did you come up with 240x270 tiles? Why not 256x256 but exactly 240x270? Is there some math behind it?

EDIT: Oh, I guess I understand - it’s because you do the render and the output ratio is not square. I am always baking, so I got used to square dimensions that multiply/divide by 8 :slight_smile:

Talking about the Bake developer - one thing that I am really missing is the ability to turn off the viewpoint-dependent render preview in the viewport. Instead, it would be great to be able to see exactly how it will be baked (each individual face shaded not from the viewpoint’s perspective but from its normal’s perspective). If you have him on IRC, it would be great to toss him this idea.

Many game developers do baking, and not every game engine delivers great lighting etc., so it all gets baked into the diffuse. Adjusting lights without seeing in the viewport render preview what you will actually get is kind of not cool. So to position all lights and achieve a perfect setup it always goes: tweak lights, bake and see, re-tweak lights, bake and see… instead of just moving lights and watching a live preview in the viewport. (This is of course only for texture baking, where you need non-viewpoint-dependent textures with lights, gloss etc.) I’ve seen a patch where baking is allowed to be viewpoint-dependent, but in my case I would appreciate a non-viewpoint-based (normal-based instead, same as bake) viewport preview.

> Is there some math behind it?

Yes, I use the Auto Tile Size addon; it sets the CPU and GPU tile sizes automatically.
Simple math: with your setup maybe 4 tiles is best for performance, as your cards are about 3:1 performance-wise.
With 256 you often get some small tiles, and if the 780 is on a small tile while the 560 starts a big one, that breaks your performance.
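
Not the addon’s actual code, just a back-of-the-envelope sketch of that kind of math: aim for a target tile size but round to a tile count that divides the frame exactly, so no card gets stuck with a tiny leftover tile. The 1920x1080 frame below is an assumption that happens to reproduce the 240x270 mentioned earlier:

```python
import math

def even_tile_size(res_x, res_y, target=256):
    nx = max(1, round(res_x / target))   # tiles per axis, nearest whole count
    ny = max(1, round(res_y / target))
    return math.ceil(res_x / nx), math.ceil(res_y / ny)

print(even_tile_size(1920, 1080))   # (240, 270)
print(even_tile_size(4096, 4096))   # (256, 256), already divides evenly
```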

http://wiki.blender.org/index.php/Extensions:2.6/Py/Scripts/Render/Auto_Tile_Size

Cheers, mib

Hmmm, makes sense, however I am not sure whether this applies to my case - I was baking 4096x4096 and the tiles were 256x256, so theoretically there are no smaller and bigger tiles, since 4096/256 = 16 and there are no leftovers.
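
Quick numbers on that, assuming the ~3:1 ratio from earlier in the thread: the division is indeed exact, so the only end-of-bake cost is the 780 possibly idling while the 560 finishes its last tile, which is tiny next to 256 tiles of work:

```python
tiles = (4096 // 256) ** 2     # 16 x 16 = 256 equal tiles, no leftovers
share_780 = tiles * 3 // 4     # ~192 tiles end up on the 780 Ti
share_560 = tiles - share_780  # ~64 tiles on the 560 Ti
print(tiles, share_780, share_560)   # 256 192 64
# Worst case the 780 idles for roughly one 560-tile at the very end,
# a small fraction of the whole bake - so uneven leftovers are not the
# issue here; the per-tile speed gap is.
```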

The 780 has 2880 CUDA cores, the 560 has 384… Do you really want to limit memory access to only one GB instead of three? (Remember that with multiple GPUs the memory is not added; it gets limited to that of the smallest card.)
Is it really worth the trouble? I don’t have the answer…
But the one thing that will for sure make a difference in overall performance is using the 560 to drive your monitor(s) and the 780 for rendering only. That way you don’t waste any VRAM or processing on refreshing the display (and it will let you keep using the computer while you render in the background…)
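
If you do go the “780 for rendering only” route, the per-card toggle lives in User Preferences > System; in newer Blender builds (2.78+) it is also scriptable. A sketch, assuming the Cycles add-on preferences API and that the device name contains “780”:

```python
import bpy

prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()            # populate the device list

# Enable only the 780 Ti for Cycles; leave the 560 Ti to drive the displays.
for dev in prefs.devices:
    dev.use = '780' in dev.name

bpy.context.scene.cycles.device = 'GPU'
```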

Good point. Seems like getting another 780 Ti would be the best option here :slight_smile: Thank you