CPU, Graphics Card and CUDA

Hi blenderartists!
I’ve got some questions regarding hardware for a new PC.
If I use my graphics card to render, does my CPU also render or would it be “free to use”?
Would a build with an R9 290 work well with Blender? I haven't had much time to search in depth yet, but it seems I wouldn't be able to use Cycles rendering with the R9 290? Another argument someone mentioned was that CUDA “doesn’t do much”, that the main work would still be done by the CPU. Is that true?
Another aspect: the manual states that if I render my projects with my GPU, it can't refresh my monitor. That matches my experience, but at the moment I only have one monitor, so I mostly just leave my PC rendering anyway.
What would happen if I had two monitors? I'd guess the other monitor wouldn't refresh either; any experiences? If I had a CPU with an integrated graphics chip, would that be able to refresh the second monitor, or would the CPU also be busy with the render?

Regards, Rhynden

Well, your GPU would be busy doing your rendering, so you couldn't play games while waiting.
If you have just one graphics board, I think it would still update the screen; it does everything, it just has less memory left over.
For a GPU render, the whole scene needs to fit into the GPU's memory.
If you want a dedicated GPU just for render calculations, you could use a motherboard with a simple graphics card (or onboard graphics) for the display and add a second, beefed-up card purely for rendering.

I don't know how beefy the card you're mentioning is; does it support CUDA?
Usually the full CUDA implementation is only on NVIDIA cards and derivatives, but hope is not lost.
Actually, this was the big news this weekend: http://www.blendernation.com/2015/03/24/amd-patches-cycles-for-opencl/
Apparently AMD boards use OpenCL, and they're working on improving Cycles support for it; I don't know whether that affects your card.
It's something you'll need to research.

Also keep in mind that the GPU only does the final rendering phase.
It doesn't make the various types of simulation faster, like cloth sims, explosions, etc.; for those the CPU is still what's needed.
I own an 8-core i7, but I still wish I had a recent NVIDIA graphics card… but it's not possible to replace it in my laptop.
It might be wise to look at desktop models, ones you can easily upgrade if your budget permits.

If you render with the GPU, your CPU is free to use. I often render stuff with the GPU and at the same time have a particle simulation calculated on the CPU overnight.
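
If you want to kick an overnight render off without even keeping the UI open, a small launcher script like this also works. Just a rough sketch: it assumes Blender is on your PATH, and `my_scene.blend` is a placeholder for your own file (the render uses whatever device, GPU or CPU, is saved in that file's settings):

```python
import subprocess

# Launch a headless Blender render of the whole animation:
#   -b = run in background (no UI), -a = render the animation frames.
# While this occupies the GPU, the CPU stays largely free for sims or other work.
subprocess.run(["blender", "-b", "my_scene.blend", "-a"], check=True)
```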

Regarding monitor refreshing: I have a four-monitor setup with one GTX 670 and a Quadro 2000. The refreshing slowdown is always there, even if I render with only one GPU.

CUDA does a lot. More CUDA cores mean faster GPU rendering.

Your UI will slow down if any monitor is physically connected to the CUDA-rendering GPU. If all your monitors are connected to the integrated GPU, then it won’t. Another option is to plug in a second GPU to drive only the monitors.
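
If you want to control explicitly which card Cycles renders with (so the one driving your monitors stays out of it), you can do it from the Python console. Rough sketch only: it assumes a recent Blender build (the preferences path is different in 2.7x), and the "960" name check is just a placeholder for whatever your render card is called:

```python
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"   # or "OPENCL" on AMD cards, where supported
prefs.get_devices()                  # refresh the detected device list

for dev in prefs.devices:
    # Tick only the dedicated render card; leave the display GPU (and CPU) unticked.
    dev.use = dev.type == "CUDA" and "960" in dev.name
    print(dev.name, dev.type, "enabled" if dev.use else "disabled")

bpy.context.scene.cycles.device = "GPU"
```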

Ah, this is interesting. I was wondering why my UI was slowing down.

Well, I'm trying to decide whether to take a Xeon E3-1231 v3, which has Hyper-Threading, or an i5 4690K, which has integrated graphics and better single-core performance. If the “mini graphics card” on board the i5 4690K could update my screen while my main graphics card renders, that would be perfect.
Which is better, OpenCL or CUDA?

The refreshing slowdown is always there, even if I render with only one GPU.

Could you watch a stream at 720p/1080p smoothly? Could you test that for me? :smiley:

Hmm… I just tried disconnecting the two monitors from my GTX while rendering, but the slowdown didn't go away.

So an ATI card then?
If so, would you still have the same VRAM limit you get when stacking CUDA cards?

I can't even move my mouse cursor without losing it for three or so seconds every three or so seconds. It's like the system is responsive for three seconds, then completely unresponsive for three seconds, then fully responsive for three seconds again. You're better off using a mobile phone for web surfing while rendering, which is what I'm doing at the moment, hence the spelling errors.

That's completely untrue, at least for Cycles. Nearly 100% of the actual rendering process (after an initial preparatory step) happens on the GPU.

Ah, thanks for clearing that up. :slight_smile:

Thanks for trying, Lumpengnom! Yeah, I'll do that then :smiley: Could someone tell me whether Hyper-Threading is actually helpful or not?

You disconnected them while rendering? Try rebooting, I guess. Adding in a second GPU is the common solution to the UI slowdown issue. If it doesn’t work for you: YMMV, I guess.

I can't even move my mouse cursor without losing it for three or so seconds every three or so seconds. It's like the system is responsive for three seconds, then completely unresponsive for three seconds, then fully responsive for three seconds again. You're better off using a mobile phone for web surfing while rendering, which is what I'm doing at the moment, hence the spelling errors.

It’s not nearly as extreme for me. Try reducing your tile sizes?

While you can run both NVIDIA and AMD cards in one system, I can imagine it causing trouble one way or another. I don't see a reason to get an AMD card, tbh.

If so, would you still have the same VRAM limit you get when stacking CUDA cards?

The VRAM issue is always there, no matter how many GPUs you have. Also, GPUs do not share memory, so their memory does not add up. If you render with multiple different GPUs, the GPU with the least amount of memory will define the upper limit.

Ah, OK, thanks for the info. Reducing the tile size often causes significantly longer render times. I've found the best tile size to be around 256 to 350 or so in several tests. Making the tiles smaller, something like 64, has caused up to 50% longer render times for me.
It can be useful to reduce the tile size and try rendering with the GTX and the Quadro together, but usually the Quadro just slows everything down.
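
For anyone who wants to test tile sizes from a script rather than the UI, here's a minimal sketch (for Blender builds of that era, before 3.0 dropped manual tile sizes; 256 is just the value that worked best in the tests above):

```python
import bpy

scene = bpy.context.scene
scene.cycles.device = "GPU"   # assumes a GPU is already enabled in the user preferences
# GPU rendering generally prefers larger tiles; very small tiles (e.g. 64) cost render time here.
scene.render.tile_x = 256
scene.render.tile_y = 256
```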

Right, that is what I was curious about: if my primary display device, whether onboard or slotted, is a card from a different chip maker, would the CUDA card(s) be limited by its VRAM? More specifically, with, say, a 2 GB shared-memory Intel HD 4000 driving my monitors, would a 4 GB GTX 960 be limited to half its VRAM, or does Cycles even notice the Intel chip when it builds for rendering?

The limit only applies to the GPUs that are actually rendering.

I had a Quadro 2000 in my work machine as well. I replaced it with a $140 750 Ti and doubled my render speed. I know that workstation cards are tuned for stability over performance, but using that Quadro was a very disappointing experience. AutoCAD didn't even recognize it, even though it was recommended hardware. Just a bunch of marketing BS.

Agreed. The Quadros are nonsense. At least for 3D Rendering software like Blender I see no advantages. They have a lot of disadvantages for CUDA rendering, though.

Quadro cards are mainly tuned for exceptional profit margins. You pay a hefty premium just to get the raw performance of a comparable consumer GPU.

However, Quadro GPUs are significantly faster than GeForce GPUs in “participating” applications, if you look at benchmarks like SPECviewperf. This also affects Blender, since NVIDIA considers two-sided shading a professional feature. Also, if you're doing anything in double-precision floating point (which you probably aren't), the Quadros aren't artificially limited (as badly).

I will say that I did feel the double-sided shading impact when I switched away from the Quadro. Also, the antialiasing was cleaner and more consistent. Most of my frustration with the card stemmed from it never being recognized by AutoCAD. That is the type of program this card was designed for, and it never worked any better than software rendering. And the ping-pong competition of blame between Autodesk and NVIDIA is enough to make me dizzy.

Just out of interest, what does that mean in a 3D program context? What, as negligible as it may be, does double precision influence? And what does it influence in other software?