RAM per CPU thread, CPU and GPU rendering questions

I’m thinking about building a 64-bit render farm and I was wondering how much RAM per CPU thread to use. In my workstation I have 16 GB of RAM and 12 threads. Is that too low? Should I jump up to 24 GB? Is 2 GB per thread OK? Or is it 2 GB per physical core? (My CPU has 6 physical cores but renders with 12 threads.)

Also, I have dual GTX 760s and I’m constantly running out of video RAM when 4K–8K textures or high poly counts are involved. I looked at the 780, as it is priced about the same as my two 760s combined, but it only has 3 GB of RAM compared to the 760’s 2 GB (and SLI doesn’t stack RAM, as far as I’m aware). So I’m losing an extra rendering tile, and it seems like 3 GB of RAM still isn’t enough to push past that 4K–8K texture boundary. It really seems like you need a Titan Black or above, or else you are just going to be rendering really small scenes. Sure, you can split the scenes, but when you are using mask layers and camera layers, it still needs to load those textures to give the correct output. Is that a correct assumption?
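(For a rough sense of scale, here’s a quick back-of-envelope sketch in Python, assuming uncompressed 8-bit RGBA textures and ignoring mipmaps, compression, geometry, and render buffers:)

```python
# Rough VRAM footprint of a single uncompressed texture.
# Assumptions: 8-bit RGBA (4 bytes/pixel), no mipmaps, no compression.
def texture_vram_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / (1024 ** 2)

for size in (4096, 8192):
    print(f"{size} x {size}: ~{texture_vram_mb(size, size):.0f} MB")
# 4096 x 4096: ~64 MB    8192 x 8192: ~256 MB
# A handful of uncompressed 8K textures can fill a 2-3 GB card before
# geometry and render buffers are even counted.
```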

Per-thread RAM use is pretty insignificant in CPU mode; you can just count the total system RAM regardless of the number of threads you are using. The threads share one RAM dataset for the really heavy stuff like textures and geometry.
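To put rough numbers on that, here’s a minimal sketch; the figures are assumed placeholders (real per-thread overhead depends on the scene and tile size), but it shows why the shared data dominates:

```python
# Back-of-envelope RAM model: one shared scene dataset plus a small
# per-thread overhead for tile buffers and intermediate values.
# Both figures below are illustrative placeholders, not measurements.
shared_scene_gb = 10.0   # textures + geometry, loaded once and shared
per_thread_gb = 0.1      # assumed per-thread overhead (usually small)

for threads in (6, 12, 24):
    total = shared_scene_gb + threads * per_thread_gb
    print(f"{threads:2d} threads -> ~{total:.1f} GB total")
# Doubling the thread count barely changes the total, because the heavy
# data is shared rather than duplicated per thread.
```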

Sure, you can split the scenes, but when you are using mask layers and camera layers, it still needs to load those textures to give the correct output. Is that a correct assumption?

Yes. In some cases you can get away with rendering things completely separately, but if an object is going to react with light, it needs to be loaded whether it is visible or not.

Hi, the Titans are overpriced.
Look for the GTX 780 6 GB or wait for the new GTX 860/70/80 coming in September/October.
They are announced with 8 GB.

Cheers, mib

I see. Well, that’s the thing. At $500 or more, I think it’s possibly better to put that $500 toward a render farm instead. 8K is good but it’s not great. I guess it all depends on how efficient OSL is on the GPU. OSL is my next thing to learn, along with Sverchok.

You work with render farms, right? Have you ever built one? I’m trying to find a dynamic chart that gives price comparisons for the most cost-efficient CPUs, but I can’t find one. I guess I’m not sure what to type into Google.

You might be asking the wrong questions here . . .

For any “purely CPU-intensive” workload, and 3D rendering most certainly is one, the number of threads should correspond to the number of cores or CPUs that are available (which Blender sets automatically). Setting it to a larger value will actually slow things down.

Each thread will try to allocate RAM for its own outputs and intermediate values, and there must be enough RAM for all of them to do so, and to keep that memory indefinitely, without ever incurring page faults. You should be able to run test jobs on the system that you have right now, e.g. running Blender in command-line mode and using a command such as time to gather statistics. The reported memory use will be an exaggeration, because memory allocators are always “lazy,” but if you carefully tweak the number-of-threads setting, you can get some idea of how much memory each thread is using.
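As a rough sketch of what that measurement could look like on Linux (the .blend path and thread counts are placeholders, and the peak RSS shown is the cumulative maximum across all child runs, so treat it as a loose indicator):

```python
import resource
import subprocess
import time

BLEND_FILE = "scene.blend"   # placeholder path to your scene

for threads in (1, 2, 4, 8, 12):
    start = time.time()
    # Render frame 1 in background mode with an explicit thread count.
    subprocess.run(
        ["blender", "-b", BLEND_FILE, "-t", str(threads), "-f", "1"],
        check=True, stdout=subprocess.DEVNULL,
    )
    elapsed = time.time() - start
    # Peak resident set size of all finished child processes so far
    # (reported in KB on Linux); it only grows, so compare runs loosely.
    peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    print(f"{threads:2d} threads: {elapsed:6.1f} s, peak RSS ~{peak_kb / 1024:.0f} MB")
```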

But the biggest concern, by far, will be physical … namely, heat. That microprocessor is going to be running as hot as it can possibly get, and staying that way for many hours. You’ve got to be sure that the system you buy was truly engineered for that. Most systems aren’t, because they don’t have to be.

If you are using GPU rendering, then the whole situation changes utterly and completely, because now you have one resource that all of the threads must share. That fact virtually eliminates the value of “threading” altogether. Now the threads are all waiting in line for their chance to shove some data into the GPU, push the Start button, wait for the GPU cycle to finish, and then gather back the results. A large crowd of people won’t get through a turnstile any faster … no, it will be slower, because they’re pushing and shoving each other.
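A toy illustration of that turnstile effect (this is not how Cycles actually schedules GPU work, just a sketch of threads queuing on one shared resource):

```python
import threading
import time

gpu_lock = threading.Lock()   # the single GPU: only one tile renders at a time
TILES = 8

def worker(tiles):
    for _ in range(tiles):
        with gpu_lock:        # every thread queues here
            time.sleep(0.05)  # stand-in for one GPU tile render

for num_threads in (1, 2, 8):
    start = time.time()
    per_thread = TILES // num_threads
    workers = [threading.Thread(target=worker, args=(per_thread,))
               for _ in range(num_threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(f"{num_threads} threads: {time.time() - start:.2f} s for {TILES} tiles")
# The total time stays roughly TILES * 0.05 s no matter how many threads
# there are, because the work is serialized on the shared GPU.
```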

OSL doesn’t work with the GPU at all. And it’s unlikely to work on the GPU any time soon as it’s maintained by Sony Pictures Imageworks and they have little to no interest in rendering on the GPU at the moment.