CPU vs GPU rendering Setup

Here’s a simple test I did with my CPU, an i7 4790K, using the new Cycles BMW test (20 samples instead of 100, as I’m using a CPU):

4C/8T at 4.8 GHz: 45.67 seconds
4C/8T at 2.4 GHz: 90.63 seconds

As you can see, it rendered twice as fast at twice the clock speed.
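The scaling checks out if you take the ratio of the two runs; a quick sketch using the times and clocks from the test above:

```python
# Render time should scale inversely with clock speed (same core count).
time_slow = 90.63   # seconds at 2.4 GHz
time_fast = 45.67   # seconds at 4.8 GHz

clock_ratio = 4.8 / 2.4          # 2.0x the frequency
speedup = time_slow / time_fast  # measured speedup, ~1.98x

print(f"clock ratio: {clock_ratio:.2f}x, measured speedup: {speedup:.2f}x")
```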

However, the power consumption does not follow that path:
At 2.4 GHz, the i7 4790K consumed 45W and ran at 35°C, while
at 4.8 GHz, it consumed 120W and ran at 64°C. That is almost 2.66x the power consumption for 2x the performance.
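That extra-steep power curve is what you’d expect from CPU physics: dynamic power scales roughly with frequency times voltage squared, and the overclock needs more voltage. A rough sketch (the voltages here are illustrative assumptions, not measured values):

```python
# Dynamic CPU power scales roughly as P ∝ f * V^2.
# The voltages below are assumed for illustration, not measured.
f1, v1, p1 = 2.4, 1.00, 45.0   # GHz, volts, measured watts at 2.4 GHz
f2, v2 = 4.8, 1.15             # the overclock needs extra voltage (assumed)

p2 = p1 * (f2 / f1) * (v2 / v1) ** 2
print(f"predicted power at {f2} GHz: {p2:.0f} W")
```

With those assumed voltages the prediction lands near the measured 120W, i.e. the ~2.66x figure is mostly the voltage bump, not the frequency alone.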

So to overclock the 8-core, you would need really good cooling (mine is water-cooled), but some quick research shows that 4.4 GHz is fairly normal.

zee, could it be that your cooler used the extra 0.66x to keep the temps down, since the temps did not double? Or is that on a totally different power system? I am assuming active cooling, not passive (fans and pumps).

rdo3, it was the TDP of the processor as recorded by Intel XTU, not system power. The pump/fans are manually set to my preferred speed anyway; they don’t change.
The increased power consumption is also due to the higher voltage the CPU needs to run at 4.8 GHz.

There is a fairly easy way to test the kind of performance you will get with the Xeons: try it on an Amazon cloud instance.

The c3.8xlarge instance has 2x 2.8 GHz Intel E5-2680 v2, for a total of 16C/32T and 60 GB RAM. It’ll cost $3/hour to trial it on a Windows instance (which is a lot easier to get Blender running on, thanks to RDP).
The pricing is here: http://aws.amazon.com/ec2/pricing/

Here’s my benchmark comparing against the c3.8xlarge instance, using the BMW27 benchmark (CPUs used 32x tiles):

2x NVIDIA GTX 970 (256x tiles): 41s
Intel Core i7 4790K @ 4.8 GHz: 193s
2x Xeon E5-2680 v2 @ 2.8 GHz: 96s
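Putting those times into ratios makes the gaps clearer; a quick sketch with the numbers above:

```python
# BMW27 render times from the runs above, in seconds.
times = {
    "2x GTX 970": 41,
    "i7 4790K @ 4.8 GHz": 193,
    "2x Xeon E5-2680 v2": 96,
}

baseline = times["i7 4790K @ 4.8 GHz"]
for name, t in times.items():
    print(f"{name}: {baseline / t:.2f}x faster than the i7")
```

So the Xeons come in at roughly 2x the i7, and the two 970s at roughly 4.7x.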

If anyone has a heavy scene to test with, I’d love to give it a go, or anything that uses features that may not be so GPU-optimized (hair/volumes, large scenes/textures, etc.).

Not sure if the EC2 servers get 100% of the CPU performance, or what kind of hit there may be from running on a virtualised machine.

Zeealpal: that first test I understand completely, and it looks like definitive proof that the OC’d i7 can match the Xeons. But the other one I am unfamiliar with, and it seems to show the Xeons rendering in almost exactly half the time of the i7 (kind of counter-arguing the first test). It also shows the 970s rendering in under half the time of the Xeons, which is impressive when the costs are considered.

The first test was just showing that doubling the clock speed results in an exact 2x performance increase; only the i7 was used, with the clock speeds manually adjusted.

The second test was showing the difference between the Xeons’ 16 cores + HT at 2.8 GHz and the i7’s 4 cores + HT at 4.8 GHz.
The Xeons have a cumulative clock speed of 16 × 2.8 = 44.8 GHz, while
the i7 has a cumulative clock speed of 4 × 4.8 = 19.2 GHz, which is a viable comparison for multithreaded tasks like rendering.

What I was trying to get at is that an 8-core + HT i7 5960X Haswell-E clocked at 4.4 GHz would have a cumulative clock speed of 8 × 4.4 = 35.2 GHz, which is quite close to the Xeons in rendering performance, will be faster in single-threaded areas, and costs less than a dual-socket Xeon setup; the savings could go towards a better GPU, for example.
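The cumulative-clock comparison is just cores × GHz; a sketch covering the three setups discussed (it ignores IPC differences and Hyper-Threading, so it is only a rough multithreaded-throughput proxy):

```python
# Cumulative clock speed (cores * GHz) as a rough proxy for
# multithreaded rendering throughput. Ignores IPC and HT effects.
setups = {
    "2x Xeon E5-2680 v2 @ 2.8 GHz": 16 * 2.8,
    "i7 4790K @ 4.8 GHz": 4 * 4.8,
    "i7 5960X @ 4.4 GHz": 8 * 4.4,
}

for name, ghz in setups.items():
    print(f"{name}: {ghz:.1f} GHz cumulative")
```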

Finally, that shows that getting the 12 GB Titan is going to be more cost-effective than any number of Xeons, provided what you are rendering can fit in the GPU’s memory. Cheaper to get Titan X’s and a normal i7 system instead.

Ah, I understand now. Yes, that is a good point. Unless a person wants the absolute best CPU performance regardless of cost, the 8-core i7 is the ideal choice, albeit still quite pricey itself. Thanks for the info; it changed my view on what is best for my future build, along with the Xeon mobos’ limited cooler choices (I only use the best from Noctua) and mobo layout (difficult to access the RAM with big air coolers installed). Now unless I find proof that the top 6-core i7 is nearly as good as the 8-core (for half the cost), or something else hits the market, my CPU and mobo are set.

Now what the OP thinks is another question…

I have a water-cooled i7 5960X in my primary workstation (64 GB RAM).
I also have 2 Titan Blacks in that PC: one for the screen, one to render while working.
In no way, shape or form can the i7 5960X keep pace with just one Titan Black, let alone the 980s and Titan X cards I have in my render boxes.

The memory limit even of my older cards is easily managed.

I may have missed the point of this thread, but my advice to any and all would be: go GPU all the way with Blender.

P.S. I also use Houdini and Mantra, which is damn fast for brute-force rendering (4-5 times faster than Cycles on CPU), yet even Mantra on my i7 cannot match a single Titan Black on a similar scene in Cycles.

Hi, I’ve done many PC builds over the years, but my brother wants me to build him a Blender-oriented hackintosh for around 3k CAD. I personally have no experience with Blender or any other 3D program, so I’ve been trying to figure out where most of the budget should be focused: CPU vs GPU (like the OP). After reading this thread I’m leaning towards what Sanjiro666 recommends and putting most of the money into multiple GPUs, as dual-Xeon setups don’t seem to be good bang for your buck.

Please correct me if I’m wrong, but from the little research I’ve done, it seems that although Blender does not support SLI, you can set up Blender to render one frame per GPU at the same time (so 2 cards render 2 frames simultaneously)? With this in mind I was considering getting 2-3 GTX 970s with an i7 4790K.
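For what it’s worth, one common way to do the frame-per-GPU trick on an animation is to launch one background Blender instance per GPU, each rendering a different slice of the frame range (how you pin each instance to a GPU varies by Blender version, so check the docs for yours). The frame split itself is simple to compute; a sketch:

```python
# Split an animation's frame range into one contiguous chunk per GPU,
# so each background Blender instance renders a different slice.
def split_frames(start, end, num_gpus):
    total = end - start + 1
    chunks = []
    for i in range(num_gpus):
        s = start + i * total // num_gpus
        e = start + (i + 1) * total // num_gpus - 1
        chunks.append((s, e))
    return chunks

# e.g. frames 1-100 across 2 GTX 970s:
print(split_frames(1, 100, 2))  # [(1, 50), (51, 100)]
```

Each chunk can then be rendered with something like `blender -b file.blend -s START -e END -a`, one instance per card, with the compute device set per instance.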

My question, I guess, is whether this is a good way to go. At that price point it seems the GTX 970 gives you the most CUDA cores and RAM per dollar. Also, this might be very stupid of me to ask, but when a GPU is rendering, can it still be used for display? I ask because Sanjiro666’s setup mentions one Titan used for the screen and the other for rendering.

Thanks!