The new Cycles GPU 2.71 Benchmark

Hi, here is a first spreadsheet. It is not sorted correctly because of format problems.
Maybe someone could look into the .ods file; I have no time and am tired from dealing with the wrong time formats in many posts here.


Cheers, mib

Thanks for the spreadsheet, but the columns need to be sorted by time.

mib

I’ve sorted the spreadsheet. It’s in Excel format though; I have converted it to .ods, but don’t know how to use it in that form. Anyway, here is a screen grab from Excel.


Cheers

Wig
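The sorting trouble above comes from the mixed time formats people post. A minimal sketch of normalizing them to seconds so they sort numerically; the helper name and the sample list are just for illustration, using times taken from this thread:

```python
import re

def to_seconds(t):
    """Parse benchmark times like '01:44.50' into float seconds.

    Accepts both 'MM:SS.ss' and the occasional 'MM:SS:ss' variant
    seen in this thread; the last separator is read as a decimal point.
    """
    m = re.fullmatch(r"(\d+):(\d+)[.:](\d+)", t.strip())
    if not m:
        raise ValueError(f"unrecognized time format: {t!r}")
    minutes, seconds, frac = m.groups()
    return int(minutes) * 60 + int(seconds) + int(frac) / 10 ** len(frac)

times = ["04:44.69", "01:44.50", "02:41.38", "00:48:88"]
print(sorted(times, key=to_seconds))
# → ['00:48:88', '01:44.50', '02:41.38', '04:44.69']
```

A spreadsheet can do the same with a helper column of seconds, but a one-off script avoids fighting each application's time-cell autodetection.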

EDIT: Hi Wig,
May I suggest LibreOffice for *.ods files (there is a Windows 32-bit version). Download link:
http://www.libreoffice.org/download/

EDIT2: Hi mib,
I don’t see your own times in the spreadsheet, can you please add them (unless there is a reason not to)?


Ubuntu 14.04 64bit

GPU1: GTX 580
GPU2: GTX 480
GPU3: GTX 480

all cards, tile 205x205
01:44.50

GTX 480 (2x), default tile
02:41.38

GTX 580, default tile
04:44.69

GTX 480, default tile
05:05.48

After looking at the sheet, I thought my times looked a little too good and wanted to reproduce them. It just didn’t seem right that my 770 and 680 are faster than Wig42’s Ti and 580, but…


I guess they somehow are… :confused:
I pushed the GTX680 a bit and even improved my time, so here is an update for me:

Win 7 and both cards Oc’ed:
02:01.48

The 6 series really got a big boost in 2.71, and I guess that’s why I can hang on with the “big boys”?

Hi,

Win 7 Ultimate x64
GTX 570 OC, default tile
05:40.21

That’s the default clock of an Asus GTX 570 DirectCU II.

Hi all, yes, Fermi’s time is over since Blender changed to CUDA Toolkit 6.0.
My GTX 560 Ti slowed down compared to my GTX 760.
I’m surprised that the GTX 750 is also not so good compared to 2.70.
I made some tests with CUDA Toolkit 6.5 RC and got slightly better performance for the 760 but not for the 560.
Maxwell cards will profit from 6.5, I suppose.
Thanks Wig42 for the sheet; ltpdttcdft, I’ll add my times to the next sheet. :slight_smile:


Cheers, mib


It looks like the 6xx cards pull slightly ahead of their 5xx predecessors, while the 780Ti and Titan Black jockey for the lead.

EDIT:

Hi Wig, maybe try a 205x205 tile size (9 tiles instead of 4). This should divide the work more evenly, since the GTX 780Ti is basically twice as fast as the GTX 580.
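The reasoning here can be sketched with a few lines of Python. With only 4 tiles across two unequal GPUs, the fast card finishes its share and then idles while the slow card grinds through its last tile; 9 smaller tiles let the fast card pick up extra work instead. The 600x600 resolution and 300-pixel default tile below are assumptions for illustration, not the benchmark scene's actual settings:

```python
from math import ceil

def tile_count(width, height, tile):
    """Number of render tiles for a given image size and square tile size."""
    return ceil(width / tile) * ceil(height / tile)

# Hypothetical 600x600 render:
print(tile_count(600, 600, 300))  # larger default-style tiles -> 4 tiles (2x2)
print(tile_count(600, 600, 205))  # 205x205 tiles              -> 9 tiles (3x3)
```

With 9 tiles, a card that is twice as fast can take roughly 6 of them while the slower card takes 3, so both finish at about the same time.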

Hi again,

Win 7 x64
GTX 680 OC, default tile
04:15.12

It’s a POV GTX 680 Ultra Charged with 4 GB VRAM.

17:06.00

MacOSX 10.9
iMac 27’’
NVIDIA GeForce GT 755M 1024MB

I’d like to buy an iMac to upgrade my old PC hardware. I am worried about the render time if the iMac does not have an NVIDIA CUDA card. Will I lose performance on renders? Can anyone here point to a comparison of render times between Macs and other kinds of hardware? Recommendations?

hi

Win 7 x64
4x Tesla K20x, default tile, simply press F12
01:15.74

Win 7 x64
1x Tesla K20x, default tile, simply press F12
02:52.03

Win 7 x64
5x Tesla K20x, tile 128x128, simply press F12
00:48.88

Hi Tio Ilmo, you would need to buy a 27" iMac with a GTX 780M to get 50% of the performance of a real GTX 780, and you need many $$$ for it.
For this money you could buy a very good PC and two GTX 780s.
These iMacs are cool but have a bad price/performance ratio on the CPU side too.

Cheers, mib

Windows 7 64bit.

Blender 2.71. Hash 9337574.

EVGA GTX 780 3GB

02:49.66

You won’t get high performance from an iMac. iMacs are low-energy-consumption computers; they use laptop components. They consume a third to a quarter of what an average desktop PC does, and that shows in their performance.

On the GPU side, as you can see from my stats (and my iMac is the latest model), I get around 17 times less performance than a dual Titan setup, which of course costs as much as an iMac by itself, and around 3 to 4 times less performance than an average modern desktop GPU.

Personally I love Macs, macOS, and especially the iMac monitor. I knew an iMac would not give me a competitive 3D render machine before I made the purchase. I can’t see myself going back to PCs after 7 years of owning iMacs.

If you decide to go for an iMac without an NVIDIA card, expect rendering to be a further 2-3 times slower, because Cycles will use your CPU instead.

Of course, faster render times do not mean better graphics / better art :wink:

GPU: GTX 660
OS: Win7 64 Family premium

EDIT: Well, I didn’t see the big bold text in the first post. Here they are anyway…

2x2 tiles: 06:36.88
http://www.pasteall.org/pic/show.php?id=74372

Progressive: 07:20.08
http://www.pasteall.org/pic/show.php?id=74373

Two new cards to add to the pile!

OSX 10.9.2

GTX 760: 05:22.36

GTX 690: 04:22.65

Both: 02:28.29

Side note: sorry if this is off topic, but Blender is only using 1 tile at a time with the 690 (I just installed it today!). Shouldn’t it render 2 at a time? My understanding is that OSX does not honor the internal SLI of the 690, so it should function as 2 separate cards. I don’t want to distract from this thread, and I’m new here, so if I need to ask this somewhere else please direct me.

Windows 7 x64
Gainward GTX 780 6GB
02:53.54

Just bought a 750TI for private use:

Windows 7 Professional 64bit
GeForce GTX 750 Ti
07:32.47

I’d like to thank you for creating this thread. I was looking for a way to get faster rendering in Cycles. I have an ATI graphics card and remembered reading about CUDA a few years ago. I discovered that ATI is not well supported in Blender and found this thread. I checked out the benchmark list and bought a GeForce GTX 780. I am glad the list exists, so keep it up!

My result is here:
windows 7 professional x64
GeForce GTX 780
02:53.78