[Obsolete] New Cycles Benchmark

Hi all, here is my proposal for a new benchmark file for Cycles.
It is based on the Cornell Box.
It is boring. :stuck_out_tongue:
It is GPU only.
My thoughts about it:
I would like to test my GPU, not my CPU, so it is low poly with no post processing to keep the CPU usage very low.
You could also use the default cube and render it at 10,000 samples, but that is really boring.

Open the file, hit F12, and write down (a scripted version is sketched after this list):
Blender version
OS
GPU
Render Time
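For anyone who prefers to script the procedure, here is a minimal sketch: a hypothetical helper (not part of the benchmark file) that renders headless and prints the numbers to write down, assuming the attached .blend is already set up for GPU rendering as described.

```python
# Hypothetical helper, not part of the benchmark file: render headless and
# print the numbers to write down. Assumes the attached .blend already has
# Cycles set to the GPU, as described above.
# Run with:  blender -b cycles_cornell_bench.blend --python render_bench.py
import platform
import time

import bpy

start = time.time()
bpy.ops.render.render(write_still=False)  # scripted equivalent of hitting F12
elapsed = time.time() - start

print("Blender version:", bpy.app.version_string)
print("OS:", platform.platform())
print("Render time: %02d:%05.2f" % (elapsed // 60, elapsed % 60))
```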

My result:
2.68a
Opensuse 12.3/64
GTX 760
02:00.46
Thank you and cheers, mib.

Attachments


cycles_cornell_bench.blend (1.59 MB)

The first result is F12 only, so it is like a dual GTX 590.
The second has tiles set to 180x180, to get 9 tiles and utilize all the GPUs.
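A minimal sketch of that tile setup, assuming the 2.68-era Python property names (not from the original post):

```python
# Hedged sketch: 180x180 tiles so the frame splits into 9 tiles and every
# CUDA device gets work; property names are from the 2.68-era Python API.
import bpy

scene = bpy.context.scene
scene.cycles.device = 'GPU'
scene.render.tile_x = 180
scene.render.tile_y = 180
```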



OS:
Windows 7 64bit

System:
MB Asus Rampage III Extreme
Intel i7-920 @ 2.66GHz
6GB RAM DDR3 in triple channel (3x2GB)
1500W + 1200W PSU

Graphic cards:
NVidia GeForce GTX 580 + 3x 590

Blender version: Buildbot r59147 64bit
Gentoo Linux
3 x NVidia GTX570
00:27.28

For the updated file (1000 samples):
00:58.27

[update]

Blender version: 2.68a (downloaded from blender.org)
Gentoo Linux
2 x NVidia GTX570 (sniff. One died… :frowning: )
01:01.18

Not a lot of difference. It does seem to somewhat support the idea that it doesn’t scale that well past two GPUs.

later,

Hi mib2berlin!
Here you go with one that is a bit greener - just 600W :smiley:

r59169
Mint Linux x86_64, NVIDIA 304.88
GeForce GTX 560
01:39.93

Thank you all, I will start to build a spreadsheet.
@eppo, is it 560 or 560Ti?

Cheers, mib.

Might I recommend, instead of ~1 minute on a decent GPU, making it around 3-5 minutes on the GPU? That means that when there are future speed changes to Cycles, they will be more obvious.

Hi doublebishop, that makes sense; I changed the file in the first post to 1000 samples.
This doubles the render time for me to 2 minutes on the GTX 760, but it is no fun to render on a GT 610. :slight_smile:

Since it is nearly 200% of the render time, I changed the posted times to double.
If you like, you can edit your posts with a 1000-sample render, but I think it is not necessary.
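If you would rather script the change than edit it in the UI, a minimal sketch (the property path is an assumption from the Cycles Python API; render time scales roughly linearly with samples, hence the doubled times):

```python
# Hedged sketch: raise the open benchmark file to 1000 samples, matching the
# updated attachment. Going from 500 to 1000 samples roughly doubles the time.
import bpy

bpy.context.scene.cycles.samples = 1000
```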

Thanks, mib.

Like it says, 560. No Ti, barebones only.

2.68a r59223
Win8 64bit
GTX 570
01:56.17

How in the world did Eppo get such a fast score on a 560?

And mib2berlin, I don’t understand your comment about the change to 1000 samples making your test 2 minutes on the GTX 760; what were the samples on your first test that produced a time of 59 seconds? This is confusing.

JWise, I am sorry, I forgot to edit the render time in the first post.
The 59 seconds was with 500 samples; I have now changed the first post to 02:00.46.
Really a bit confusing. :expressionless:

Cheers, mib.

Blender version : 2.68.2
OS : Windows 8
GPU : 580 3GB, 580 3GB, 580 3GB
Render Time : 00:39.92

Guys, I’ve been looking at the screen caps above and can’t help but see more noise in those images. I rendered my image with 1000 samples as the file was delivered and see a lot less noise; how many samples did Zajooo use? Please compare:


Open the file, hit F12. Linux is just faster at this, that’s all. The first test was 500 samples.

For 1000 samples I have 03:21.51.
Edit: here’s the 1:1 screengrab of the 1000 sample render http://www.pasteall.org/pic/57674

Edit 2: There should be a reason why some prefer not to use Cycles. Guess now I know for sure why:

AMD Athlon™ 64 X2 Dual Core Processor 3800+ @ 2009.38 MHz, tiles set to 64, threads fixed = 3, 1000 passes: time 48:03.51

For the record: the images do not differ by a single pixel.
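For reference, a minimal sketch of those CPU settings (64x64 tiles, threads fixed at 3), assuming the 2.68-era property names; not taken from the original post:

```python
# Hedged sketch of the CPU test settings above: CPU device, 64x64 tiles and
# a fixed thread count of 3, using the 2.68-era render/Cycles properties.
import bpy

scene = bpy.context.scene
scene.cycles.device = 'CPU'
scene.render.threads_mode = 'FIXED'
scene.render.threads = 3
scene.render.tile_x = 64
scene.render.tile_y = 64
```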

mib2berlin, I’m intrigued by how peculiar this is: a time of 2 minutes on a GTX 760 with 4GB, and a time of less than 2 minutes on a GTX 570 with only 1.28GB. What’s going on, NVidia?

BTW: render time with only 500 samples is 00:58.26 here.

Maybe we should ask DingTo if he can help explain this anomaly, as he’s quickly become a Cycles expert and I believe he’s worked on the OpenGL version control stuff in trunk. Is this a Blender optimization issue, or NVidia hardware crippling?

JWise, the GTX 6/700 cards are really optimized for gaming, but I am OK with the CUDA performance of my GTX 760.
I bought it because of its 4GB and lower power/noise.
It is not faster than my GTX 560Ti 448 Cores (sounds like a helicopter), and I knew that before I bought it.
If 1.28GB is OK for you, I would wait for Maxwell, the GTX 800?

Thank you all for testing, mib.

I would love 4GB, but that addition combined with less speed for $320 seems to me a bad deal. Effectively I would pay over $300 for 2.8GB of extra RAM, give up a minor amount of rendering power, and gain a slightly quieter room in which I work.

Regarding Maxwell, how do we know, and how long will it take to find out, whether NVidia cripples the GTX 800 series? I certainly am shy about being the first mover on that product; heck, I might learn that it’s slower than my old GT9800 the way this is going.

I understand the Quadro line for pros, but there is no chance that a Quadro is a good solution for me as a developer building things in Blender that run on UDK. And don’t tell me I should be running a Quadro and a Titan; I’m trying to self-fund my experiments in VR, hence Blender and not 3DS Max!

Blender 2.68a
Windows 7 Ultimate
GTX 560 Ti + GTX 460 v2
01:33.03


Can you run the test with both of those cards?
I wanna see the speed :smiley:

And the GTX 560 Ti 448 core is more like a GTX 570 than a GTX 560.

LordOdin, no I can’t; my mainboard is broken and I can use only one PCIe slot.
And I spent all my money on the GTX 760. :smiley:
But in earlier tests it was similar to the GTX 760 and a bit slower than a GTX 570.
I think with both cards I would still get > 1 minute.

Thanks for testing, mib.