Hi all, my proposal for a new benchmark file for Cycles.
It is based on the Cornell Box.
It is boring.
It is GPU only.
My thoughts about it:
I would like to test my GPU, not my CPU, so the scene is low poly with no post-processing to keep CPU usage very low.
You could also render the default cube at 10,000 samples, but that is really boring.
Open the file, hit F12, and write down:
Blender version
OS
GPU
Render Time
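If you want to avoid stopwatch mistakes, the steps above can also be collected from Blender's Python console. This is only a sketch: the `bpy` calls assume it runs inside Blender (outside Blender it falls back to the numbers from my post), and the GPU name is a placeholder you would fill in yourself.

```python
import time
import platform

def report_line(version, os_name, gpu, seconds):
    # Format one result line: Blender version | OS | GPU | MM:SS.ss
    m, s = divmod(seconds, 60)
    return f"{version} | {os_name} | {gpu} | {int(m):02d}:{s:05.2f}"

try:
    import bpy  # only available inside Blender
    start = time.time()
    bpy.ops.render.render()  # same as hitting F12
    elapsed = time.time() - start
    version = bpy.app.version_string
except ImportError:
    # Running outside Blender: demo values from the post above.
    version, elapsed = "2.68a", 120.46

# "GTX 760" is a hypothetical placeholder; put your own card here.
print(report_line(version, platform.system(), "GTX 760", elapsed))
```

Run it from Blender's scripting console after opening the benchmark file, then paste the printed line into your post.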
My result:
2.68a
Opensuse 12.3/64
GTX 760
02:00.46
Thank you and cheers, mib.
Might I recommend making it take around 3-5 minutes on a decent GPU instead of ~1 minute? That way, future speed changes to Cycles will be more obvious.
Hi doublebishop, that makes sense; I changed the file in the first post to 1000 samples.
This doubles the render time for me to 2 minutes on the GTX 760, but it is no fun to render on a GT 610.
Because it is nearly 200% of the render time, I doubled the previously posted times.
If you like, you can edit your posts with a 1000-sample render, but I don't think it is necessary.
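For anyone converting an old 500-sample time, a quick sketch of the doubling arithmetic (the time strings use the MM:SS.ss format Blender shows):

```python
def parse_time(t):
    """'MM:SS.ss' -> total seconds."""
    m, s = t.split(":")
    return int(m) * 60 + float(s)

def format_time(seconds):
    """Total seconds -> 'MM:SS.ss'."""
    m, s = divmod(seconds, 60)
    return f"{int(m):02d}:{s:05.2f}"

# Doubling my 500-sample time as an estimate for 1000 samples:
print(format_time(parse_time("00:58.26") * 2))  # 01:56.52
```

The posted 1000-sample time of 02:00.46 is slightly more than double the 500-sample 00:58.26, which matches the "nearly 200%" observation.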
How in the world did Eppo get such a fast score on a 560?
And mib2berlin, I don’t understand your comment that after the change to 1000 samples your test was 2 minutes on the GTX 760. What were the samples on your first test that produced a time of 59 seconds? This is confusing.
JWise, I am sorry, I forgot to edit the render time in the first post.
The 59 seconds was with 500 samples; I have now changed the first post to 02:00.46.
Really a bit confusing.
Guys, I’ve been looking at the screen caps above and can’t help but see more noise in those images. I rendered with 1000 samples, as the file was delivered, and see a lot less noise. How many samples did Zajooo use? Please compare:
mib2berlin, I’m intrigued by how peculiar this is: 2 minutes on a GTX 760 with 4GB, yet less than 2 minutes on a GTX 570 with only 1.28GB. What’s going on, NVidia?
BTW: the render time with only 500 samples is 00:58.26 here.
Maybe we should ask DingTo if he can help explain this anomaly as he’s fast become a Cycles expert and I believe he’s worked on the OpenGL version control stuff in trunk. Is this Blender optimization issues, or NVidia hardware crippling?
JWise, the GTX 600/700 cards are really optimized for gaming, but I am OK with the CUDA performance of my GTX 760.
I bought it because of its 4GB and lower power draw/noise.
It is not faster than my GTX 560 Ti 448 Cores (sounds like a helicopter), and I knew that before I bought it.
If 1.28GB is OK for you, I would wait for Maxwell, the GTX 800?
I would love 4GB, but that addition combined with less speed for $320 seems to me to be a bad deal. Effectively I would pay over $300 for 2.8GB of RAM but would have to give up a minor amount of rendering power and have a slightly quieter room in which I work.
Regarding Maxwell, how do we know, and how long will it take to find out, whether NVidia cripples the GTX 800 series? I certainly am shy about being a first mover on that product; heck, I might learn that it’s slower than my old 9800 GT the way this is going.
I understand the Quadro line for pros, but as a developer building things for UDK with Blender, there is no chance a Quadro is a good solution for me. And don’t tell me I should be running a Quadro and a Titan; I’m trying to self-fund my experiments in VR, hence Blender and not 3DS Max!
LordOdin, no I can’t; my mainboard is broken and I can use only one PCIe slot.
And I spent all my money on the GTX 760.
But in earlier tests it was similar to the GTX 760 and a bit slower than a GTX 570.
I think with both cards I would get > 1 minute.