CPU is faster than GPU

Don’t know what’s wrong with it… I have this scene rendered at 100 AA samples:
GPU render: >11 mins


Frame: 1032
Tile size: 256x256 (for both devices)
This is just a small example I’m bringing into this post. I’ve tried tweaking the settings, including tile sizes, and experimenting on THREE different computers (two with Nvidia, one with AMD). The results were pretty much the same: either the GPU was faster by a small fraction, or it was horribly slower than the CPU.

The image above was rendered with:

Blender version 2.75
Computer: 4 GB RAM, Intel Core i7-5500U @ 2.4 GHz
Graphics card: Nvidia GeForce 930M

Getting so frustrated, I decided to get help from the community!
Is there something wrong with the file, or with me, or with something else?..
Or… is that all my computers can do?

Blender file: http://www.pasteall.org/blend/39349
All images are packed!
Oh, and Motion Blur is on for the render!

Rule of thumb:
CPU = small tile sizes (e.g. 32 x 32),
GPU = large tile sizes (e.g. 256 x 256).
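
If you prefer setting this from a script, here’s a minimal sketch using Blender’s Python API (bpy; property names as in the 2.7x Cycles API, and the values are just the rule-of-thumb sizes above):

    import bpy

    scene = bpy.context.scene
    scene.cycles.device = 'GPU'  # switch to 'CPU' when rendering on the processor
    # Large tiles for the GPU; drop to e.g. 32 for CPU rendering
    scene.render.tile_x = 256
    scene.render.tile_y = 256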

A low range GPU (or even worse: a notebook GPU) will give you no render time benefit over a reasonably modern CPU.

You know, I’m aware of that. 18x18, 32x32, 128x128, 256x256. I’ve tried them all. So… that’s not it.

Well, in your first post you wrote about using a tile size of 256 x 256 for both devices.

And btw, did you read the rest of my post as well?
A notebook GPU like your 930M (the M stands for “mobile”) or any low to mid range desktop GPU will not be faster than a powerful CPU. That’s just how it is.

And unless you post very specifically which CPUs and GPUs you compared against each other in the other PCs involved, there is no reason to assume something fishy with your setup.

BTW, using “Progressive Refine” for the final render is usually not a good idea, as it is (in my experience) slower and uses significantly more memory than tile rendering. And using “Progressive Refine” is of course also the reason why you see no performance difference with different tile sizes…
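
If you’d rather flip that switch from the Python console than from the UI, something like this should work (assuming the 2.7x Cycles property name use_progressive_refine):

    import bpy

    # Render tile by tile instead of progressively refining the whole frame;
    # with this off, the tile size settings actually take effect again.
    bpy.context.scene.cycles.use_progressive_refine = False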

The 930 is probably one of the worst GPUs you can have for GPU rendering. It’s probably best to replace it with something decent like a 970.

It renders in 3 minutes on my GPU (GTX 670) and in 3:20 on my CPU (2600K). You’re right, the difference isn’t that drastic.

Without Motion Blur it’s 1:35 on the CPU and only 35 seconds on the GPU. Seems like Motion Blur eats up some of the GPU’s advantage.

I noticed that you had “Progressive Refine” checked. If you use Progressive Refine, the tile size is irrelevant and everything takes longer. For final renders you should not use Progressive Refine.

Progressive Refine… Never knew of it before… Thanks, dude!
Spent all day changing the tile sizes…

Another thing, in addition to what Lumpengnom has already told you:
If you want SSS on the GPU, you must enable “Experimental” under “Feature Set” in the Render tab. So perhaps it will be even somewhat slower on the GPU with SSS.
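
For reference, the same setting via Python (a sketch, assuming the 2.7x Cycles enum value 'EXPERIMENTAL'):

    import bpy

    # The Experimental feature set is what allows SSS on the GPU in Cycles 2.7x
    bpy.context.scene.cycles.feature_set = 'EXPERIMENTAL'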

PS: I knew many things that slow down the GPU; motion blur was not one of them. So, added to the list.
I really need to start researching how to build a CPU farm.

970 and 930 are so close in number. Will it make any difference?
Thanksss for your test on your devices. It’s really valuable to me.
I shouldn’t have turned on Motion Blur :frowning:
Should I turn it off now? Will that make the motion look strange?
And uhmm, another noob question: as you said, you have a GTX 670. Isn’t it supposed to be slower than the 930, since 670 is lower than 930 in number?

Oh, SSS too? Rendering with the GPU is not as easy as I thought…

Something that has nothing to do with speed, but I see you are making an animation: under “Sampling” in the Render settings, select the button to the right of the “Seed” field to get random noise values per frame. Otherwise, a static noise pattern can be annoying in an animation.
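
That button maps to a Cycles property you can also set from a script; a small sketch (assuming the 2.7x name use_animated_seed):

    import bpy

    # Re-seed the sampler on every frame so the noise pattern changes per frame
    # instead of sticking to the image like a static texture.
    bpy.context.scene.cycles.use_animated_seed = True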

Oh god you should’ve told me sooner!!!
Aizz, being lazy about posting blend files is not good at all. Upload them and folks will be able to help you more.
So much for my first time final-rendering an animation…
Thank you!