What is the best graphics card today for Blender?

Think …?.. Jens agrees with me :frowning: what you get can be confusing … guess it's how many weeks on beans on toast :wink:
I love the speed I see from my 970 while messing with a shader, and I've no worries sending the blend to my older comp to render while I do summat else. Read lots of reviews, check out the benchmarks (on here), narrow it down to two, then toss a coin {good luck}

@Jens I spent a year or so in Hamburg in the early 70s … is it still as beautiful as my memory??

Here are some benchmarks for the EVGA GeForce GTX Titan Z, Titan X, and GTX 980 Ti (Superclocked), as well as a comparison between the X99 and Z97 chipsets. I couldn't find these anywhere; hope it helps.

-CPU i7-4770K overclocked @ 4.5 GHz
-ASUS ROG MAXIMUS VII HERO LGA1150 DDR3 M.2 SATA 6Gb/s USB 3.0 Intel Z97 ATX Motherboard
-16GB (2x8GB) DDR3 Corsair Vengeance Pro @ 1600
-EVGA GeForce GTX TITAN Z 12GB GDDR5, 768-bit, Dual-Link DVI-I, DVI-D, HDMI, DP, SLI Ready Graphics Card 12G-P4-3990-KR
-EVGA GeForce GTX TITAN X 12GB GDDR5, 384-bit, PCI-E 3.0, DVI-I, 3x DP, HDMI, SLI, HDCP, G-SYNC Ready Graphics Card 12G-P4-2990-KR
-EVGA GeForce GTX 980 Ti ACX SC+ ACX 2.0+ Graphics Card with Backplate 06G-P4-4995-KR
-Windows 7 Ultimate 64-bit | Blender 2.75a

RESULTS

Auto Tile Size add-on, 240x180 - Cache BVH, Persistent Images, and Use Spatial Splits all OFF

Time: 26.52 (GeForce GTX 980 Ti + Titan Z (2x) - CUDA)
Time: 29.99 (GeForce GTX TITAN X + Z (2x) - CUDA)
Time: 40.43 (GeForce GTX TITAN Z (2x) - CUDA) (a single card, but its two GPUs count as 2x)
Time: 58.95 (GeForce GTX 980 Ti - CUDA)
Time: 59.47 (GeForce GTX TITAN X - CUDA)
Time: 4:30.27 (CPU)

Auto Tile Size add-on, 480x270

Time: 25.38 (GeForce GTX 980 Ti + Titan Z (2x) - CUDA)
Time: 26.10 (GeForce GTX TITAN X + Z (2x) - CUDA)
Time: 37.74 (GeForce GTX TITAN Z (2x) - CUDA)
Time: 50.17 (GeForce GTX 980 Ti - CUDA)
Time: 50.18 (GeForce GTX TITAN X - CUDA)

Auto Tile Size add-on, 960x540

Time: 29.37 (GeForce GTX TITAN X + Z (2x) - CUDA)
Time: 39.71 (GeForce GTX 980 Ti + Titan Z (2x) - CUDA)
Time: 41.89 (GeForce GTX TITAN Z (2x) - CUDA)
Time: 50.95 (GeForce GTX TITAN X - CUDA)
Time: 51.31 (GeForce GTX 980 Ti - CUDA)

X99 SYSTEM

CPU i7-5960X overclocked @ 4.5 GHz
ASUS X99 Deluxe
32GB (2x8) DDR4 Corsair Vengeance LPX @ 2667
Windows 7 Ultimate 64-bit | Blender 2.75a
Same GPUs as above

Auto Tile Size add-on, 240x180 - Cache BVH, Persistent Images, and Use Spatial Splits all OFF

Time: 27.15 (GeForce GTX TITAN X + Z (2x) - CUDA)
Time: 39.43 (GeForce GTX TITAN Z (2x) - CUDA) (a single card, but its two GPUs count as 2x)
Time: 1:00.31 (GeForce GTX TITAN X - CUDA)
Time: 2:23.63 (CPU)

Auto Tile Size add-on, 480x270
Time: 28.97 (GeForce GTX TITAN X + Z (2x) - CUDA)
Time: 51.43 (GeForce GTX TITAN X - CUDA)

Auto Tile Size add-on, 960x540
Time: 32.46 (GeForce GTX TITAN X + Z (2x) - CUDA)
Time: 55.97 (GeForce GTX TITAN X - CUDA)

So despite what gamers say about the Titan Z, it was the fastest GPU here, about 13 seconds ahead of the Titan X and the 980 Ti, which were basically the same speed.
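
If anyone wants to reproduce runs like these, here is a rough sketch using Blender's Python API (2.80-era property paths, which differ from the 2.75a build used above; the tile values are the ones from the first test):

```python
# Rough sketch, Blender 2.80-era API (property paths differ in 2.7x builds):
# enable CUDA rendering, set a fixed tile size, and time one render.
# Run from Blender's Text Editor or with: blender -b scene.blend -P script.py
import time
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'   # Cycles renders on the CUDA devices
prefs.get_devices()                  # refresh the device list

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'
scene.render.tile_x = 240            # 240x180, as in the first test above
scene.render.tile_y = 180

start = time.time()
bpy.ops.render.render(write_still=False)
print("Render time: %.2f s" % (time.time() - start))
```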

Will 2x GTX 970 give me faster render speed than 1x 980 Ti? Right now I can only buy one 980 Ti, or two 970s for the same price.

How exactly does using multiple graphics cards work? Can you just add in as many graphics cards as will fit, and their performance gets added up? Or is there a maximum, or certain things to watch out for, like compatibility, to avoid losing performance?
And where in a motherboard's description do you see how many graphics cards fit into it?

You need free PCI Express slots on the motherboard to insert additional video cards.


If you use same-brand graphics cards, without any SLI, they will render together in Blender. So if you have three Nvidia cards installed, Blender can use any one of them or all three.

Now I didn't bother about it before in this thread, but someone on CG Cookie told me that getting an AMD card is better, because then I can select only the AMD card in Blender's preferences and keep the Nvidia Quadro separate, just for display. If I buy all Nvidia, it will club them all together, so my Quadro might actually slow the render down when clubbed with two 970s or one 980 Ti.
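
For what it's worth, recent Blender builds do let you tick and untick each CUDA device individually in the preferences, so a display-only card does not have to join the render. A rough sketch of the same thing through the Python API (2.80-era paths; matching on the name 'Quadro' is just illustrative):

```python
# Hedged sketch, Blender 2.80-era API: enable every CUDA device except
# the display card, matched here by name ('Quadro' is illustrative).
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'
prefs.get_devices()

for dev in prefs.devices:
    if dev.type == 'CUDA':
        dev.use = 'Quadro' not in dev.name   # leave the Quadro for display
        print(dev.name, '-> render' if dev.use else '-> display only')
```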

Does anyone have experience with the AMD ATI R9 390? I heard AMD cards were causing problems with Blender, but they work fine now with the new version?

edit: After doing some research, I find the R9 390 takes up three slots and consumes more electricity than a 970 or 980 Ti. So it's good for performance, but not if you are building a render farm.
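
One more point if you do go AMD: Cycles drives those cards through OpenCL rather than CUDA, so the device type has to be switched accordingly. A minimal sketch (again 2.80-era Python API, assuming an OpenCL-capable driver is installed):

```python
# Minimal sketch, Blender 2.80-era API: select OpenCL so an AMD card
# (e.g. an R9 390 or RX series) can render in Cycles.
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'OPENCL'   # CUDA is Nvidia-only; AMD uses OpenCL
prefs.get_devices()

for dev in prefs.devices:
    if dev.type == 'OPENCL':
        dev.use = True
        print('Rendering on:', dev.name)

bpy.context.scene.cycles.device = 'GPU'
```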

I wish there was a Blender rating system for video cards. I have the GeForce GTX 980 and it doesn't render Cycles in either 2.79b or the newest build of 2.80. A rating system by the Blender Institute would keep users from wasting their hard-earned cash on tech that doesn't work.

The GTX 980 works perfectly in Blender, in both Cycles and Eevee. Open a new thread in the technical support section describing your problem and mentioning the OS that you have.

I’m sorry to say you are wrong. I have a twin Xeon 2670 CPU arrangement with 32 GB of RAM running Win 10 Enterprise and said video card. It crashes every time.

Something is not working properly on your end; maybe use DDU to uninstall the drivers and reinstall them. My 600-series mobile card works fine.

And remember that 2.80 is still in alpha.

If you do not open a new thread explaining your problem and giving details of your OS, driver versions, and hardware, we cannot help you much.


No more than you are. It works for others.

Redo your drivers as suggested.

Edit: Another thing is to troubleshoot with a basic, low-memory scene. CUDA out-of-memory errors shouldn't cause crashes, but just in case.

What? Why revive a post from October 2017?
The problem is long gone, but thanks. :+1:

Your revived post was evidence against the statement that a 980 doesn't work. Nothing more or less. :v:


Wow, I'm blind. Totally overlooked the context here, sorry LoboTommy! (long day)

And to support your statement: I'm using Win10 + a GTX 980, both at home and in the office.

So there's something else wrong, @The_Diver.
If you are willing to share more details, users would surely be willing to help! :slight_smile:

Just one GTX 980? Updated drivers?
Win+R => dxdiag => Display => Alt+Print (or “Save All Information…”) => paste here

From “experience”, the MSI AMD Radeon RX 580 8GB is pretty good.

Example:
I have an HP DL380 G6 server equipped with an Nvidia GTX 1050 Ti 4GB (ASUS, unplugged), and in a head-to-head render with the same settings on both systems the RX 580 is up to 4 times faster than the GTX 1050 Ti.
At first it looked like the Nvidia would be only half as fast (at 15%), but then the Radeon just rolled over it… But to be fair, the RX 580 is a little bit newer than the GTX 1050 Ti.