Nvidia Titan X, 12 GB VRAM

Be careful when you purchase an Nvidia card based on the published specs. Nvidia misled its customers with false marketing claims that the GTX 970 had 4 GB of VRAM. It really didn't. I would be very skeptical about any claim Nvidia makes about their graphics cards now.

GTX 970 deffo has 4 GB of ram

Advertising the GTX 970 as having 4 GB of RAM isn't wrong. It's misleading, because you wouldn't expect RAM segments to run at significantly different speeds, but it's not technically wrong. However, advertising it as having 4 GB at 224 GB/s is wrong, and NVIDIA is still doing that. Apparently NVIDIA hasn't changed their mind about that; maybe a class-action lawsuit will change it for them.

You can already get a partial refund for them, they’ve already admitted that it’s a problem.

I haven’t heard anything official about partial refunds. Some sellers voluntarily offered returns. Apparently, certain large sellers also offer partial refunds, some in the form of gift cards.

they’ve already admitted that it’s a problem.

They’ve admitted it’s a “problem of communication”. They certainly didn’t correct their specs.

Well that is the right theoretical bandwidth. Maybe they should say “up to” like the ISPs do :wink:

No, it’s the wrong theoretical bandwidth. The “theory” behind it is:

memclock × memwidth = peak bandwidth

So, for a 256-bit interface at 7 GHz effective:
7 GHz × 256 bit = 1792 Gbit/s (= 224 GByte/s)

I know some publications will do some very creative accounting to tell you it is ‘theoretically correct’: when 3.5 GB sits on 224 bits and 0.5 GB on 32 bits, it supposedly adds up to 256 bits, even though it is physically impossible to use both segments at the same time. It doesn’t make sense, not even “in theory”.
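To make the arithmetic concrete, here is a quick sketch of that formula in Python. The 7 GHz effective memory clock and the 256/224/32-bit bus widths are the GTX 970 figures discussed above; the point is that neither segment on its own reaches the advertised 224 GB/s.

```python
def peak_bandwidth_gbps(effective_clock_ghz, bus_width_bits):
    """Peak bandwidth in GByte/s: effective clock (Gbit/s per pin) * bus width (bits) / 8."""
    return effective_clock_ghz * bus_width_bits / 8

# Spec-sheet figure: full 256-bit bus at 7 GHz effective.
advertised = peak_bandwidth_gbps(7.0, 256)    # 224 GByte/s

# The two segments of the GTX 970 (per NVIDIA's later disclosure):
fast_segment = peak_bandwidth_gbps(7.0, 224)  # the 3.5 GB partition
slow_segment = peak_bandwidth_gbps(7.0, 32)   # the 0.5 GB partition

print(advertised, fast_segment, slow_segment)
```

Since the segments cannot be read at the same time, actual peak bandwidth is at most the fast segment’s 196 GByte/s, not the advertised 224 GByte/s.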

The TITAN BLACK has 2880 cores, so 3072 isn’t a big jump at all, core-wise.
It’s rumored it’s going to sell for 999 US$ now. We’ll see if it’s true…
It should be 1 GPU with a true 12 GB of VRAM (hopefully all fully usable VRAM…).

It’s said that it’s about 90% of the speed of a TITAN Z (dual TITAN BLACK).
The bigger VRAM could be a big selling point here.

I just found that current builds of Blender/Cycles don’t show much difference on 580s vs. TITANs vs. 980s etc.: while some cards have 5 or more times the cores, they aren’t even 50% faster. I think there is tremendous room for improvement in the GPU multithreading code and/or drivers. Maybe it’s better to bypass the driver’s multithreading and try building your own…?

Best
Axel

True, but that’s 2880 Kepler cores versus 3072 Maxwell cores.
Maxwell is around 50% more efficient, and not only in performance; hopefully it will also be a cool-running card compared to Fermi.

But the big thing, the major thing, is 12 GB of VRAM. That’s on the level of Tesla cards.

Imagine a Titan Black with 4000+ cores.

This is highly theoretical, but from the 980/970 it’s been inferred that the Maxwell architecture is ~50% better in performance than Kepler.

Anyway, let’s see; it’s a week away. I just saw on Facebook that a Swedish site got their Titan X, and they usually test-run Blender for CUDA performance.

What’s the name of that site? :smiley:

$1350 would be very low for a 12 GB card. The VRAM modules alone cost a bucketload of cash.
If they place it around $1350 it would also be too close to the price of some 980s. And they want to make maximum profit. And people are willing to pay quite a lot for 12 GB (a full 12 GB, not in SLI).
So they are going to push the price to at least $1500. But it might fall to around $1200 within 3 to 6 months.

Lordodin, you should know :wink: Odin and all.
http://www.sweclockers.com/

It’s a Swedish site, but I think you can use Google Translate.

They’re saying that the usual Friday vodcast is postponed; one of the guys got an important call, and they’ll be back next week. Nvidia has their GPU Technology Conference on March 17-20; after that I expect benchmarks to go public.

Hipster Odin only speaks English, lol.

Ah, don’t be sad, hipster Odin; lots of the words in English were directly caused by original Odin followers :stuck_out_tongue_winking_eye: like GIVE and TAKE. A bit off-topic.

Anyway, tomorrow the GPU Technology Conference starts. I suspect that after the 20th, sites will be able to start publishing their tests of the Titan X.

Also, I highly suspect the “promised” 8 GB versions of the 980 will start to emerge. But man, do I need this card; I’ve been saving up all winter. My 580s are overheating and causing hard shutdowns when rendering intense scenes.

Watching the live event on Ustream, Jen-Hsun mentioned that the Titan Z was faster in double precision than the Titan X. So it’s crippled there, but in SP hopefully it’s going to be fast.

Question is, does Cycles use DP? I think not, but maybe someone knows.

It seems almost twice as fast as the Titan Black using cuDNN. Guess that’s about neural-network GPU computing.

It’s gonna cost $999

They announced it’s $999. Any reason this shouldn’t be the top of my list when I (eventually, maybe late this year) build a new rig? 12 GB = big scenes; right now I’m on 1.25 GB and I can’t even use the experimental kernel.

Cycles does not use double precision.

Just joined the stream at deep learning and self driving cars… between this and the next Linux release being 4.0, Skynet is not that far off :stuck_out_tongue:


(sry, off-topic)

There is a compute benchmark of the Titan X at $999 here: http://www.anandtech.com/show/9059/the-nvidia-geforce-gtx-titan-x-review/15. Hope it will be helpful.