NVIDIA GTX 970 False Advertising (UPDATE: NVIDIA settles class action lawsuit)

@nSone

If you’re rendering a scene on a 3GB 780/Ti and go beyond 3GB, it will simply not render at all. Now, if you go beyond 3.5GB on 970, the worst thing that could happen is it’s going to become slower. “Slower” is still significantly faster than “it doesn’t work”. If you stay below 3.5GB, this issue should not affect you at all.

The 760 doesn’t have good price/performance and the 750Ti only has 2GB of RAM, which would be way more limiting than even 3.5GB.

If you can’t get a 780 (preferably with 6GB) at a really good price, the 970 is still the best option.

@BeerBaron thanks!
I’ll just wait a few more weeks in case some other info on the 970 pops up, and probably stick to my initial plan.

Pretty true observation. While a false statement made three times is not just an accident, for the price a 3.5 GB card is still pretty great. My GTX 570 later often failed to work at all. …

And over a year, the power consumption saved by the GTX 970 is great!
To be honest, if my PSU had enough power I would put a second 970 into the PC.

Thanks guys, I really appreciate your advice, and I hope I’ll finally be able to get into Blender and share some of my own work with you soon :slight_smile: see you around!

Actually, they’ve been raking in millions as of recent from the sales of the Xbox One, PS4, and Wii U (all of them use AMD hardware).

They’re behind, but luckily they are in a situation where they can afford the R&D needed to catch up with Nvidia (if they really care about staying in business).

How we could test whether this issue matters for rendering is by doing the following: take a scene that uses less than 3.5 GB of VRAM and one that uses more than 3.5 GB but less than 4 GB. Render both scenes on a 980 and a 970 and compare the drop between the cards. E.g. if the 980 takes 5% more time, how much more time does the 970 take?

I don’t have either the 980 or the 970, so I can’t test this :). But somebody else could, maybe?
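
For whoever has both cards, here’s a minimal sketch of how the comparison could be tallied; the render times below are placeholder numbers, not real measurements:

```python
# Placeholder render times in seconds -- replace with your own measurements.
times = {
    "GTX 980": {"under_3.5GB": 100.0, "over_3.5GB": 105.0},
    "GTX 970": {"under_3.5GB": 100.0, "over_3.5GB": 170.0},
}

for card, t in times.items():
    # Relative slowdown when the scene crosses the 3.5 GB mark.
    slowdown = (t["over_3.5GB"] / t["under_3.5GB"] - 1.0) * 100.0
    print(f"{card}: {slowdown:.1f}% slower on the >3.5GB scene")
```

If the 970’s percentage is much larger than the 980’s, the memory segmentation is hurting Cycles; if they’re similar, it isn’t.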

Bold statement there. In compute intensive tasks like Cycles the GTX 970 and GTX 980 consume just a bit less than the past gen cards.

Maxwell brought a nice efficiency jump while idling and gaming; not so much, or not at all, for compute-intensive loads.

Ah, do you CUDA render 100% of the day, every day?

Excuse me, good sir, but didn’t you bring “saved power consumption” to the table? The difference with almost any other card at idle is 10W max.

Let’s math.

10 W × 24 h × 365 d = 87,600 Wh, or 87.6 kWh a year.
87.6 kWh × $/€0.20 per kWh ≈ $17.5 saved in a whole year.

Keep in mind, too, that that’s the MAXIMUM you will save in a year. If your previous card was a 600 or 700 series card, the savings would be negligible.
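
For anyone who wants to redo that estimate with their own numbers, here’s a minimal sketch; the 10 W difference and $/€0.20 per kWh rate are just the assumptions from the post above:

```python
# Annual savings from a 10 W idle-power difference at 0.20 $/kWh.
watts_saved = 10               # W, idle-power difference between cards
hours_per_year = 24 * 365
price_per_kwh = 0.20           # $ (or €) per kWh

kwh_per_year = watts_saved * hours_per_year / 1000
savings = kwh_per_year * price_per_kwh
print(f"{kwh_per_year:.1f} kWh/year -> {savings:.2f} saved per year")
```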

Power consumption is the last thing on my mind when choosing a GPU; price and performance are much more important variables, but lately I’m not doing well with those either…
Reading the latest reports on the 970, it looks like it just falls back to system RAM when over 3.5GB, and it’s probably driver dependent, so that brings more reliability issues since someone needs to provide those fixes per application etc. That draws me further away from buying it now, and just a week ago I thought it was the best choice ever.
The 980 looks great, but that price is just way over budget for me at the moment, so it seems more likely I’ll have to play that stupid waiting game a few more weeks now.

They use Maxwell Teslas in heavy compute environments with hundreds and thousands of nodes, so power efficiency is a factor when you’re talking big.

Not 1 or 2 lonely cards in your workstation. Besides, the GeForce line is for gaming, so it features less optimal componentry than Tesla or Quadro.

The GTX 960 will have a 4GB variant as well.

That’s interesting. How well does Blender work with Tesla cards?

As good as with GeForce cards, I guess. Cycles GPU compute doesn’t use double precision, so it’s pointless to use a Tesla unless you render 24/7 for clients, such as renderfarms.

Currently there are only Kepler-based Teslas.

There are no Maxwell Teslas yet.

Why do people have to make things up?

What is interesting about the 970 is the GPU temperature.
Also, I don’t quite understand what the NVIDIA forum means when it says “3.5GB + 1GB”.
I’m now generating a render (13,801,691 triangles), the LuxRender dragon.


Hi Cekuhnen,

The best render / comparison test I found was using the GTX 970 and a 4GB GTX 760, both of which we purchased recently for some render boxes, on exactly the same scene. The 4GB GTX 760 in our experience renders two and a half times faster on any scene where memory usage is approaching the 3.5GB level. We purchased a standard 4GB GTX 760, not a Ti version. Nvidia have now discontinued the GTX 760 in Australia.

On lighter scenes, or scenes with just a few models where video memory consumption stays under 3.5GB, the 4GB GTX 970 renders much faster than the 4GB GTX 760, in fact nearly twice as fast. However, this situation reverses once you hit 3.5GB.

So as soon as you build a detailed character, rig it, place it in an environment of any detail etc., it’s quite likely you’ll exceed 3.5GB of memory usage, and then the GTX 970 will run slower than the 4GB 760. I’m working on a music video project with characters, character animation and space scenes, and we’ve optimised the scenes as much as possible, yet nearly all our scenes do tip over the 3.5GB memory usage.

So in actual render times, on for example the scene we had rendering last night: the GTX 760 renders the scene in 3.5 minutes, because it has a true 4GB limit and the card’s performance doesn’t clip, while the GTX 970’s performance drops dramatically over 3.5GB and it takes nearly two and a half times longer, i.e. between 8 and 9 minutes, to render the exact same scene.

So 3.5 minutes per frame for a complex scene render on the GTX 760 and 8-9 minutes for the exact same scene and frame on the GTX 970.

So our experience is that the more expensive card with more CUDA cores (the 4GB GTX 970) takes more than two and a half times longer to render the same frame as the 4GB GTX 760, and is a complete dog when it comes to rendering any scene with a moderate level of detail and texture usage.
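
Just to make the quoted ratio explicit, here’s the arithmetic worked out in a tiny script (the 3.5 minute and 8-9 minute figures are taken from the post above):

```python
# Per-frame render times reported in the post (minutes).
gtx760_minutes = 3.5
gtx970_range = (8.0, 9.0)      # reported range for the GTX 970

for t in gtx970_range:
    # How many times longer the 970 takes on the same >3.5GB frame.
    print(f"{t} min -> {t / gtx760_minutes:.2f}x the GTX 760's render time")
```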

There’s a petition at change.org signed by over 10,000 Nvidia customers highlighting Nvidia’s dishonesty and fraud in marketing the GTX 970 as a full 4GB VRAM graphics card. It isn’t. Nvidia deliberately downgraded the VRAM on the card to compromise it so that the GTX 970’s performance wouldn’t match the GTX 980, and their whole campaign for the GTX 970 has been false advertising throughout.

https://www.change.org/p/nvidia-refund-for-gtx-970

Shocking interview with Nvidia Engineer about the 970 fiasco

In terms of how many 4K textures it takes before this becomes an issue, and notwithstanding all the other factors like scene geo, rigs etc., the scenes we ran these tests on all involve a 4K scene background, two to three 4K maps on built scene geometry, and then about 4-5 4K maps on character and foreground elements. We’ve tried where possible to use 2K textures. However, consider a 3D workflow where you are trying to keep scene geometry low and therefore using normal and displacement maps to achieve high-quality-looking geo with low poly counts. A detailed character kept to maybe 15 thousand polys may still have a 4K displacement map, a 4K normal map, a 4K colour map, and possibly a selection of other maps (specular, AO etc.) set up at 2K to bring texture usage down. So it doesn’t take many image maps on one character alone to see where having a 3.5GB limitation on what is sold as a 4GB card becomes an issue.
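
As a rough illustration only, here’s a back-of-the-envelope estimate of what a texture set like that occupies in VRAM. It assumes uncompressed 8-bit RGBA (4 bytes per pixel) and map counts loosely taken from the description above; Cycles’ real footprint also includes geometry, the BVH and render buffers, so treat it as a lower bound:

```python
# Size of one square texture in MB, assuming uncompressed 8-bit RGBA.
def texture_mb(resolution, channels=4, bytes_per_channel=1):
    return resolution * resolution * channels * bytes_per_channel / 1024**2

n_4k_maps = 1 + 3 + 5      # background + scene geometry + character/foreground
n_2k_maps = 4              # specular, AO etc. at 2K (illustrative count)

total_mb = n_4k_maps * texture_mb(4096) + n_2k_maps * texture_mb(2048)
print(f"~{total_mb:.0f} MB in textures alone, before geometry, BVH and render buffers")
```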

A couple of people on this thread have dismissed this, saying “it’s a first world problem, who cares” or “who uses that much VRAM anyhow”. Those comments don’t address the issue. The point is that Nvidia advertised and misled its customers into purchasing what is supposed to be 4GB of VRAM headroom. This problem affects many, many people who do use and need the full 4GB of VRAM, from gamers to 3D artists. Hence more than 10,000 Nvidia customers have now signed the petition.

If anyone considering buying a card for 3D work reads this: you will probably be frustrated with this card when you build a scene of any complexity, and it’s definitely better to either opt up for a 980 or find a GTX 760 Ti on eBay. Even the standard GTX 760 4GB version (not the Ti) will outperform the GTX 970 by a huge margin on any scene of any complexity. I.e. if you are serious about 3D work and need a card that renders well, avoid the GTX 970.

So if the lower RAM speed messes up NVIDIA CUDA GPU calculations, then this is a serious fuck up.

Thanks for doing and posting your test. Its results speak for themselves. Contrary to common belief, the issue is not negligible at all, it seems.

Well less power = less heat which = less noise which = happier artists xD

If you don’t have images to back up what you are saying, how can we believe you? You didn’t list any other details about your “render boxes”. You didn’t tell us how you did the tests at all… Anyone can say they did something, but you need proof and knowledge of what’s actually going on before you can just claim things like this.

Also, rigs don’t affect render memory.
You can’t directly compare CUDA core counts between two different architectures (the 760 is Kepler, the 970 is Maxwell).

OMG the nvidia developer also works for bungie :open_mouth:

The GTX Titan, GTX 780, GTX 780 Ti, GTX Titan Black, GTX Titan Z, Quadro K6000, Tesla K20, Tesla K20X and Tesla K40 all use the same GK110 GPU, gimped by Nvidia in different ways to sell each one at a different price… If that doesn’t make you mad, IDK why this little memory problem would.