Thanks for the report, angus. If anybody else wants to try to do a proper test: use a tool such as GPU-Z to monitor VRAM usage, push your scene towards 4GB incrementally by adding geometry, and measure render times. Look for a big performance drop.
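To size such a test in advance, here's a rough sketch for estimating how many triangles it takes to hit a given VRAM target. The ~100 bytes per triangle figure is a ballpark assumption (vertex data plus BVH nodes), not an official Cycles number, so treat the output as a starting point only:

```python
# Rough sizing helper for a GPU-memory stress test.
# BYTES_PER_TRI is an assumed ballpark (verts + BVH), not an official figure.
BYTES_PER_TRI = 100

def tris_needed(target_gb, overhead_mb=300):
    """Triangles needed to push estimated VRAM use to target_gb,
    reserving overhead_mb for the driver/desktop/kernel (assumed)."""
    target_bytes = target_gb * 1024**3 - overhead_mb * 1024**2
    return int(target_bytes // BYTES_PER_TRI)

for gb in (3.0, 3.5, 4.0):
    print(f"{gb} GB -> ~{tris_needed(gb):,} triangles")
```

Then add subdivided meshes in steps around those counts and time each render, watching GPU-Z for the actual usage.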
Maybe this isn’t a properly done test, but you should still be grateful, because it’s better than anything the BA community has been able to produce so far (which is nothing). This info is good enough as it is; we don’t need any more “proof”, we need someone else to try to reproduce it. If it can’t be reproduced, then you can seriously doubt these results.
You can’t directly compare CUDA core counts between two different architectures (the 760 is Kepler, the 970 is Maxwell).
You can still compare relative render time.
The GTX Titan, GTX 780, GTX 780 Ti, GTX Titan Black, GTX Titan Z, Quadro K6000, Tesla K20, Tesla K20X, and Tesla K40 all use the same GK110 GPU, gimped by Nvidia in different ways to sell each one at a different price… If that doesn’t make you mad, IDK why this little memory problem would.
They sell each one at a different price, with different specs, but in those cases the specs are correct. You wouldn’t mind if your GPU suddenly dropped significantly in performance as you approached 3.5GB, as opposed to the advertised 4GB? Even if it weren’t a big deal in terms of performance, it’s simply false advertising, which is unacceptable. The performance difference between a 780 and a 780 Ti isn’t really a big deal either, but the 780 Ti still costs more money. If you don’t get why this is a serious problem, you’re either a sucker or a hopeless fanboy.
Yes, I also agree it’s a problem, but it’s something I would expect from Nvidia (the whole “GK110 for every GPU” thing is when I started to lose my patience with them)… It may be slower, but it does have 4GB, so they didn’t lie on that part. But yes, it has fewer ROPs and less cache than advertised.
I am definitely not a fanboy, I’m just upset with every tech company there is, and I eventually just stopped caring, because they will screw us with anything they do… If they sell 25 K6000s, they can pay one employee $100,000 a year (more than likely someone who didn’t even help develop the card). They aren’t innovating, they’re raking in money, and AMD is just pumping more power into their cards to keep up with Nvidia.
The thing people don’t keep in mind is that the memory usage Cycles reports to you isn’t the actual amount your GPU is currently using. You should definitely monitor it like you said, because on Windows you’re always using at least ~100MB for the desktop, and the driver’s overhead for handling the Cycles kernel uses more memory on top of that, as Sergey said in one of my bug reports. So when he says Cycles reported over 3.5GB, the GPU was more than likely using even more than that the entire time.
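To put rough numbers on that gap between what Cycles reports and what the card actually holds (all the overhead figures below are assumptions for illustration, not measurements):

```python
# Illustrative arithmetic only -- the overhead figures are assumed, not measured.
cycles_reported_mb = 3400   # what Blender's UI shows: scene data only
desktop_mb = 150            # Windows desktop/compositor VRAM (assumed)
driver_overhead_mb = 100    # CUDA context + kernel overhead (assumed)

actual_mb = cycles_reported_mb + desktop_mb + driver_overhead_mb
print(actual_mb)            # 3650 -- already past the 970's fast 3.5GB segment
```

So a scene that Cycles claims is comfortably under 3.5GB can still spill into the slow segment, which is why the GPU-Z reading matters more than Blender’s own figure.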
I would do tests, but my computer doesn’t have enough system RAM to get 4GB of geometry over to my GPUs. I’ll see if I can try with textures.
And yes, I am deffo a sucker, just like anyone else here who has ever bought compute hardware.
Well there you go, you pushed memory usage beyond 3.5GB and got drastically worse render time, supporting what angus has found.
Thanks for testing.
Update: One thing to keep in mind: the memory usage reported by GPU-Z may be the real usage, but that doesn’t mean the data can’t be shuffled around if necessary, just like games do between draw calls. The data used by a Cycles scene during a render pass, to my understanding, can’t be.
What should be tested now is whether the same effect happens when getting close to the memory limit on unaffected GPUs.
I’ve done some preliminary testing on a 780/3GB with a similar scene and I do find a significant degradation at the very edge of 3GB, but not before.
Right, but not all of that memory is in use by Cycles. If some other, less important data ends up in the slow segment, that doesn’t have to slow down the render.
It is not so important. The fact that they lied is bad, but even with only 3.5 + 0.5GB of VRAM, it’s still an excellent GPU. Unless Blender takes all 4GB, the Nvidia driver should leave the slow segment unused. Some programs even see the GPU as only having 3.5GB.
BTW, I do own an EVGA GTX 970 Superclocked with 4GB. I haven’t really used it much for Cycles, so I don’t know how well it performs, but I did notice high-pitched coil whine when using GPU acceleration on a HitFilm plugin (fire) in Sony Vegas.