GeForce GTX 980 & 970 & Lunar Landing Conspiracies with Maxwell VXGI

Not the whole system; the card "adds" the 400 watts to whatever else is in the system (total around 600 watts). So if you have 4x GTX 980 vs 4x GTX Titan (or 780…), you are pretty much consuming 500 watts less in total (like 1.5 cards' worth less consumption!), so a 1500 watt power supply may be able to handle 4x GTX 980. Needs testing.
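To make that back-of-the-envelope math concrete, here's a minimal Python sketch. The per-card and rest-of-system wattages are illustrative assumptions, not measured figures, so plug in your own numbers:

```python
# Rough PSU headroom estimate for a multi-GPU render box.
# All wattages below are illustrative assumptions, not measurements.

CARD_POWER_W = {
    "GTX 980 (reference spec)": 165,   # assumed board power
    "GTX Titan / 780": 250,            # assumed board power
}

REST_OF_SYSTEM_W = 200   # assumed CPU, drives, fans, motherboard
PSU_CAPACITY_W = 1500
PSU_SAFETY_MARGIN = 0.8  # common rule of thumb: keep sustained load under ~80%

def total_draw(card: str, count: int) -> int:
    """Estimated sustained system draw with `count` identical cards."""
    return REST_OF_SYSTEM_W + count * CARD_POWER_W[card]

for card in CARD_POWER_W:
    draw = total_draw(card, 4)
    ok = draw <= PSU_CAPACITY_W * PSU_SAFETY_MARGIN
    print(f"4x {card}: ~{draw} W total, "
          f"{'fits' if ok else 'does NOT fit'} a {PSU_CAPACITY_W} W PSU "
          f"at {PSU_SAFETY_MARGIN:.0%} sustained load")
```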

I think they forgot something from that moon landing scene - Earth. If you land on the visible side of the Moon, then Earth should be in the sky all the time, right? Even if only some of it were lit, Earth is quite a reflective planet. So they didn't add a secondary light source, which was also missing from the studio shoots of the "real" moon landings.

I think there must be something wrong with how you've calculated the power consumption for your graphics card; the AMD 295X2 consumes just under 450W at full load, and that's a dual-GPU card that requires water cooling.

Is this 450W in games, or in GPU rendering (like LuxRender)?

I got the number from Tom's Hardware. They used GUIminer (two instances, one for each GPU) and measured it for 120 seconds.

Link: http://www.tomshardware.com/reviews/radeon-r9-295x2-review-benchmark-performance,3799-15.html

This may need testing with a render engine (a must), as memory reads/writes, divergence, etc. are quite high in a render engine; that's where the real stress on the GPU appears.
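If someone wants to run that test, here's a rough sketch of how you could log actual power draw while a render is going, assuming an NVIDIA card and a driver recent enough that nvidia-smi exposes power.draw. Run it alongside the render engine:

```python
# Poll GPU power draw while a render is running, using nvidia-smi.
import subprocess
import time

def gpu_power_draw_watts():
    """Return the current power draw readings (one per GPU), in watts."""
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=power.draw",
        "--format=csv,noheader,nounits",
    ], text=True)
    return [float(line) for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    # Sample every 2 seconds for 2 minutes, similar to the 120 s GUIminer run above.
    samples = []
    for _ in range(60):
        samples.append(sum(gpu_power_draw_watts()))
        time.sleep(2)
    print(f"peak: {max(samples):.1f} W, average: {sum(samples)/len(samples):.1f} W")
```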

I was browsing the Octane forum today. In one thread folks were raging about the poor performance of Maxwell cards and came to the conclusion that the only sensible thing to do is buy 780 Tis while they are still available, but in another thread someone wrote that the devs from Redshift found out that the poor performance was caused by bad drivers. Allow me to quote the quote from that thread:

"As some of you might have read elsewhere on these forums, Redshift isn't currently running well on GTX970/980. The good news is that we know why that is and are currently testing a fix.

"The reason for the slowdown appears to be driver-related (with a possibility that the Windows driver model is responsible) but thankfully we know how to bypass it. We’ll be contacting NVidia with our findings but, given that we might not receive a helpful answer from them, we’ll most likely go ahead with the fix - if we detect these videocards. If you’re curious, the issue has to do with the driver-side memory management.

So how does the GTX970 perform after the fix?

Well, testing with “ray reserved memory” left at zero (the default), reveals that the new GPUs are indeed as fast as they claim to be. When compared against an artificially-memory-limited K6000 (it normally has 12GB, we limit it to 4GB to make it ‘equal’ to the 970), the GTX970 is actually running faster! It’s not faster across the board: certain aspects of the code run faster on the K6000 due to its superior memory bandwidth and more cores but, thankfully, there is enough code in Redshift that does care about Maxwell’s architectural improvements. So the final result is indeed favorable for Maxwell. The gains we see so far range between 10% and 20% which are not earth-shattering but considering that we’re talking about a videocard that costs around $330-$380… it’s pretty impressive! :) The 980 should offer even greater benefits.

Since we only bought our GTX970 today, we obviously haven’t had the time to do exhaustive benchmarking. We’ll try to do more of that in the following days. Some of you already own these cards so I’m sure you’ll be posting your results too."

source: http://render.otoy.com/forum/viewtopic.php?f=9&t=42475&sid=b0226bef1830e1e8bb2f6bc5aa4d5735&start=50

This is exactly what I'm saying. From my CUDA development perspective, I can see that Maxwell is a beast (it is designed in a way similar to the GTX 580 / Fermi), but it can do all the ninja stuff that the GTX 780 (Kepler) can do. In fact, if someone designed good kernels for Maxwell, it should run near 2x faster than a GTX 780 in ray tracing.

@MohamedSakr
2x faster is maybe a little too optimistic, but 2x 4GB is quite probable :)

Are 6-pin vs 8-pin power connectors an issue?

So the power savings due to load handling don't happen when doing 100% continuous-load rendering? Hmmm.

I have a GIGABYTE GTX 770 WindForce (factory OC). This card (Rev.2) has 2x 8-pin power connectors.

The GB WindForce 970 has gone back to 1x 6-pin + 1x 8-pin connectors.

The reference 970/980s have 2x 6-pin power connectors? I only did a quick search… is this true?

Does it matter? More heat (when rendering)?

-LP

It will only consume what it uses. The extra pins only really matter for stability if you're overclocking.

I managed to make time to read that whole article and do some googling today.

There is a chart in the article (Power Consumption Under Max. Load) showing the Reference 980 exceeding specs of the connectors.

Specs:
75W (slot) + 75W (6-pin) + 75W (6-pin) = 225W
75W (slot) + 75W (6-pin) + 150W (8-pin) = 300W
75W (slot) + 150W (8-pin) + 150W (8-pin) = 375W

So the ref 980 is a 225W card which pulls 234W under max continuous load according to the chart. That's 9 watts over the spec of its connectors. Under that chart is the comment “The reference card’s two 6-pin PCIe power connectors start looking a bit out of place in this context, as they might just not be enough”. I'm trying to figure out exactly what that means.
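Just to spell the connector math out, here's a tiny sketch; the 234 W figure is the one quoted from the chart above, and the rest is just the spec numbers from the list:

```python
# PCIe power delivery spec vs. the measured draw quoted above.
PCIE_SLOT_W = 75
SIX_PIN_W = 75
EIGHT_PIN_W = 150

def spec_limit(*aux_connectors):
    """Total rated delivery for the slot plus the given aux connectors."""
    return PCIE_SLOT_W + sum(aux_connectors)

reference_980_limit = spec_limit(SIX_PIN_W, SIX_PIN_W)   # 2x 6-pin card
measured_max_load = 234                                   # figure from the chart

print(f"rated: {reference_980_limit} W, measured: {measured_max_load} W, "
      f"over spec by {measured_max_load - reference_980_limit} W")
# rated: 225 W, measured: 234 W, over spec by 9 W
```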

I'm not sure what 10W over spec will do, but probably no fires or melting connectors or scorched cards. It will make more heat though, I'm sure of that. How much more heat, and whether it will affect performance, is what I'm curious about.

Right under that “might just not be enough” comment is “Finally, let’s take a look at the detailed measurements. The much smaller load adjustments are illustrated graphically below. Pay particular attention to the reference card’s small drops when it hits the thermal limit”.

Unfortunately the chart for the reference card is missing on the webpage, but regardless of that: is the thermal limit being hit because of the connectors? Is that what they meant?

Edit: I’m new to high-end video card tech, hence the questions about what I think this article is telling me. I think it’s telling me the Nvidia GTX-980 reference card is no good for Cycles rendering because of performance throttling due to heat from inadequate power connectors.

I’ll find out soon. If true I reckon poor performance for $600 will generate some discussion here at BA by some disappointed Blenderers.

-LP

The Stress Test Power Consumption section is mid-page here:

Well for cryin’ out loud. I just figured out there’s 2 versions of the same article from the same day.

The original:

The corrected version:

Our original Nvidia GeForce GTX 980 reference sample suffered from a BIOS issue that caused a higher-than-expected power draw. We flashed the card with the reference BIOS and have updated the charts below with the new results.

One would think that they would find a way to post that edit statement on the original article page! Sheesh.

So the new GeForce GTX 980 Reference PCIe Total is 122.70 watts, not 233.64, at max load? Over 20 watts lower than the gaming load? I doubt that. I can't trust anything in this article now. I don't think they know what they are doing.

-LP

After the 8600M GT and 9650M GS and all those defective cards they sold customers over many years and generations (http://www.nvidiadefect.com/ http://www.geek.com/apple/apple-loses-court-case-on-defective-nvidia-gpus-in-macbooks-1484061/ http://www.tomshardware.fr/articles/NVIDIA-8600M-8400M,1-38342.html http://www.tomshardware.com/news/nvidia-geforce-faulty-defect-gpu,7795.html), I wouldn't trust any of these tests and would wait for some months of feedback from customers. I was one of those unlucky customers and didn't have the money and time for many years in court, so I just had to accept that I lost a lot of money.

@LarryPhillips
It's a shame that they didn't redo the whole suite of benchmarks to see what the performance is on the new BIOS.

I did have a reply for your previous posts, but it took so long to write that I was signed out and lost everything I had written. I'll rewrite it when I get home from work.

Can you back up this claim that NVIDIA sold faulty chips “over many years and generations”? All these links point to essentially the same issue within a single generation. I’ve been an NVIDIA customer for 8 generations, over which only a single card broke down.

If you buy something, there’s most likely a legal requirement for minimum warranty, depending on where you live. If your product breaks down after that, you’re screwed. That’s just the way business works, nobody is going to sell you something that lasts forever.

About the power draw issue: You may have noticed that NVIDIA never uses the term “TDP” in their specs, it uses “Graphics Card Power”. AMD once invented the term “ACP” (average CPU power) because it didn’t like how the high TDP of their CPUs looked, next to the low TDP of Intel CPUs. It’s all marketing bullshit. To be fair though, the measured stress-test value for the reference 980 is only about 10W higher than advertised. The overclocked Windforce does come with dual 8-pin connectors, so it’s not outside spec either.

Oh, don't go to too much trouble. I'm just fuzzy about whether too much heat from the connectors can cause a card to throttle. It all boils down to the question: "Will a reference 980 card have performance issues when rendering in Cycles due to the 6-pin connectors?"

Personally I’m in no hurry to find out because I just bought my Windforce 770 last month. I purposely didn’t wait for the 900’s in spite of all the hoopla about power savings and performance. I can’t afford to gamble on hope.

On the other hand, I hate to see folks who do jump in first get disappointed. Blowing $500 can be very disappointing.

-LP

Sorry, I forgot about this thread again. When we talk about thermal throttling, the area in question is the actual GPU die and some of the on-board power regulation chips. The connectors are, relatively speaking, far from the die itself, so the extra heat in the connectors isn't an issue in terms of throttling. (TBH I don't think the system even has a way of knowing how hot the connectors are.)

Unfortunately tom’s hardware didn’t redo all the benchmarks but I think with the 6-pin connector on the reference board the card may have trouble sustaining the clock rates due to the voltage regulation that would try to limit power draw.

The best way to test it is to find someone with a reference card and bench it against a non-reference card on a scene that takes longer than 30 minutes to render, and see if there is a large disparity between render times.
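A rough sketch of how that comparison could be scripted from the command line; scene.blend and the frame number are placeholders, and it assumes Blender is on the PATH. Run it once on the reference card and once on the non-reference card, then compare the times:

```python
# Time the same Cycles render headlessly and report wall-clock time.
import subprocess
import time

BLEND_FILE = "scene.blend"   # placeholder: a scene that renders for 30+ minutes
FRAME = 1

def render_seconds():
    """Render one frame with Blender in background mode and return the elapsed time."""
    start = time.time()
    subprocess.run(
        ["blender", "-b", BLEND_FILE, "-f", str(FRAME)],
        check=True,
    )
    return time.time() - start

if __name__ == "__main__":
    t = render_seconds()
    print(f"render took {t / 60:.1f} minutes")
    # If the reference card throttles, its time should drift noticeably above
    # the non-reference card's on a long render, even with identical clocks on paper.
```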

Two of my friends had an 8600GT with that problem (see previous links), and a generation later I bought the 9650M GS, which had the same problem: https://www.google.de/search?q=overheating+9650m+GS
Nvidia rebranded the G84M chips across three generations, from the 8xxx series through the 9xxx series to the 1xx series: http://www.notebookcheck.net/NVIDIA-Quadro-NVS-140M.4216.0.html and http://www.notebookcheck.net/NVIDIA-GeForce-8600M-GT.3986.0.html
Even Nvidia's insurer didn't want to cover it: http://www.law360.com/articles/104297/nvidia-wants-to-stop-insurer-s-defective-card-case

That’s what I thought. That’s why the remark “2x 6pin not enough” in that article followed by “watch it hit the thermal limit” in the test charts really threw me.

"Unfortunately tom's hardware didn't redo all the benchmarks but I think with the 6-pin connector on the reference board the card may have trouble sustaining the clock rates due to the voltage regulation that would try to limit power draw."

Aha! Now your earlier reply about stability makes sense to me. Thank you for telling me the why part. That’s the part I was missing and I couldn’t find after much searching (in the wrong places, obviously).

Thanks again.

-Larry