Nvidia unveils Pascal; ushering in the era of slimmed-down GPU units

On the heels of AMD’s unveiling of the Fury X chip, what I’m really liking here is how both companies are pushing a trend of shrinking board sizes, shrinking power consumption, and shrinking heat output. Hopefully this next generation of cards will mean ultra-fast GPGPU performance without the expensive cooling systems that otherwise need to be present.

On that note, we have also recently heard of Intel bringing the Xeon Phi a step closer to being a consumer-level product, so we could be seeing a three-way race in home supercomputing cards, creating a future where our current machines seem slow in comparison.

About time, too; gotta give people a good reason to start buying new machines every three years again, rather than the 5 to 10 years we’re seeing now.

From what I read in the article, the architecture can potentially support up to 32 gigs of memory. Now all that needs to happen is a real expansion of the breadth of instructions it can take, and we will see a revolution.

Comments?

You talk about shrinking board sizes / power consumption / heat output… these are side effects of shrinking the transistor: the smaller the transistor, the less power it needs, and less power means less heat.

That being said, the smaller you get, the more you deal with phenomena such as quantum tunnelling, something that 10 nm transistors are already experiencing…

It is estimated that 5 nm transistors will be the smallest we can get with current processor technology… once they hit that, the way chips will get faster is by putting more transistors (cores) on the die… more cores = more power… more power = more heat again.
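For anyone who wants the back-of-the-envelope version: the standard CMOS dynamic-power relation (textbook scaling, not something from the article) captures both halves of this argument, where α is the switching activity, C the switched capacitance, V the supply voltage, and f the clock frequency:

```latex
P_{\text{dyn}} \approx \alpha \, C \, V^{2} f
% A smaller transistor has lower switched capacitance C (and,
% historically, a lower supply voltage V), so each switch costs less
% energy: smaller transistor, less power, less heat. But refill the
% freed-up die area with more transistors and the total power per
% square millimetre holds steady or climbs, so the heat comes back,
% now concentrated in a smaller area.
```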

EDIT: Also, your heading is quite clickbait… “Brings an end to the era of monstrous GPU sizes”… no, it does not. The PCIe graphics card form factor will be around for many years to come; it is not going anywhere soon.

Hopefully by then we’ll have moved beyond the silicon-based chip to one based on DNA, optics, nanotubes, or graphene (I know HP, at least, is actively working in this area of alternative computing technologies).

Right now, all of those alternative transistor technologies are really expensive… I would estimate that we will get to 5 nm and just hold there for about 5-10 years before switching to other forms.

I think half of the people on this forum would jump at the chance for a 32 gig vid card, even if it came in a box you had to mount next to your tower.

What instructions? To my knowledge, the instruction set of PTX is fairly complete; what’s holding back GPU versions of most algorithms is figuring out how to make something run in 10,000 threads without consuming 10,000 times the memory (see the sketch below).
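To make that concrete, the usual workaround is to give scratch memory to thread *blocks* rather than to individual threads. Here is a minimal CUDA sketch of my own (a generic shared-memory histogram, not anything from the article): about 16,000 threads share 64 small buffers instead of each thread carrying its own, so scratch memory scales with the block count, not the thread count.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

#define NUM_BINS 256

__global__ void histogram_shared(const unsigned char *data, int n,
                                 unsigned int *global_hist)
{
    // One histogram per block, in fast on-chip shared memory,
    // instead of one per thread.
    __shared__ unsigned int block_hist[NUM_BINS];

    // Cooperatively zero the block's histogram.
    for (int i = threadIdx.x; i < NUM_BINS; i += blockDim.x)
        block_hist[i] = 0;
    __syncthreads();

    // Grid-stride loop: every thread in the block updates the same buffer.
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        atomicAdd(&block_hist[data[i]], 1u);
    __syncthreads();

    // Each block merges its partial result into the global histogram once.
    for (int i = threadIdx.x; i < NUM_BINS; i += blockDim.x)
        atomicAdd(&global_hist[i], block_hist[i]);
}

int main()
{
    const int n = 1 << 20;  // 1 MB of dummy input data
    unsigned char *d_data;
    unsigned int *d_hist;
    cudaMalloc(&d_data, n);
    cudaMalloc(&d_hist, NUM_BINS * sizeof(unsigned int));
    cudaMemset(d_data, 7, n);  // every input byte is 7
    cudaMemset(d_hist, 0, NUM_BINS * sizeof(unsigned int));

    histogram_shared<<<64, 256>>>(d_data, n, d_hist);  // 16,384 threads

    unsigned int hist[NUM_BINS];
    cudaMemcpy(hist, d_hist, sizeof(hist), cudaMemcpyDeviceToHost);
    printf("bin 7 = %u (expected %d)\n", hist[7], n);

    cudaFree(d_data);
    cudaFree(d_hist);
    return 0;
}
```

With 64 blocks of 256 threads, the scratch space is 64 × 1 KB rather than 16,384 × 1 KB; the thread count grows without the memory footprint following it.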

Anyway, this is exciting news; thanks for sharing it.

Thing is, you’re already getting a glimpse of the heat dissipation problems on the newest Intel CPUs. The heat is so concentrated that you have trouble getting rid of it, even though there’s way less of it than in previous generations.

You’ll see that both GPU vendors will have to pack a closed-loop liquid cooler (CLL) for the highest end, as current heatsinks won’t cut it anymore.

32 gigs of HBM would price it out of the amateur’s price bracket; you would be talking Quadro or FirePro prices. Few here can even afford a Titan. I don’t think I have seen a FirePro or Quadro in the benches posted on here.

I have seen a few people buy older Quadros just because they have the Quadro name… they cried a little soon after seeing benchmarks, lol.

Quadros are more than a name: 30-bit color output, more RAM, more DisplayPort connectors, lower profile, better build quality, better drivers for pro apps. If I were running a GPU compute farm I would outfit the servers with Quadros, as they would likely handle the duress of running at 100% 24/7 better than a GeForce would.
Most hobbyists wouldn’t really find those kinds of benefits worth the significantly higher prices, though.

Eh, I’ve used a couple of different Quadros, and I disagree. I use AutoCAD extensively at work, and I notice no performance difference between running it on a Quadro or on a GeForce. More color depth would be nice if I had a monitor that supported it (and it’s irrelevant for a GPU compute farm). Build quality feels pretty similar to me, and I’ve never had longevity issues with my GeForce running in parallel with the Quadro. I haven’t been running it through 24/7 rendering, but I do use it extensively.

On the BMW benchmark, I get 3:34 on my 750 Ti 2 GB ($125) and 2:55 on my K4200 4 GB ($785). That’s 22% faster on the Quadro for 6.3× the cost. Compensating for the performance difference, it’s about 5 times more expensive for the same level of performance. Even if there were longevity issues, just buy 5 times as many GeForces and throw them away when they burn out; you will still come out ahead.
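For anyone checking my arithmetic (times converted to seconds: 3:34 = 214 s, 2:55 = 175 s):

```latex
\frac{214\,\mathrm{s}}{175\,\mathrm{s}} \approx 1.22 \qquad
\frac{\$785}{\$125} \approx 6.3 \qquad
\frac{6.3}{1.22} \approx 5.1
% ~22% faster, at ~6.3x the price, so ~5.1x the cost
% per unit of performance.
```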

I didn’t say anything about performance. GeForce and Quadro are essentially the same chips, except for a few obscure features disabled on the GeForce. While I have heard of performance gains in 3D apps with Quadros, overall I’ve not noticed a big enough difference to justify the cost. Many of the other advantages, however, really do matter to some people. If they don’t matter to you, then good: you will save a big chunk of money and still have what you need. My only argument is that the Quadro line offers potentially important advantages depending on your needs. It’s not just a name.

From a GPU-accelerated rendering standpoint (the context of this conversation), I see no benefit in investing in a Quadro. If you are considering purchasing a GPU with the intention of accelerating Cycles renders, a Quadro should not be on your list.

Yes, it is more than a name; there is more than just marketing in there (though there is an awful lot of that, too). But for almost anyone on this forum, they don’t bring much to the table.

Google the AMD Nano. Last gen you needed a full tower; thanks to stacked HBM you can fit it in a mini tower. It got a 14-inch card down to about 8 inches, only 1.5 times as long as it is wide. That’s more the result of HBM than the actual processor. With HBM, even standard cards will only be twice as long as they are wide: half the card length used to be GDDR5, and that can now be eliminated.

The AMD S9170 with 32 GB is just coming out, but who can afford cards at $4000 or more? And besides RAM capacity, what would be the advantage of a single card that is actually slower than the gaming equivalent?

Anyone in their right mind knows that instead of buying 1 Quadro, you could afford to break 5 Titans… and the best Quadro has the same amount of RAM as the Titan X.

Also, I know you already changed the thread title, Ace. But seriously, “GPU units”?

Do you go to the ATM machine and enter your PIN number?

The two big highlighted answers are what may matter to you. Hopefully this settles the Quadro matter.
The only thing I didn’t read there (though I may have overlooked it) is that, while they do state Quadro cards use the same chips as GTX cards, those chips are in fact downclocked to improve durability at the cost of speed.

As for the FirePro, I’ll leave that half of the googling work to you :wink:

Unless money holds absolutely no value to you because you have tons of it, GeForce will always be the better deal, at least until Nvidia makes their marketing less evil.

The main thing with Quadro is that it can do continual renders for long periods of time. That is why an animation studio with its own farm, or a farm owner, would use Quadros. Titans and GeForces would just burn out, whereas the Quadro would keep on ticking. Longevity is the main reason they are bought, which was Fahr’s main point, I believe.

You’re right, though: for regular people it is more advantageous to get a Titan. People who buy a Quadro for personal use probably just don’t understand this and buy into the marketing.