>Nvidia is employing TSMC for the fabrication of its GPUs, and we also know their estimates for the performance increase with the jump to 16nm FinFET. 16FF+ will be approximately twice as dense and will be able to reach higher clock speeds than before. Nvidia itself has stated a performance jump of 2x.
>Pascal GPUs are slated to be pretty damn amazing. They will feature 4X the mixed precision performance, 2X the performance per watt, 2.7X the memory capacity & 3X the bandwidth of Maxwell. We will be seeing the GP104 flagship this year, and GP100 as well near the end of it.
GTX 1070/1080 (GP104) with GDDR5X will beat the GTX 980 Ti/Titan X (GM200)
GTX 1080ti (GP100) with HBM2 is for later
AMD might be able to release their GPUs first this time around. Depending on how fast they are, they could have a half-year head start.
Maybe they'll finally win back all that market share they lost.
>At least this time they were honest about it.
NVIDIA always brags about how their newer tech is better than old tech.
If anything, they lie the other way. The "970 is superior to the 780 Ti SO UPGRADE NOW!" line, when it's actually worse in some rendering workloads due to having fewer CUDA cores.
Yeah. But there's only so much they can do before a 980 Ti can't handle those games at those resolutions... I mean, to this day a 970 can do 90% of what a 980 Ti can at those resolutions.
So is GDDR5X.
According to AMD, the low memory capacity could be worked around easily with optimisations nobody had bothered doing before, because of the way GDDR5 is used.
Not sure if they actually did though.
It was a show stopper.
It came along, first generation, offering vast improvements in power usage, a much smaller footprint, and far more bandwidth per watt than GDDR5, with high density to top it off.
It showed the market "We've been busy, look at our new toys!" and spurred another shift in memory technology. It opened the gateway to a whole new world of GPU design possibilities in itself. All while serving as a stop-gap until HBM2 arrives.
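The bandwidth advantage is easy to sanity-check from public specs: first-gen HBM trades per-pin speed for an extremely wide bus. A rough sketch, using the Fury X (4 HBM stacks) and 980 Ti (GDDR5) as reference parts:

```python
# Back-of-envelope peak memory bandwidth comparison.
# Figures assumed from public specs: Fury X = 4 stacks x 1024-bit HBM at
# 1 Gbit/s per pin; 980 Ti = 384-bit GDDR5 at 7 Gbit/s effective per pin.
def peak_bandwidth_gbs(bus_width_bits, per_pin_gbps):
    """Peak bandwidth in GB/s = bus width (bits) * per-pin rate (Gbit/s) / 8."""
    return bus_width_bits * per_pin_gbps / 8

hbm1  = peak_bandwidth_gbs(4096, 1.0)  # wide and slow
gddr5 = peak_bandwidth_gbs(384, 7.0)   # narrow and fast

print(hbm1)   # 512.0 GB/s
print(gddr5)  # 336.0 GB/s
```

So HBM1 gets roughly 1.5x the raw bandwidth of a top-end GDDR5 setup while running the pins seven times slower, which is where the power savings come from.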
To say HBM was stupid is only to prove your unwillingness to accept advances in technology for the better.
Right now, there is only the one, so to specify 1/2 is pointless.
Even then, plain "HBM" implies the first generation whenever it appears near "HBM2", which clearly denotes an iteration over the original.
So far as gaming goes, nVidia is only worth it at the high end, the GTX 980 and the like.
Should you need PhysX for the software you work with, you need to pay attention to the comings and goings, especially with the enterprise GPUs.
And then no developers would be used to working with HBM, no software could be updated, and no change could happen to welcome HBM2 and make the most of it upon its arrival.
The point of bringing HBM to market was to allow for these changes.
No card is good enough for 4 years, unless you want to get shit performance at shit settings, then yeah, maybe you can do it.
But if you want the performance and image quality a card like the 980Ti gets you now, why would you want a shit-tier experience in 4 years? Wouldn't you want to keep everything fast and high quality?
>No card is good enough for 4 years
Radeon 7k series has been recycled for almost as long now, and it still holds its own well due to driver improvements.
8800 GTX and Radeon 9700 both had performance leads so great that they still worked fine 4 years later.
Obviously not at full HD with 16x AA on the most recent titles, but they worked great.
That's my point: sure, a 7970 will do fine, but it won't provide anything close to the top-tier experience it delivered when it was brand new. Also, these last few years are atypical in recent history, since we were stuck on 28nm shit for a very long time; that's why the first 28nm cards still hold up as well as they do.
A 980Ti will likely not benefit from that, especially since it's the last breath of a node that should've died long ago, and we're essentially getting two shrinks at once with Pascal/Polaris. I would definitely not expect it to hold up as well in 4 years as a 7970 has, especially not with shit like VR and 4K making a push.
So are the first generation after a die shrink typically great, or does it usually take them a few tries to get it right?
I spent 450 bucks on a GPU in 2015 and upgrade every 2.5 years, will be slightly salty if performance/dollar doubles in a year.
>green team vs red team
I hope you AMDfags enjoy that electric furnace next to you.
I'm sticking with my Intel+Nvidia combo, maybe for very cold winters I will buy an AMD CPU and GPU so I don't have to adjust thermostat.