Threadly reminder Nvidia has this waiting in the wings
https://www.nvidia.com/en-us/data-center/tesla-v100/
Probably just waiting on yields and enough product to launch; we'll get this soon.
Vega will be BTFO
>>61671407
>DOUBLE-PRECISION 7 TeraFLOPS
>SINGLE-PRECISION 14 TeraFLOPS
>CAPACITY 16 GB HBM2
>BANDWIDTH 900 GB/s
That's double the bandwidth of Vega shit
>POWER Max Consumption 250 WATTS
AMD are fucking finished; their nearest card is the Vega Frontier Edition, which pulls 400 watts, needs a 1kW PSU, and only manages 13.x TFLOPS
Nvidia could probably push Volta to 15 TFLOPS easily with overclocking
>>61671432
It's already over 15 on the NVLink version. Cut out the FP64 hardware, and a workstation/desktop card would have a small enough die to hit 17-18 TFLOPS. Good yields too.
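Sanity-checking those numbers: peak FP32 is just 2 FLOPs per FMA × core count × clock. V100's 5120 CUDA cores are the published spec; the clocks below are illustrative guesses, not confirmed boost bins:

```cuda
// Peak FP32 TFLOPS = 2 (FMA = mul + add) * cores * clock.
// 5120 FP32 cores is the published V100 spec; clocks are guesses.
#include <cstdio>

int main() {
    const double cores = 5120.0;
    const double clocks_ghz[] = {1.370, 1.465, 1.660};  // stock-ish, "15 TF", OC guess
    for (double ghz : clocks_ghz) {
        double tflops = 2.0 * cores * ghz / 1000.0;     // 2 FLOPs per FMA per cycle
        printf("%.3f GHz -> %.1f TFLOPS FP32\n", ghz, tflops);
    }
    return 0;
}
```

So 15 TFLOPS needs roughly a 1.47 GHz boost, and 17-18 would need ~1.66-1.76 GHz, which is at least plausible on a smaller die without the FP64 units.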
>>61671407
...It's also $8000 a pop, so there's that.
>>61671501
Sweet, I can't wait
>>61671512
>Small-quantity niche workstation card
It's actually not that bad considering the AMD equivalent performs worse, has terrible thermals, and costs $6,999
>>61671512
Actually more than that: the DGX Station with 4x V100 costs $69,000, and the DGX-1 with 8x V100 costs $149,000.
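Quick math on the system premium, using the $8,000-a-pop figure from above (list prices only, a rough sketch):

```cuda
// System price vs. GPU content, using the figures quoted in-thread.
#include <cstdio>

int main() {
    const double card = 8000.0;                              // quoted V100 price
    printf("DGX Station: $%.0f of GPUs in a $69,000 box\n",  4 * card);
    printf("DGX-1:       $%.0f of GPUs in a $149,000 box\n", 8 * card);
    return 0;
}
```

On these numbers, more than half the sticker price is the rest of the system plus the software stack.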
>>61671407
>NO VRAM
Ay fucking mao
>>61671592
Are you dense?
>>61671407
Should I get the DGX-1 or the DGX Station?
>>61671617
If you're rich enough to afford it you shouldn't have to ask.
>>61671622
I just really dig the gold
>>61671407
>Vega will be BTFO
In terms of waiting.
>>61671636
http://www.tweaktown.com/news/58553/nvidia-ceo-hands-first-tesla-v100s-ai-researchers/index.html
>>61671596
Lol
>>61671407
The biggest current wafer is 300mm,
and this Pascal 2.0 is an 815mm² die.
The yields are so low that building noteworthy stock will take MONTHS
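For scale, a rough dies-per-wafer and yield estimate (the defect density is an assumed, illustrative number, not anything Nvidia or TSMC has published):

```cuda
// Rough dies-per-wafer estimate for an 815 mm^2 die on a 300 mm wafer,
// using the standard approximation and a Poisson yield model.
#include <cstdio>
#include <cmath>

int main() {
    const double PI       = 3.14159265358979;
    const double wafer_d  = 300.0;   // wafer diameter, mm
    const double die_area = 815.0;   // GV100 die area, mm^2
    const double r        = wafer_d / 2.0;

    // Gross dies: wafer area / die area, minus an edge-loss correction.
    double gross = (PI * r * r) / die_area
                 - (PI * wafer_d) / sqrt(2.0 * die_area);

    // Poisson yield: Y = exp(-D * A). D = 0.1 defects/cm^2 is an assumption.
    const double defect_density = 0.1;  // defects per cm^2 (assumed)
    double yield = exp(-defect_density * die_area / 100.0);

    printf("gross dies/wafer: %.0f\n", gross);          // ~63
    printf("yield:            %.0f%%\n", yield * 100);  // ~44%
    printf("good dies/wafer:  %.0f\n", gross * yield);  // ~28
    return 0;
}
```

Around 28 good dies per wafer at that assumed defect density, and even a flawless wafer tops out at ~63 candidates, which is why stock builds slowly.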
>>61671731
make more die zize :DDDDDDD
>>61671432
>That's double the bandwidth of Vega shit
4 stacks of HBM2 vs Vega's 2.
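The arithmetic, if anyone wants to check it (the per-pin rates are back-calculated from the advertised totals, not separately confirmed):

```cuda
// HBM2 bandwidth = stacks * bus width per stack * per-pin data rate / 8.
// Each HBM2 stack has a 1024-bit interface.
#include <cstdio>

int main() {
    const double bus_bits = 1024.0;  // bits per HBM2 stack

    // V100: 4 stacks, ~1.76 Gbps/pin -> ~900 GB/s
    double v100 = 4 * bus_bits * 1.76 / 8.0;
    // Vega 10: 2 stacks, ~1.89 Gbps/pin -> ~484 GB/s
    double vega = 2 * bus_bits * 1.89 / 8.0;

    printf("V100: %.0f GB/s\n", v100);  // ~901
    printf("Vega: %.0f GB/s\n", vega);  // ~484
    return 0;
}
```

So the gap is mostly stack count, not per-stack speed; Vega's pins actually run slightly faster.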
>>61671683
>16GB HBM2
Not the anon you replied to,
but yeah, you're dense.
Also, that Tesla V100 is impressive, but it has no display I/O, so it's useless for content creators; I'd say it's aimed at AI and deep learning, IMO.
Just a reminder that AMD doesn't have AI locked down... yet.
Smart move by Nvidia to focus on the things that make them money, like AI and self-driving cars.
>>61671432
>CAPACITY 16 GB HBM2
Don't forget NVLink allows shared memory access, which you can already do with two PCIe GP100s for 32GB of unified VRAM (rough sketch at the end of this post).
GV100 won't be in any consumer products, maybe a Quadro GV100 card for workstations.
GV102, GV104, GV106, and GV107 will be, though, and it will be the end of AYYMD POOGA HOUSEFIRES
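On the shared-memory point: a minimal sketch of what peer access looks like from CUDA, assuming two peer-capable cards (device IDs, buffer size, and the lack of error checking are all mine):

```cuda
// Minimal peer-to-peer access sketch for two GPUs (NVLink or PCIe).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    if (!can01 || !can10) { printf("no peer access\n"); return 1; }

    // Enable peer access in both directions.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);

    // Allocate a buffer on each card; kernels on either device can now
    // dereference the other card's pointer, and explicit copies work too.
    const size_t bytes = 1ull << 30;  // 1 GiB, arbitrary
    float *buf0, *buf1;
    cudaSetDevice(0); cudaMalloc(&buf0, bytes);
    cudaSetDevice(1); cudaMalloc(&buf1, bytes);

    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);  // GPU0 -> GPU1 over the link

    cudaFree(buf1);
    cudaSetDevice(0); cudaFree(buf0);
    return 0;
}
```

The API is the same either way; whether the copy actually rides NVLink or PCIe depends on how the two cards are wired up.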