File: 1494457422067.jpg (214KB, 1250x660px)
https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/

https://www.nvidia.com/en-us/data-center/tesla-v100/

https://devblogs.nvidia.com/parallelforall/inside-volta/

THANK YOU BASED NVIDIA
>>
>>60325020
You made the same thread already. It died. No, Volta is just a beefier Pascal.
>>
Is it time for Nvidiots to JustWait™ half a year+ for consumer Volta crap?

Hynix seems to agree
https://videocardz.com/68948/skhynix-gddr6-for-high-end-graphics-card-in-early-2018
>>
>>60325178
What does "early" 2018 mean? March, or January?
>>
>>60325233
Going by the fact that they didn't manage to get it out for the holiday season, I'm assuming we're not looking at January.

tl;dr you're gonna be waiting for Volta like you waited for Vega, then you're gonna be waiting another 6 months for Navi unless AMD gets enough cash to rush it earlier
>>
>>60325233

Yes.

That's the only way I think Vega has a chance. A full year of Volta-less competition.
>>
>>60325261
Volta is simply a beefier Pascal.
>>
>>60325020
>GPU
>No connectors to hook it up to an actual screen

Perhaps it's got some specialized use-cases that I'm missing.

Can it mine crypto like zcash and eth?
>>
>>60325178
Lol fuck consumers Nvidia is becoming a machine learning company

Or should I say The machine learning company.

Who created The deep learning libraries that everyone uses?

Whose GPGPU framework is The one to use? Nvidia, with CUDA.

What was used to make almost every major deep learning discovery over the last several years? An Nvidia card using Nvidia CUDA and Nvidia cuDNN.

Praise Nvidia
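
For anyone who hasn't touched it, "using CUDA" boils down to roughly this: write a kernel, launch it over a grid of threads, sync, read the result. A minimal sketch, not taken from any of the linked pages; the file name, array sizes and values are made up. Compile with nvcc saxpy.cu -o saxpy.

// saxpy.cu: minimal CUDA example, y = a*x + y on the GPU
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;                           // 1M floats, arbitrary
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));        // unified memory keeps the demo short
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // 256 threads per block
    cudaDeviceSynchronize();

    printf("y[0] = %f (expect 5.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}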
>>
>>60325275
While that's true, 12nm lets them stick more cores on it, which is a legit way to increase performance; not a very future-looking one, but it's still legit.

Also, nothing is stopping Nvidia from making bigger dies for Volta compared to Pascal, so instead of a 470mm² GP102 we'll get a 550-600mm² GV102, and 330mm² vs 420mm² for GV104.

In fact Nvidia did exactly this for their GTX 600 > 700 refresh; both were Kepler.
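
Back-of-the-envelope for that scaling argument, assuming core density stays roughly the same going from 16nm to 12nm FFN (it's basically a tuned 16nm). The 575mm² figure is just the midpoint of the 550-600mm² guess above, not a confirmed spec.

// die_scale.cu: rough core-count estimate from die area alone (host-only)
// Assumes density is roughly unchanged between 16nm and 12nm FFN; numbers are illustrative.
#include <cstdio>

int main() {
    const double gp102_area  = 471.0;    // mm^2, GP102 (1080 Ti / Titan Xp)
    const double gp102_cores = 3840.0;   // CUDA cores on a full GP102
    const double gv102_area  = 575.0;    // hypothetical GV102, midpoint of 550-600mm^2

    double cores_per_mm2 = gp102_cores / gp102_area;
    printf("~%.0f cores if GV102 lands at %.0fmm^2 with the same density\n",
           cores_per_mm2 * gv102_area, gv102_area);   // prints ~4688
    return 0;
}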
>>
>>60325295
At a lower speed than a Radeon Pro Duo card, but it can.
Just for 150,000 dollars.
>>
>>60325295
N E U R A L N E T W O R K S

Artificial intelligence
Inference
Currently the major bottleneck is bandwidth, so they made NVLink and now they're making it even better.

Next will be memory. Expect 128GB+ VRAM GPUs in the next several years. Though they may stop being GPUs, and a whole new market of TPUs, Tensor Processing Units, will open up.
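
If you want to see why bandwidth is the number everyone chases, the usual sanity check is a plain copy kernel timed with CUDA events. A crude sketch; buffer size and names are arbitrary, and on a V100 the result should land somewhere below the ~900GB/s HBM2 spec.

// bw_test.cu: crude device-memory bandwidth check with a copy kernel
#include <cstdio>
#include <cuda_runtime.h>

__global__ void copy_kernel(const float *in, float *out, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];                      // one read + one write per element
}

int main() {
    const size_t n = 1 << 26;                       // 64M floats = 256MB per buffer
    float *in, *out;
    cudaMalloc(&in,  n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    copy_kernel<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double gigabytes = 2.0 * n * sizeof(float) / 1e9;   // bytes moved: read + write
    printf("effective bandwidth: %.1f GB/s\n", gigabytes / (ms / 1e3));

    cudaFree(in);
    cudaFree(out);
    return 0;
}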
>>
>>60325295
>Perhaps it's got some specialized use-cases that I'm missing.
It's for compute, you tard, not graphics. Similar to the Xeon Phi, except it can't boot an OS. Also, their GRID cards don't have display connectors and are intended for use in virtualized desktop environments.
>>
>>60325304
>>60325275
>While that's true, 12nm lets them stick more cores on it, which is a legit way to increase performance; not a very future-looking one, but it's still legit.
Nvidia promised Volta was an all-new arch.
>Also, nothing is stopping Nvidia from making bigger dies for Volta compared to Pascal,
A bigger die on a new process is ridiculously expensive.
>>
>>60325297
So you mean they are going to commit suicide the moment ML FPGAs become a reality? Wew.
>>60325295
It's an accelerator intended for pure compute tasks, hence the lack of display outputs.
>>
>>60325233
Most likely Q1
>>
>>60325326
>ridiculously expensive
When has that been a problem for Nvidia? There's not much difference between 12nm FFN and 16nm+ anyway.
>>
>>60325319
NVIDIA has no proper interconnect to feed that much memory.
>>60325326
Ah, well, it's probably a preemptive measure against Vega. Also, 7nm FP64 Navi will be anally devastating for NVDA's ML market.
>>
>>60325345
In a few years ya dummy

They're developing NVLink first, ya big dummy
>>
>>60325355
NVLink is a card-to-card interconnect, not die-to-die. NVIDIA needs something like IF.
>>
>>60325381
But Nvidia has no CPU to take advantage of IF
>>
File: 1458841170106.jpg (497KB, 1255x1614px)
>>60325319
>Artificial intelligence
yeah because that always ends well for humanity
>>
>>60325401
I miss her.
>>
>>60325390
They will need something like IF when Navi MCMs multiple dies in the same package.
>>
>>60325425
That would actually explain why V100 is 815mm².
>>
>>60325401
It's inevitable so better get ready lel

>>60325381
Yes, again, I said in a few years; obviously they would need to work on it. The point is there is going to be a major demand for increasing amounts of VRAM to absurd levels. If there is demand, there will be supply. Whatever needs to be done will be done; money will be thrown at it until it works.
>>
>>60325445
>amounts of VRAM to absurd levels.
That's why NVRAM and the Radeon Pro SSG exist; Nvidia hasn't shown any intention of moving to anything similar.
>>
>>60325445
Or you ignore VRAM and copy and paste HBCC.
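
For what it's worth, the closest thing CUDA already has to HBCC-style paging is unified memory: on Pascal and later you can allocate more than physical VRAM and pages migrate on demand. A minimal sketch; the 32GB figure is only there to force oversubscription on a 16GB card, and it assumes enough system RAM to back the allocation.

// oversub.cu: unified memory can oversubscribe VRAM; pages fault onto the GPU as touched,
// which is the same general idea as HBCC treating VRAM as a cache.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void touch(char *buf, size_t n) {
    size_t stride = (size_t)gridDim.x * blockDim.x;
    for (size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        buf[i] = 1;                              // grid-stride loop walks the whole buffer
}

int main() {
    const size_t n = 32ULL << 30;                // 32GB, more than a 16GB V100/Vega holds
    char *buf;
    if (cudaMallocManaged(&buf, n) != cudaSuccess) {
        printf("allocation failed (needs enough host RAM to back it)\n");
        return 1;
    }
    touch<<<1024, 256>>>(buf, n);                // slow because of page migration; the point
    cudaDeviceSynchronize();                     // is only that it runs at all
    cudaFree(buf);
    return 0;
}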
>>
>>60325433
That explains a lot of things, but the sheer potential of MCM'd GPUs is kinda insane.
>>
>>60325233
Probably May. Nvidia always releases GPUs in May.
>>
File: d91.gif (2MB, 331x197px)
>>60325020

>that fucking Deus Ex design