Thread replies: 26
Thread images: 2

File: image3.png (595KB, 1510x1999px)
https://devblogs.nvidia.com/parallelforall/inside-volta/

>The new Volta SM is 50% more energy efficient than the previous generation Pascal design, enabling major boosts in FP32 and FP64 performance in the same power envelope.

THANK YOU BASED NVIDIA
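Quick back-of-envelope on what that claim actually implies. If perf-per-watt goes up 50% in the same power envelope, peak throughput goes up 1.5x. The TDP and Pascal TFLOPS numbers below are illustrative assumptions, not from the blog post:

```python
# Back-of-envelope: "50% more energy efficient in the same power
# envelope" implies ~1.5x throughput. Numbers are illustrative.
pascal_power_w = 300.0     # assumed TDP, same envelope for both parts
pascal_tflops = 10.6       # roughly GP100 FP32 peak, for scale

pascal_eff = pascal_tflops / pascal_power_w   # TFLOPS per watt
volta_eff = pascal_eff * 1.5                  # +50% perf/W
volta_tflops = volta_eff * pascal_power_w     # same power budget

print(f"Pascal: {pascal_eff:.4f} TFLOPS/W -> {pascal_tflops} TFLOPS")
print(f"Volta:  {volta_eff:.4f} TFLOPS/W -> {volta_tflops:.1f} TFLOPS")
```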
>>
oh boy 50% more powerful GPUs

it's not even a question how AMD will recover, because they won't
>>
Who needs FP64 anyways?
>>
>>62446750
it's useful for adult tasks, you don't need it for gaming
>>
>>62446702
it's the same fucking node
this is bogus
>>
Nvidia is known for their compulsive lying and their fanboys having more money than brains.
>>
>>62447006
You mean AMD
>>
>>62446811
it's for tensor cores in memelearning scenario
actual GPU tasks aren't even mentioned
>>
>Nvidia

Uh oh
>>
>>62446702
For AI. Not gaming.
>>
>>62447050
>>62447065
Why don't you lying faggots fuck off?

Nvidia says FP32, not Tensor cores
>>
>>62446702
God damn I hate useless block diagrams
>>
>>62446702
>still gimping the consumer GPUs
Call me when it's all FP16-divisible 'Tensor Cores'
>>
No async compute
>>
>>62447090
could you post a cropped image of the section where they say that?
>>
>>62447090
You only read part of the site. You skipped the main crux: it's a deep learning architecture. The FP32 gains alone don't do anything significant without the major tensor core changes.

This performance gain comes from the whole package, not individual parts.
>>
>>62447020
nope, nvidia was literally forced to pay all 970 owners $30 for the 3.5GB thing.

https://www.polygon.com/2016/7/28/12315238/nvidia-gtx-970-lawsuit-settlement
>>
>>62447225
But it did have 4GB, it's just that the last .5GB was crippled so as not to bottleneck the memory controllers. It doesn't make sense that Nvidia lost that case while AMD won their class action suit, where people accused AMD of lying about Bulldozer cores. Yes, they were 8 real cores, but performance was so gimped by the two cores on each module sharing resources that it performed like a CPU using half the cores.
>>
>>62447329
The last .5GB wasn't usable.
>>
>>62447329
>Hey, I'll sell you a 4-wheeled car!
>Good, I'll buy it. I really need 4 wheels.
>OOPS, the 4th wheel is actually made from wood. But you bought it now. And I didn't lie, it has 4 wheels
>wtf?
>>
>>62447329
NO. IT. DIDN'T.

It had 3.5GB of GDDR5 vRAM and then a 0.5GB chip of DDR3. The lawsuit was successful because this was proven to be true and nvidia thus lied when the 970 boxes said (4GB GDDR5).

AMD got away with their 8-core claim because their processor actually did have 8 integer cores.
>>
>>62446702
>May 10th
>muh big dies
>"data center GPU"

Why do you keep posting this? Consumer Volta isn't going to be out for almost a year.
>>
File: diagram2.png (49KB, 620x491px)
>>62447451
>It had 3.5GB of GDDR5 vRAM and then a 0.5GB chip of DDR3
wut, no it didn't lol, you're an actual retardo.

It had 4GB of GDDR5, but the last 512MiB was stuck behind a second crossbar. That segment wasn't really 'slower' silicon so much as isolated: it wasn't XOR-interleaved with the main memory pool, and it couldn't be accessed while the main memory was doing an IO op.
See pic related.
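A toy model of why that isolation hurts. Since the two partitions can't transfer at the same time, touching all 4GB serializes the accesses, so effective bandwidth is the time-weighted (harmonic) mix, not the sum. The bandwidth figures below are the commonly cited approximations for the 970, used here purely as assumptions:

```python
# Toy model of the GTX 970's segmented memory (numbers approximate):
# a 3.5 GiB partition at ~196 GB/s and a 0.5 GiB partition at ~28 GB/s
# that cannot be accessed concurrently. Spreading accesses uniformly
# over all 4 GiB serializes the two segments, so effective bandwidth
# is total data over total time, not the sum of the two rates.
FAST_GB, FAST_BW = 3.5, 196.0   # GiB, GB/s
SLOW_GB, SLOW_BW = 0.5, 28.0

total_gb = FAST_GB + SLOW_GB
time_s = FAST_GB / FAST_BW + SLOW_GB / SLOW_BW  # time to touch all 4 GiB once
effective_bw = total_gb / time_s

print(f"Effective bandwidth touching all 4 GiB: {effective_bw:.0f} GB/s")
```

Well below the fast partition on its own, which is where the stutter complaints come from.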
>>
>>62446702
That's just because of the tensor cores. Don't expect these bumps on consumer hardware.
>>
>>62446702
Whoa, then Fiji is 80% more "efficient" than Tahiti per CU on the same node.
W H O A.
>>
>>62447366
It was usable, but it could not be accessed at the same time as the 3.5GB segment and the link to it was slower. Using all 4GB could introduce stutter.

This is a 4chan archive - all of the content originated from that site.
This means that RandomArchive shows their content, archived.