[Boards: 3 / a / aco / adv / an / asp / b / bant / biz / c / can / cgl / ck / cm / co / cock / d / diy / e / fa / fap / fit / fitlit / g / gd / gif / h / hc / his / hm / hr / i / ic / int / jp / k / lgbt / lit / m / mlp / mlpol / mo / mtv / mu / n / news / o / out / outsoc / p / po / pol / qa / qst / r / r9k / s / s4s / sci / soc / sp / spa / t / tg / toy / trash / trv / tv / u / v / vg / vint / vip / vp / vr / w / wg / wsg / wsr / x / y ] [Search | Free Show | Home]


Thread replies: 93
Thread images: 5

File: Nvidia Volta GPU release date.jpg (149KB, 590x332px)
So how much of a jump would Volta be compared to Pascal and Maxwell?
>>
10-15% from pascal
>>
>>61602335
10-20% perf with 10-20% bigger dies.
>>
File: 8m7oeb18g0cz.jpg (49KB, 750x729px)
>>61602353
So Pascal 1080Ti is about 11.5 Teraflops....Volta will be 14 Teraflops? That's insane.

Also nVidia will switch to HBM2 starting with Volta so that will definitely make things faster I think.
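For reference, peak FP32 throughput is just 2 FLOPs (one fused multiply-add) per shader per clock. A quick sketch using the published 1080 Ti figures (3584 CUDA cores at roughly 1582 MHz boost) lands close to the 11.5 TFLOPS number above; the clock is the advertised boost, so this is a theoretical peak, not measured performance:

```python
def peak_fp32_tflops(cuda_cores: int, boost_mhz: float) -> float:
    """Theoretical peak: one fused multiply-add (2 FLOPs) per core per clock."""
    return 2 * cuda_cores * boost_mhz / 1e6

# GTX 1080 Ti: 3584 CUDA cores at ~1582 MHz advertised boost
print(round(peak_fp32_tflops(3584, 1582), 1))  # ~11.3 TFLOPS
```

The same formula with V100's 5120 cores at its quoted ~1455 MHz boost gives roughly 14.9 TFLOPS, which is where the "15 TFLOPS Volta" figure comes from.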
>>
>>61602335
Went from 12.5 TFLOPS to 15 TFLOPS, on a freaking 800mm2 chip. There's a reason porkchops is pressing so hard on Tesla V100 marketing: they want to recoup at least some R&D, because the product is mediocre.
That's it; downscale it to a chip 3x smaller.

Both companies are preparing for MCM, so this gen is going to be lackluster.
Or Nvidia simply ran out of ideas and needs time for a new uarch, like AMD did; they usually have 5-year cycles (GCN->Vega, Fermi->Maxwell).
>>
>>61602400
>So Pascal 1080Ti is about 11.5 Teraflops....Volta will be 14 Teraflops? That's insane.
No, since Paxwell already hits scaling issues with this many ALUs.
GP102 is over 50% bigger than GP104 yet only 30% faster.
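That scaling claim can be put in numbers. A rough sketch using the published die sizes (GP102 ~471 mm², GP104 ~314 mm²) and the ~30% performance gap asserted above; the 1.30 perf ratio is the poster's claim, not a benchmark result:

```python
gp104_mm2, gp102_mm2 = 314, 471      # published Pascal die sizes
area_ratio = gp102_mm2 / gp104_mm2   # ~1.5x bigger die
perf_ratio = 1.30                    # ~30% faster, as claimed above
scaling_efficiency = perf_ratio / area_ratio
print(f"{area_ratio:.2f}x area for {perf_ratio:.2f}x perf "
      f"-> {scaling_efficiency:.0%} scaling efficiency")
```

In other words, by this back-of-envelope math only about 87% of the extra silicon translates into extra performance, which is the sublinear scaling being complained about.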
>Also nVidia will switch to HBM2 starting with Volta so that will definitely make things faster I think.
Hell no, JHH will never sacrifice his shekels.
>>61602410
V100 has a fuckton of fixed-function stuff that's useless for gaymen.
>>
>>61602400
>Also nVidia will switch to HBM2
they would have to pay a lot of royalties to AMD
they only do HBM in small volumes because of that
HBM is going mass market though, because it's cheaper in the long run. Heck, even SSDs are going 64-layer.
>>
>>61602420
https://www.nvidia.com/en-us/data-center/tesla-v100/

They're literally using HBM2 on the V100.

They will likely move to HBM2 as it's becoming a little more mainstream now.
>>
>>61602451
he means on consumer cards that sell by the millions, not a few thousand
>>
>>61602448
Toshiba recently announced 96-layer BiCS 3D NAND (which, hilariously enough, uses TSVs).
>>61602451
V100 is a very low volume product.
Porkshoulders-kun will never use HBM2 in mainstream shit unless DDR PHYs are seriously bloating the die.
>>
>>61602353
no dude...
it's a fucking insane jump
>>
>>61602469
anon, please translate the marketing speak for yourself
leatherjacket marketed tensor cores like they're something new; they aren't
google has tensor chips that are 20x faster than Volta and draw 5x less power
they had to do the whole tensor bullshit because the actual GPU performance jump is small. cheap trick
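For context on what is being marketed: the tensor core primitive itself is nothing exotic. Each one performs a 4x4 fused matrix multiply-accumulate, D = A×B + C, per clock, with FP16 inputs and FP32 accumulation. A toy emulation in plain Python floats, just to show the arithmetic shape, not NVIDIA's implementation:

```python
def tensor_core_op(A, B, C):
    """Emulate one tensor core step: D = A*B + C on 4x4 matrices.

    On Volta, A and B are FP16 and the accumulate is FP32; here plain
    Python floats stand in for both precisions."""
    n = 4
    return [[sum(A[i][k] * B[k][j] for k in range(n)) + C[i][j]
             for j in range(n)] for i in range(n)]

I = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
Z = [[0.0] * 4 for _ in range(4)]
B = [[float(i * 4 + j) for j in range(4)] for i in range(4)]
print(tensor_core_op(I, B, Z) == B)  # identity times B plus zero gives B back
```

The hardware win is doing all 64 multiply-adds of this loop in a single clock, not the operation itself, which is why comparisons to dedicated ASICs like Google's TPU come up.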
>>
>>61602497
V100 would be an interesting all-in-one ML offering if not for the fucking die size, and thus a price, piercing the heavens.
>>
>>61602513
I think there's a big reason why nvidia makes only $300M from compute cards and almost a billion from gaming cards.
ASICs are better at deep learning now, simple as that.
>>
>>61602542
GPUs are good at inference though.
AMD needs to make a lower-clocked SFF Vega 10 Instinct card for inference.
>>
>>61602335
I don't know but AMD is totally gonna beat it, just you wait
>>
>>61602555
It's a race now.
The first company to achieve MCM GPUs will win the market for several years.
Volta (as in V100) is a big oversized meme.
>>
>>61602335
i feel like GPUs haven't really progressed since 2013 with the 290, and since 2016 with the 1080

We seem stuck in a bit of a loop

When Volta and Navi drop in 2019, that will be the time to get a new GPU
>>
>>61602659
Volta is a fatter Paxwell and we don't know anything about Navi.
>>
>>61602451
They used HBM2 for the Pascal Teslas too, so why isn't consumer Pascal using it?

>>61602469
Pascal wasn't even intended to exist, it has a lot of the hardware improvements Volta would have had vs Maxwell. Pascal is essentially the midway point.
>>
>>61602555
just wait(tm)
>>
>>61602448
>They would have to pay a lot of royalties to AMD
HBM2 is a royalty-free JEDEC standard.
>>
>>61602335
Volta is the next architecture, something like the 8800 series in performance/watt.

Poor AMD
>>
>>61603671
NVIDIA would need a brand new architecture for that, Volta is still based on G80
>>
>>61603671
Just like Pascal was the next architecture...
...right?
No.
>>61603681
That's not going to happen any time soon.
>>
>>61603686
>That's not going to happen any time soon.
It doesn't need to, G80 was a fantastic architecture.
>>
File: 1430725832538.gif (253KB, 447x415px)
>>61602335
Does it really matter?

Even if it were a complete stalemate compared to Pascal, they would still be miles ahead of AMD in both performance and performance/watt

How far AMD is behind in the high-end GPU segment right now truly marks a low point in high-end GPU competition.
>>
>>61603696
The key word is "was".
Core was also a fantastic evolutionary uarch.
Too bad Zen murders it violently.
>>
>>61602335
How well would it compare with a 3dfx part from a parallel universe where they're still a thing?
>>
>>61603722
How does that apply to G80?

You're dragging in completely unrelated stuff
>>
>>61603722
>The key word is "was".
Not really; unlike Intel, they aren't showing any signs of struggling to get more perf out of their architecture.

Intel hasn't made any significant gains since Sandy Bridge, while Kepler -> Maxwell and then Maxwell -> Pascal were significant on their own.
>>
>>61602335
I would guess 10%, but with a new non-monolithic design making yields 100% better.
>>
>>61603745
Getting moar performance with GPUs is piss easy: just add moar ALUs.
Pascal is exactly that.
Volta is exactly that (now with moar fixed function units).
>>
>>61603770
Except Pascal is more power efficient too. The 980 Ti could pull over 1 kW under LN2; you can't even do that with Pascal.
>>
>>61603784
Wow, a two node jump!
Un-fucking-believable.
>>
>>61603770
Kinda.
Shaders an shiet are starting to need more conditional-branching performance, which means smarter schedulers, more fine-grained core clusters, etc.
>>
>>61603794
Too bad AMD can't even achieve that much.
>>
>>61603809
Ha ha.
(You) tried.
>>
>>61603770
>Getting moar performance with GPUs is piss easy
Yet AMD is unable to do it
>>
>>61603811
Oh look it's this clueless shill again.

Are you going to go ahead and make up more ((((facts))))?
>>
>>61603797
Yes, AMD learned it the hard way with Fiji.
>>61603816
>>61603819
(You).
>>
>>61603809
You know that both AMD and nvidia don't actually control the node jumps, right?
Performance, yes; you can discuss and argue that AMD did squat with the node jumps.

But both are sitting on their asses, waiting for TSMC or GF to jump.
>>
>>61603824
>AMD learned it the hard way with Fiji.
Learned what, that they were incapable of making a high end GPU?

>that AMD did squat with the node jumps.
Oh, they didn't just do that; they actually went backwards: worse performance at the same clock speeds.
>>
>>61602335
Clearly it'll be capable of 8k 240 fps in every game
>>
>>61603837
(You) are trying a little bit too hard.
You should stop drinking JHH's semen.
>>
>>61603856
You're in a thread about Volta, go fuck off back to the Vega general and concentrate your autism and stupidity there you clueless child.
>>
>>61603860
>retarded shitposter calling someone else clueless
?
>>
>>61603870
(You)
>>
>>61603878
(You)
>>
>tfw even AMD want to pretend RX Vega doesn't exist because it's garbage
>shills still convince themselves it's not garbage or just say to wait for Navi
>>
>>61603837
If you went a bit more high-level in this discussion, you'd probably find some actually nice arguments against AMD and their practices when designing GPUs, but instead you just went edgy 12-year-old.

What he's pointing out with Fiji is that AMD was just chasing muh GFLOPS blindly, and that ended up not being the best of ideas.
>>
>>61603899
Except that's all they could do; Hawaii was a brute-force architecture designed to combat their lack of software optimization, and the push for lower-level APIs shifts the responsibility of harnessing the GPU's compute power to the game developers.
>>
>>61603915
And Vega is exactly the opposite of that.
And the only way to get moar perf is moar ALUs even for nVidia now.
>>
>>61603927
>And Vega is exactly the opposite of that.
Rumored to be; there's nothing to even suggest half the features they advertised exist. People keep claiming they just need to be 'enabled' in the drivers, but nobody has confirmed the hardware is even there to enable. The only new hardware feature AMD even demoed was the HBCC.
>>
>>61603915
Well, they need to hire better software engineers.
Also, Vulkan doesn't shift the responsibility to the game developers, not entirely.
It shifts it to the open-source C++ compiler that Vulkan uses to compile the shaders.
>>
>>61603940
Considering Vega FE rivals the P6000 where it works, I think it comes down to working drivers.
>>
>>61603942
Vulkan is incredibly convoluted.

>>61603952
>where it works
None of those features are involved in that, and no, it doesn't rival the P6000; it loses in just about every benchmark out there.

>inb4 posting the same graph you always post
Just stop posting altogether please you don't know what you're talking about.
>>
>>61603973
>None of those features are involved in that
WHAT?
>>
>>61603952
I bet either Vega or the next architecture will solve shit with neural networks instead of real silicon work.

Worked wonders for Ryzen.

>>61603973
Yes, I'm painfully aware of it.
But I'm just adding more victims to the Vulkan hell.
>>
>>61603981
>people complain about all the hardware features not being enabled in drivers, claiming that when they are enabled gaming performance will increase significantly
>same people cite the professional benchmark performance of FE
>"What do you mean they're not involved with these workloads??"
>>
>>61604003
Now that's some next-level shitposting.
>>
>>61604015
Are you actually fucking retarded? Or are you trying to imply these hardware features are only enabled in these professional workloads?
>>
>>61604034
Think about it.
>>
https://devblogs.nvidia.com/parallelforall/inside-volta/

>The new Volta SM is 50% more energy efficient than the previous generation Pascal design, enabling major boosts in FP32 and FP64 performance in the same power envelope.
>15 TFLOP/s of single precision (FP32) performance
>7.5 TFLOP/s of double precision floating-point (FP64) performance

THANK YOU BASED NVIDIA

AYYMD HOUSEFIRES SIMPLY CAN'T COMPETE WITH REAL PERFORMANCE & POWER EFFICIENCY
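The quoted numbers hang together arithmetically: V100 has 80 SMs with 64 FP32 lanes each (5120 CUDA cores) and half as many FP64 lanes. A quick check, assuming the ~1455 MHz boost clock quoted at the announcement, reproduces the 15 / 7.5 TFLOPS split:

```python
sms, fp32_per_sm, boost_mhz = 80, 64, 1455  # V100 figures from NVIDIA's blog
fp32_lanes = sms * fp32_per_sm              # 5120 CUDA cores total
fp32_tflops = 2 * fp32_lanes * boost_mhz / 1e6  # 2 FLOPs per lane per clock
fp64_tflops = fp32_tflops / 2               # FP64 units run at half rate
print(round(fp32_tflops, 1), round(fp64_tflops, 1))  # ~14.9, ~7.4
```

So the "FP64 is exactly half of FP32" relationship falls straight out of the SM layout, not out of any per-metric tuning.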
>>
>>61604058
Thanks for confirming
>>
>>61604083
Well, think about it.
>>
>>61604081
Unlike Intel, NVIDIA aren't just being complacent with their superior hardware and dominance in the major markets.

If AMD don't step their shit up in a major way with Navi (seeing as Vega is already a massive failure) then they may never recover.
>>
>>61604098
Unlike nVidia, Intel has actual revenue and produces something else besides toys.
>>
>>61604108
Funny how Intel are incredibly threatened by NVIDIA then.
>>
>>61604110
Where? In HPC where there's barely ANY money?
>>
>>61604114
>barely any money
>NVIDIA's revenue grows 48% year over year purely from the data center
>>
>>61604128
Wow, it was $150 million and now it's $300 million! Intel cannot compete!
>>
>>61604144
>In the recent quarter ended April 30, NVIDIA's revenue increased by 48% reaching $1.94 billion compared to previous year. A big revenue bump came from its Data centre business which recorded $409 million revenue in the first quarter of this fiscal, up 186% year-on-year.

It's time you went back to /v/ you utterly clueless child.
>>
25% min

30% would be my guess

45%+ is god tier architecture
>>
>>61604150
Wow, Intel literally cannot compete with their ~$4 billion quarterly data center revenue.
Literally finished and bankrupt.
Long live GPGPU.
>>
>>61604163
<<</v/
>>
>>61604170
Yes, this year will totally be the year of GPGPU compute!
Fucking hell, GPGPU is literally the Linux desktop of the hardware world.
>>
>>61604174
Great shitposting there /v/
>>
>>61604176
Jensen fucking stop.
Everyone knows GPGPU will always be a niche meme in datacenter.
>>
>>61604183
>niche
Ahahahahahaha oh wow /v/ you're adorable
>>
They could literally release the same lineup, call it Pascal Pro, and drop prices by $70 on each GPU, and people would still buy it.
Then they could do it again and drop another $70 while naming it Voltaris or something.
By 2020, when AMD releases Navi and finally manages to reach 980 Ti performance, which would by then cost only $500, they could release Volta.

Nvidia could do all of that, just like AMD did with the R200, R300 and RX 400, and they wouldn't lose 1% of market share; that's how far ahead Nvidia is.
>>
>>61604190
Oh yes, call me when GoyPU can work as a fileserver.
>>
>>61604203
You're not even trying now /v/ come on
>>
>>61604208
Jensen stop.
This is getting hilarious.
>>
>>61604216
What's wrong /v/ have you run out of shitposting material? Consider checking Reddit again since you clearly do that on a daily basis.
>>
File: r300.jpg (2MB, 1788x1785px)
>>61604202
But the R200/R250 was shit.
ATi made the smart choice of canning it and developing the R300.
>>
>>61604224
This year will totally be the year of GPGPU compute!
I-i swear.
Buy our GPUs.
>>
>>61604239
Oh maybe you actually were trying but you were just too stupid to come up with anything decent.
>>
File: 3feces.png (378KB, 440x668px)
>>61604253
>E-everyone is totally going to use GPGPU compute, you just wait CPUdrone!
>>
>muh gaymes
>>
>>61604263
(You)
>>
>>61604202
1080 Ti*
>>
>>61602335
*rumors


This is a 4chan archive - all of the content originated from that site.