
1060 vs 480 Benchmarks


Thread replies: 341
Thread images: 84

File: 1060vs480.png (667KB, 3882x2171px)
Why did the 480 fail so badly? Nvidia just came out with something better in every way.
>>
>Nvidia knew about the 480 performance
>fed unrealistic expectations to the public
>/r/amd and /r/ayymd eats it right up
>regurgitated 1000x
>release a product that suspiciously does all and more of the initial 480 hype
>>
File: doom.png (108KB, 1596x635px)
>>55708184
>>
File: cry3.png (110KB, 1591x639px)
>>55708297
>>
File: gta5.png (110KB, 1591x639px)
>>55708316
>>
>>55708297
Gee, great. That one game and the small handful of games that will use Vulkan I can look forward to, thanks.
>>
>>55708184
What do you expect from a company that is $1bn in debt and has 1/100th of the budget its competitors have? I'm still surprised amd is alive. They've been failing hard in the cpu division with their utter shite fx processors, and their gpu division has been so vastly outmarketed that even when they're competitive, they'll still sell many millions fewer cards than nvidia.
>>
>even AdoredTV admitted the rx 480 and gtx 1060 are about equal in dx12 and the 1060 wins in dx11
Literally no reason to buy an rx480 if you have a backlog of any size with dx11 games.
>>
File: Screenshot_2016-07-21-10-04-42.png (498KB, 1920x1080px)
>>55708328
You only get that result if you've got the absolute top of the range i7.
>>
File: division.png (79KB, 580x949px)
>>55708327
>>
>>55708360
>>
>>55708254
This. Nvidiots think we don't know about their false flagging
>>
>>55708349
>driver overhead
FUCK OFF
>>
>>55708254
I doubt it, AMDtards will create false hype all on their own
>>
>>55708445
Sadly AMD driver overhead is real. Just another thing in a long list of reasons not to buy an AMD card.
>>
>>55708349
>X4 955 & I5 750
>literally half a decade old processors
I can't help but think this graph has been built with the single purpose of making one card look better than the other.
>>
File: 5.png (1MB, 1280x720px)
>>55708445
>denial
>>
>>55708994
>he really thinks people who will buy a $200/250 card will have a modern i7 or even i5

these are x60 and x80 cards. literal peasant tier cards.
>>
>>55708184
>Why did the 480 fail so badly?

Why do you keep making the same thread and arguing this way from a false premise?
The RX 480 was a huge success for AMD. They sold a ton of cards, a ton of dies to AIB partners, and they sold a huge volume of chips to Apple.

AMD made a cheap die with huge profit margins. It performs well enough for its segment, and manages to hold its own in DX12/Vulkan despite being 15%~ slower than the 1060 in DX11 titles.
Nvidia made a more expensive die, probably with lower profit margins, and they can afford to do that because they have immense market share. Their bread and butter will be GTX 1070 and 1080 sales anyway.
>>
>>55709222
The 480 is pretty awful compared to the 1060. They're the same price, but the 1060 is way faster.
>>
>>55709250
this. i was looking through a bunch of benchmarks and some of them make the 1060 look like it's a whole tier ahead. there would be 3 or 4 cards in between the 1060 and 480 when in reality they should be right next to each other.
>>
File: perfrel_1920_1080.png (42KB, 500x1170px)
>>55709250
>>55709341
By Nvidia's own marketing PR, the GTX 1060 is on average 15% faster than the reference RX 480.
That performance edge just isn't there in a number of DX12 titles, not to mention DOOM when using Vulkan. By TPU's review the RX 480 is only 10% behind the GTX 1060 at 1920x1080, and their lineup of games includes a bunch of GameWorks titles.

You're going to have to try a lot harder if you want to shitpost
>>
>>55709222
>Their bread and butter will be GTX 1070 and 1080 sales anyway.
Not if they stay at that price. Essentially what Nvidia did was introduce two newer high end cards and give us what amounts to a GTX 980 for a measly 250 bucks. If you're on 1080p (which most people are), a 1070 is overkill and way too expensive. The 1060 is perfect, both performance and price wise. The RX 480 is not a bad card, but it got official meme status, ruining its reputation forever, all due to overly retarded AMD fans and AMD itself. It also is quite a bit slower in DX11, which will stay relevant for quite some time, no matter how much AMD fans wish otherwise.

The only real argument one could use is that the RX 480 seems a bit more "future proof" (it's actually a meme). Now hear me out on this one: you are much better off just switching your mid range card every two years, instead of sitting on a high end card for 3-4. Unless you are extremely poor (and I don't even mean to insult anyone) and can't afford to dish out ~250 bucks every two years, the 1060 makes more sense than the 480.
>>
>>55709374
11%*
>>
>>55709374
are you thick in the head? how am i shitposting? the same website you just posted a benchmark from proves what i just said about how the 1060 looks like it's a whole tier ahead with 3 or 4 cards in between them in some benchmarks.

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/8.html
>5 place gap

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/9.html
>5 place gap

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/10.html
>3 place gap

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/11.html
>5 place gap

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/14.html
>3 place gap

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/15.html
>2 place gap

if anything i was being generous. now neck yourself. go and be mad somewhere else.
>>
>>55709451
>counting cards as "places" instead of looking at relative performance
This is either top tier comical bait, or you're legitimately a non neurotypical autism spectrum case.
>>
>>55709471
>relative performance
How exactly is that calculated? Do they take a bunch of benchmarks and calculate a combined score from that? Or is it simply what the graphics card is capable of in perfect conditions at 100% hardware power?
>>
>>55709511
They average out the figures in all of their game benches. Their methodology is explained:

>This page shows a combined performance summary of all the tests on previous pages, broken down by resolution.
>Each chart shows the tested card as 100% and every other card's performance as relative to it.
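The averaging TPU describes can be sketched in a few lines of Python. The game names and fps figures here are hypothetical placeholders, not TPU's actual data:

```python
# Sketch of TPU-style "relative performance" averaging.
# The fps figures below are hypothetical, not real benchmark data.

def relative_performance(results, reference_card):
    """For each card, express every game's fps relative to the
    reference card's fps in that game, then average the ratios."""
    averages = {}
    for card, fps_by_game in results.items():
        ratios = [
            fps / results[reference_card][game]
            for game, fps in fps_by_game.items()
        ]
        averages[card] = sum(ratios) / len(ratios)
    return averages

results = {
    "RX 480":   {"GameA": 60.0, "GameB": 45.0, "GameC": 90.0},
    "GTX 1060": {"GameA": 66.0, "GameB": 50.0, "GameC": 99.0},
}

rel = relative_performance(results, reference_card="RX 480")
print(rel["RX 480"])    # 1.0 by construction
print(rel["GTX 1060"])  # > 1.0 when it leads on average
```

Note the result is an average of per-game ratios, so "places" on the chart fall out of the sorted averages; the gap in positions between two cards says nothing by itself.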
>>
>>55709471
>relative performance

i can't tell if you're legit retarded or just thick in the head. the relative performance chart agrees with me by putting the 1060 a massive 11% ahead which is almost another tier. there is only a ~15% difference between a 980 and 970 at release.
>>
>>55709540
>lacking this much self awareness
The relative performance charts don't agree with any point you tried to make, autist. You're counting the number of cards between the RX 480 and GTX 1060 as if it means anything. It doesn't. The "places" that you're getting so mad about is something that only sticks out in your genetically inferior malformed autism monkey brain. "Places" on a chart is not a real metric.

Exactly as I previously stated:
Nvidia's marketing PR slides indicated that the GTX 1060 was 15% faster than the reference RX 480.
TPU's game average shows the RX 480 being 10% behind, and that is including a ton of Nvidia GameWorks titles.
The GTX 1060 doesn't have a huge performance lead in DX12/Vulkan.
A 10%~ average lead isn't that big of a deal no matter how you try to twist it

The RX 480 isn't bad by any means. They're both great value oriented mid range GPUs, Nvidia's GTX 1060 is just on average 10%~ faster in DX11 titles.
Stop getting mad over things, autist. Not that I really care. The mentally ill barely qualify as human.
>>
>>55708184
>no async
>no DX12
>no Vulkan
1060 will age like milk
>>
$200 card vs $300 card

They're not even in the same bracket.
>>
I'm an AMD fanboy, but if I find a 1060 for the same price as the 480 I'm going NVIDIA, sorry poo in the loo
>>
>>55709620
so you really are retarded then. at least we got that cleared up.

i wasn't talking about metrics if you actually read my post. i said "it makes it look" like they are in different tiers. anyone who isn't knowledgeable about any of this will look at these lists and see some huge gap between the two cards and probably won't know they're actually competing against each other until they're told.

>A 10%~ average lead isn't that big of a deal no matter how you try to twist it

no. that's pretty fucking massive, and like i said is almost a tier above. the 980 was 12% faster than the 970, the 390x was 7% faster than a 390 and a 1060 is 11% faster than a 480. (all based on tpu's relative performance charts)
>>
>>55709707
Uhh 1060 is $249 and 480 is $239
>>
File: value.png (43KB, 500x1210px)
>>55709391
Seriously the 1060 is going to sell extremely well. At $250 it's a ridiculously good deal, in fact it is the #1 best value GPU you can buy right now.
>>
File: 1467237690375.jpg (25KB, 540x540px)
I know this is a bait thread, but the cheapest 480 ($200) is 20% cheaper than the cheapest 1060 ($250), but only performs 12% worse.

That's okay by me.
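The anon's arithmetic checks out; one way to sanity-check it (the prices and the 12% figure come from the post, the perf/$ comparison is my own derivation):

```python
# Checking the price/performance claim: a $200 RX 480 vs a $250
# GTX 1060, with the 480 performing 12% worse.
price_480, price_1060 = 200.0, 250.0
perf_1060 = 1.00   # normalize the 1060 to 100%
perf_480 = 0.88    # "performs 12% worse"

discount = 1 - price_480 / price_1060   # 0.20 -> 20% cheaper
ppd_480 = perf_480 / price_480          # performance per dollar
ppd_1060 = perf_1060 / price_1060

print(f"480 is {discount:.0%} cheaper")  # 20% cheaper
print(ppd_480 > ppd_1060)                # True: more perf per dollar
```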
>>
>>55709802
That's the 4GB 480 which performs even worse
>>
>>55709818
Sapphire nitro 4gb was performing almost the same as an FE 1060
>>
>>55709620
I'm going to screenshot this in case the 490 is 10% faster than a 1070 so I can say the king of amdrones said a gap of that amount doesn't matter.
>>
>>55709620
>The RX 480 isn't bad by any means.

It's really horrible when you compare it to the 1060. There is literally no reason to buy a 480 when the 1060 exists.
>>
>>55709707
outside burgerland, the 480 reference is more expensive than a custom 1060
>>
480 or 1060 to pair with my i5 4670k? I'm hearing the newer amd cards only like newer cpus
>>
File: mongrel.jpg (55KB, 446x583px)
>>55709620
>10%~ average lead isn't that big of a deal
>>
>>55709927
1060 obviously. aib 1060s are the same price as reference 480s and fuck reference cards.
>>
I want that Sapphire RX480 so bad, gonna OC that shit to 1400 and have better performance than a 300$ AIB 1060 in 90% of games (it beats a 2100MHz OCed AIB 1060 in almost every bench)...

that's nuts. Sign me up, only 280$

Sad thing is the 1060 has absolutely no OC potential because clock speed doesn't even remotely help improve performance.... 100mhz is like 0.5% performance boost after 1500Mhz
>>
>>55709887
>me
>fanboy
>plainly stating the GTX 1060 is a great value oriented GPU with an edge in DX11

Sure, but a performance margin of that size really doesn't matter.
10% turns 30fps into 33fps, 60fps into 66fps, 120fps into 132fps. It's not enough of a performance difference to afford you any more AA, post processing, or higher shadow quality without losing frame rate. Realistically it doesn't matter. Things like frame time variation matter a lot more, and if you wanted to argue that Nvidia was better there on average you might have something.
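That 10% arithmetic, spelled out as a quick sanity check:

```python
# A 10% lead scales every baseline fps figure by 1.10.
lead = 0.10
for base_fps in (30, 60, 120):
    boosted = round(base_fps * (1 + lead), 1)
    print(base_fps, "->", boosted)  # 33.0, 66.0, 132.0
```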

>>55709944
It really isn't.
>>
File: perf_oc.png (31KB, 500x610px)
>>55709954
>10% oc to base
>13% performance increase in bf3
>>
>>55709927
1060 is way better
>>
>>55708184
is that bf4 benchmark only with dx11? i wonder how well it does with mantle.
>>
File: 1370535337607.jpg (29KB, 570x533px)
>>55709954
>300$ AIB 1060
>Sad thing is the 1060 has absolutely no OC potential because clock speed doesn't even remotely help improve performance.... 100mhz is like 0.5% performance boost after 1500Mhz

AMDrones are so pathetic these days.
>>
>>55710100
yes we need more reviewers doing benchmark comparisons of NVIDIA and AMD using Mantle in BF4!

are you really this retarded?
>>
>>55708184
>Why did the 480 fail so badly?

Because Nvidia has 90% of tech review sites in their pockets.
>>
>>55710240
>it's all just a big conspiracy

Gotta love this one, add it to the list of AMDrone excuses.
>>
>>55710476
You know, out of all the things they yell about, I am surprised no one has brought up that NVIDIA killed 3/4-way SLI for games and that the enthusiast key only works for benchmarks and other software.
>>
>>55708994
Well, if you don't have a top of the line i7, one of those is better. The only person with a high end i7 I know has a Gtx 1080.
>>
>>55711029
AMDfags can't grasp that people going for a budget build will either have an old ass cpu or will buy an i3. Hilariously even the shitty amd cpus get more fps with nvidia cards.
>>
File: 1460731446403.png (2MB, 1141x1277px)
>>55708184
the reason why nvidia had stronger dx11 drivers was because they were multi-threaded, which helped lower driver overhead since it could be spread out across multiple threads.

but another reason was their use of a software scheduler.

one of the reasons why fermi ran so hot was because it utilized a hardware scheduler, just like all amd gcn based cards do. hardware scheduling draws a lot of power, and more power means more heat. why did they use a hardware scheduler? a hardware scheduler will always be faster than a software one. less overhead, and the gpu can do it much faster than software.

the problem with a hardware scheduler? once built, you cannot modify it. you have to build a whole new card if you want to update the hardware scheduler.

but nvidia, wanting to move on from their housefire fermi, decided to remove hardware based scheduling with kepler and beyond. this is the main reason why kepler used far less power and ran cooler than fermi. nvidia realized that with dx11 you didn't need a complex hardware scheduler. most of the scheduler went underutilized and was overkill. with dx11's multi-threading capabilities, and by making their drivers multi-threaded, they alleviated a lot of the driver overhead one would endure with a software scheduler. in turn this gave them the opportunity to have more control over scheduling, able to fine tune the drivers for individual games. well, they had to. this meant a lot of work for nvidia's driver team, but it helped them wring every ounce of juice they could get from their cards while lowering power and heat.

maxwell continued this by removing more hardware based scheduling.

the problem? dx12 and vulkan need a hardware scheduler to be taken full advantage of. you need it for the complex computations of async and to manage compute + graphics operations at the same time. they're complex, and you need the performance.
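The scheduling tradeoff described above can be illustrated with a toy model (a deliberate simplification for intuition, not how any real GPU pipeline works): a static in-order scheduler stalls whenever the next instruction's inputs aren't ready yet, while a hardware-style dependency-checking scheduler can issue any instruction whose inputs are ready.

```python
# Toy model: static (in-order) vs dependency-aware (out-of-order)
# instruction scheduling. Instructions, latencies, and the one-issue-
# per-cycle machine are all made up for illustration.

# (name, result_latency_cycles, dependencies)
PROGRAM = [
    ("load_a", 4, []),          # long-latency memory load
    ("add_1",  1, ["load_a"]),  # needs load_a's result
    ("load_b", 4, []),          # independent load
    ("add_2",  1, ["load_b"]),
]

def schedule(program, in_order):
    """Return total cycles to run the program. One instruction can
    issue per cycle; an instruction may issue only once every
    dependency's result is available."""
    ready_at = {}            # name -> cycle its result becomes ready
    pending = list(program)
    cycle = 0
    while pending:
        # instructions whose dependencies are satisfied this cycle
        issuable = [i for i in pending
                    if all(ready_at.get(d, float("inf")) <= cycle
                           for d in i[2])]
        if in_order:
            # static scheduling: only the oldest instruction may issue
            issuable = issuable[:1] if issuable and issuable[0] is pending[0] else []
        if issuable:
            name, lat, _ = issuable[0]
            ready_at[name] = cycle + lat
            pending.remove(issuable[0])
        cycle += 1           # one cycle passes (issue or stall)
    return cycle

print(schedule(PROGRAM, in_order=True))   # 10: stalls behind load_a
print(schedule(PROGRAM, in_order=False))  # 6: overlaps the two loads
```

The out-of-order schedule hides the load latencies by issuing the independent load early; the in-order one idles while waiting, which is exactly the "faster but power-hungry dependency checking" tradeoff the anandtech quote below describes.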
>>
>>55711086
going to be useful in graphics and games.

here's a nice article from keplar's launch done by anandtech:
>http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/3

>GF114, owing to its heritage as a compute GPU, had a rather complex scheduler. Fermi GPUs not only did basic scheduling in hardware such as register scoreboarding (keeping track of warps waiting on memory accesses and other long latency operations) and choosing the next warp from the pool to execute, but Fermi was also responsible for scheduling instructions within the warps themselves. While hardware scheduling of this nature is not difficult, it is relatively expensive on both a power and area efficiency basis as it requires implementing a complex hardware block to do dependency checking and prevent other types of data hazards. And since GK104 was to have 32 of these complex hardware schedulers, the scheduling system was reevaluated based on area and power efficiency, and eventually stripped down.

>The end result is an interesting one, if only because by conventional standards it’s going in reverse. With GK104 NVIDIA is going back to static scheduling. Traditionally, processors have started with static scheduling and then moved to hardware scheduling as both software and hardware complexity has increased. Hardware instruction scheduling allows the processor to schedule instructions in the most efficient manner in real time as conditions permit, as opposed to strictly following the order of the code itself regardless of the code’s efficiency. This in turn improves the performance of the processor.
>>
File: 1457455212451.png (1MB, 1214x778px)
>>55711093
>Ultimately it remains to be seen just what the impact of this move will be. Hardware scheduling makes all the sense in the world for complex compute applications, which is a big reason why Fermi had hardware scheduling in the first place, and for that matter why AMD moved to hardware scheduling with GCN. At the same time however when it comes to graphics workloads even complex shader programs are simple relative to complex compute applications, so it’s not at all clear that this will have a significant impact on graphics performance, and indeed if it did have a significant impact on graphics performance we can’t imagine NVIDIA would go this way.

>What is clear at this time though is that NVIDIA is pitching GTX 680 specifically for consumer graphics while downplaying compute, which says a lot right there. Given their call for efficiency and how some of Fermi’s compute capabilities were already stripped for GF114, this does read like an attempt to further strip compute capabilities from their consumer GPUs in order to boost efficiency. Amusingly, whereas AMD seems to have moved closer to Fermi with GCN by adding compute performance, NVIDIA seems to have moved closer to Cayman with Kepler by taking it away.

important part here:
>NVIDIA is pitching GTX 680 specifically for consumer graphics while downplaying compute
>downplaying compute

it's also why in nvidia's "dx12 do's and don'ts" they state not to run too many compute + graphics operations at the same time.
>https://developer.nvidia.com/dx12-dos-and-donts
>>
>>55711086
>muh shill paste
AMD has overhead even in vulkan.
>>
>>55711103
their hardware cannot handle it, while amd's gcn not only can, but shines brighter when it's under heavy async load.

here's some more interesting reads on nvidia's async debacle:
>http://www.overclock.net/t/1606224/various-futuremarks-time-spy-directx-12-benchmark-compromised-less-compute-parallelism-than-doom-aots-also

yes it's mostly focused on the time spy issue regarding their usage of async, but it does delve into nvidia's architecture limitations.

also the use of the hardware scheduler is why amd gpus have used more power and run hotter than nvidia's since the kepler and gcn 1 days. if nvidia slapped a hardware scheduler on pascal, their gpus would not just use as much power, but most likely more than amd's, since nvidia is on 16nm instead of 14nm like amd.
>>
>>55711110
>In the previous pages, we compared the performance of Rise of the Tomb Raider's original Direct 12 patch and the performance of the game's newest DirectX 12 implementations, seeing a higher minimum framerate performance in the majority of cases and improved performance in all cases for AMD's R9 Fury X GPU.

>Now we will compare the DirectX 12 and DirectX 11 versions of the game with this new patch, as while the DirectX 12 version has improved we need to know if this new version actually provides users better performance than what we can achieve with the older DirectX 11 API.

>With AMD's R9 Fury X we see a performance improvement when using DirectX 12 in all cases, whereas Nvidia's GTX 980Ti actually sees a performance decrease in all cases except 1080p performance, where we expect that the CPU performance benefits of DirectX 12 may have had more of a benefit than any potential gains in GPU performance.

>All in all it seems that those with AMD GCN 1.1 or newer GPUs will be better off playing Rise of the Tomb Raider in DirectX 12 whereas Nvidia users are better off using DirectX 11.

>http://www.overclock3d.net/reviews/gpu_displays/rise_of_the_tomb_raider_directx_12_performance_update/5

what's important to note is that rise of the tomb raider is an nvidia sponsored, nvidia gameworks title. so yes, the 980 ti did come out ahead at 1080p, and you can argue hurr dx12 don't matter, but what's important is that nvidia didn't benefit from dx12 at all, and at higher resolutions, suffered regressions.
>>
>>55708184
The 480 was O V E R H Y P E D

period
>>
File: LL.png (109KB, 623x900px)
>>55711120
>http://www.pcper.com/reviews/Graphics-Cards/3DMark-Time-Spy-Looking-DX12-Asynchronous-Compute-Performance

when we take a look at time spy we can see some pretty interesting results.

when we look at the total % increase with async on & off one thing is made clear, amd wins hands down. even the humble $200 480 nets a higher increase in performance with async on than the 1080. maxwell flat out did not receive a boost at all.

there's a reason for that. according to pcper:
>Now, let’s talk about the bad news: Maxwell. Performance on 3DMark Time Spy with the GTX 980 and GTX 970 are basically unchanged with asynchronous compute enabled or disabled, telling us that the technology isn’t being integrated. In my discussion with NVIDIA about this topic, I was told that async compute support isn’t enabled at the driver level for Maxwell hardware, and that it would require both the driver and the game engine to be coded for that capability specifically.

which shouldn't come as a surprise, maxwell can't truly do async at all. it's terrible at switching back and forth between compute and graphics as noted above. pascal does bring some improvements in this regard but there is more to the story.

the problem with time spy is that it doesn't fully take advantage of async. they designed async in that benchmark the way nvidia stated in their "dx12 do's and don'ts":
>Try to aim at a reasonable number of command lists in the range of 15-30 or below. Try to bundle those CLs into 5-10 ExecuteCommandLists() calls per frame.

as noted above with the overclock.net link, time spy doesn't fully utilize async. it doesn't use a lot of it. it also doesn't use a lot of parallelism, meaning it's not throwing out a lot of compute & graphics operations at the same time. it feeds it mostly compute, sending a few compute operations at once, then switches to a little graphics, then back to compute. it does it in a way that doesn't oversaturate pascal's dynamic preemption.
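The async on/off comparison the post leans on reduces to a simple percentage per card. A rough sketch with made-up fps values (the real measurements are in pcper's review, not reproduced here):

```python
# Computing the async-on vs async-off uplift discussed above.
# The fps values are hypothetical placeholders, not pcper's data.
def async_uplift(fps_off, fps_on):
    """Percent gained by enabling async compute."""
    return (fps_on - fps_off) / fps_off * 100

cards = {
    #            async off, async on   (placeholder numbers)
    "RX 480":   (40.0, 44.0),
    "GTX 1080": (80.0, 84.0),
    "GTX 980":  (60.0, 60.0),   # maxwell: effectively unchanged
}

for card, (off, on) in cards.items():
    print(f"{card}: {async_uplift(off, on):+.1f}%")
# with these numbers the 480's relative gain (+10%) beats the
# 1080's (+5%) even though the 1080's absolute fps is higher
```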
>>
File: domvulk490.png (637KB, 1920x1080px)
>>55711136
when we look at pic related with doom - vulkan you might notice something: the 1060 wins with weaker processors, but loses to the 480 with stronger processors.

with the way gcn works, it scales more with a stronger processor than a weaker one.

in doom - vulkan, async is enabled on amd cards and async is used HEAVILY. the older cpus cannot feed the aces and cus fast enough. you still get a boost, but not as big. slap in a 6700k and, well, in doom it turns that $200 card into a $400 one. it's able to keep the 480 fed with plenty to do.

nvidia on the other hand doesn't have async enabled. id disabled it since it gives nvidia cards a regression, and they're waiting for nvidia to release a driver to re-enable async for nvidia cards. so the only benefit nvidia gets is the generally lower driver overhead, which is why nvidia gets a bigger boost with older cpus and not newer, stronger ones. the older ones cannot keep up with the driver overhead, so switching over to vulkan frees up a lot of resources for older cpus, while the 6700k is strong enough that it doesn't matter, so nvidia sees less of a boost.

that's why you'll notice the stronger the processor becomes, the less of a boost the 1060 receives, and the bigger the boost the 480 starts to receive.

gcn is built to be fed, and to utilize async. the more you feed it, the more powerful it becomes. give it a ton of things to do and it shines. vulkan / dx12 will always give amd a boost, but the stronger the cpu, the more boost you'll get.

if you're building a pc now, a simple 6100 is more than enough for a 480. if you're on a first generation i7, it'd be best to upgrade, regardless of whether it's amd or nvidia. if you're on a 2600k sandy, ivy, or even haswell, you're fine and don't need to upgrade. you will see a stronger boost with the 480 than the 1060 in this title.

what i love about this one is that it shows these older processors bottlenecking the $200 480.
>>
>>55711120
also to note, one of the reasons why nvidia still had a lead was the light usage of async and high levels of tessellation.

dx12, and newer renditions of gcn, have improved tessellation for amd, but nvidia still maintains a lead here.

maxwell simply cannot get into async at all. its preemption is terrible.

it's why nvidia has kept their mouth shut about maxwell and async.

pascal does improve with dynamic load balancing preemption, but it's a bandaid.
>>
>>55711150
>buy high end i7 with our poo in loo 480
kys
>>
>>55711172
also, a way to help people understand the difference between amd's gcn architecture and nvidia's kepler-and-above architectures like pascal is to think of it as a dual core (amd gcn) vs a single core (nvidia kepler + maxwell), and with pascal, a single core + hyperthreading.

it's like using a dual core but running single threaded games on it. one core sat there going unused. the dual core was going underutilized. this was the case with dx11 titles.

with dx12 & vulkan, both cores finally have the opportunity to be used if developers utilize it. tapping that second core unleashes a ton of extra performance. nvidia is stuck on a single core design. pascal has hyperthreading but it's nowhere close to the performance of a true dual core.

it might not be the best technical way to describe it, but it gives you a rough idea.

pascal cannot handle too many queues of compute + graphics. it also can't handle that much async of either compute or graphics alone. if we go back to time spy again, if futuremark put in more async, pascal would slow to a crawl.

gcn does have a ceiling, but it's extremely high compared to pascal.
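The dual-core analogy can also be put in schedule-length terms: run serially, graphics and compute work add up per frame; run concurrently (what async compute enables), frame time is bounded by the longer of the two in the ideal case. A toy calculation with made-up per-frame costs:

```python
# Toy schedule-length model of the analogy above: graphics and
# compute work done serially vs overlapped. The millisecond costs
# are invented for illustration.
graphics_ms = 12.0  # per-frame graphics work
compute_ms = 4.0    # per-frame compute work (e.g. post-processing)

serial = graphics_ms + compute_ms          # one queue, back to back
overlapped = max(graphics_ms, compute_ms)  # idealized perfect overlap

print(f"serial:     {serial} ms/frame")           # 16.0
print(f"overlapped: {overlapped} ms/frame")       # 12.0
print(f"speedup:    {serial / overlapped:.2f}x")  # 1.33x
```

Real overlap is never perfect, so this is an upper bound on what filling the "second core" can buy.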
>>
File: 1463180580467.jpg (204KB, 1091x774px)
>>55711177
ignored
>if you're building a pc now a simple 6100 is more than enough for a 480. if you're on a first generation i7, it be best to upgrade. regardless if its amd or nvidia. if you're on 2600k sandy, ivy, or even haswell, you'll fine and don't need to upgrade. you will see a stronger boost with the 480 than the 1060 in this title.

didn't know an i3 was a high end i7. let alone that an i5 4670k was a high end i7.
>>
>>55711200
or I can run my old ass cpu with a 1060 and get better performance
kys
>>
>>55711086
>>55711093
>>55711103
>>55711110
>>55711120
>>55711136
>>55711150
>>55711172
>>55711192

This is probably the biggest lowlife I've ever witnessed on /g/. You've been spamming the same shit for days now and each time it gets longer and longer. You're saying shit people already know about.
>>
File: pepediglett2.png (228KB, 755x755px)
now i want to talk about ashes dx12 with the 480 vs the 1060.

its sorta the elephant in the room.

the 480 and the 1060 are neck and neck. the 480 appears to come out in the lead, but barely. 1 - 4fps. why is this?

well one thing to note is that ashes uses async HEAVILY, but more on the compute side than compute and graphics. due to this, part of gcn goes underutilized. the 1060 does receive a boost, all pascal cards do in ashes, but not that much. about 5ish percent overall.

gcn still handles compute-only async extremely well. the fury nano for example nets a 12% boost with async in ashes.

so what's going on with the 480?

well, the 480 only has 40 compute engines total (36 cus and 4 aces) while a card like the fury nano has 64 total. so with the load being primarily on the compute side, the 480 will get a boost, but not as great as if graphics were in the mix as well.

the 1060 gets a minor boost because it doesn't have to switch between graphics and compute at the same time much.

but, there could be something else wrong.
>>
File: 1468021976248.jpg (375KB, 720x1034px)
>>55711287
as i was reading extremetech's article on doom and vulkan, i came across this bit:

>http://www.extremetech.com/gaming/231527-new-doom-update-adds-vulkan-support-amd-claims-substantial-performance-boost
>The RX 480 is just one GPU, and we’ve already discussed how different cards can see very different levels of performance improvement depending on the game in question — the R9 Nano picks up 12% additional performance from enabling versus disabling async compute in Ashes of the Singularity, whereas the RX 480 only sees a 3% performance uplift from the same feature.
>the R9 Nano picks up 12% additional performance from enabling versus disabling async compute in Ashes of the Singularity, whereas the RX 480 only sees a 3% performance uplift from the same feature.
>R9 Nano picks up 12%
>RX 480 only sees a 3%

something else is going on, because even with fewer compute engines total, the 480 should be receiving a higher boost. 3% is incredibly low for gcn.
>>
>>55711275
Congratulations, you've spotted the paid shill.
>>
>>55711275
It's noon in india. Pajeet is at work.
>>
File: paidshill.png (423KB, 1920x1080px)
>>55711305
yup im a paid shill.

look at the card amd gave me.
>>
File: smug horror.png (121KB, 354x347px)
>>55711342
That just proves you're a paid one and not stupid enough to use amd hardware. Much like how AMD uses intel cpus for their internal tests.
>>
>>55711342
>hurr you still bought a 1080 gg
why shill for amd but own a 1080 you say?

i'm a dissatisfied customer. not only am i enduring this issue that ALL pascal cards face (yes, even the 1060):

>https://forums.geforce.com/default/topic/951723/geforce-drivers/announcing-geforce-hotfix-driver-368-95/
>https://forums.geforce.com/default/topic/941579/geforce-1000-series/gtx-1080-high-dpc-latency-and-stuttering/
>http://www.overclock.net/t/1605618/nv-pascal-latency-issues-hotfix-driver-now-available

but also THIS issue that's been going on since the launch of pascal, two months now, and NVIDIA STILL CAN'T FIX IT.
>https://forums.geforce.com/default/topic/939358/geforce-1000-series/gtx-1080-flickering-issue/

i can't run too much stuff in the background or i start getting "lag" or audio distortions, i have to set my monitor to 60hz then back to 144hz on every boot and keep my gpu running in high performance mode to help stop the flickering, and turn off a ton of 3d accelerated stuff like fancy desktop effects and 3d acceleration in my browser.

what's more sad is the hotfix DIDN'T FIX ANYTHING for the latency issue.

after reading around ocn and looking into the async stuff i'm just over nvidia.
>>
>>55711418
which is also why i'm selling my 1080 and picking up a 480 until vega comes out.

nvidia quality has gone down the shitter and they had no vision. they downplayed compute. they didn't take the time to think about the future and just focused on dx11 and single-queue workloads instead of being innovative and trying something new.

oh and their "muh efficiency" is total horse (poo) because all they did to get that WAS TO REMOVE THE HARDWARE SCHEDULER because they wanted dx11 to stay the api of choice for the next ten years.

if the 1060 had a hardware scheduler it would probably have 5 watts more (155) tdp than the 480 (150) since its on 16nm instead of 14nm.
>>
File: lol.gif (2MB, 200x150px) Image search: [Google]
lol.gif
2MB, 200x150px
>>55711459
>i'm selling my 1080 and picking up a 480 until vega comes out
There's a limit to bullshit you know.
>>
File: 1461156770321.jpg (74KB, 739x600px) Image search: [Google]
1461156770321.jpg
74KB, 739x600px
>>55711485
its not bullshit

i would keep the 1080 till vega comes out, but can you deal with FLICKERING EVEN ON THE DESKTOP every single day? i can't even stream netflix to my tv and play a game without enduring audio crackling nonsense.

>hurr i have pascal not on my system
thats great, but you're still affected by it. run latencymon; nvidia confirmed it themselves, everyone is affected by it. but it takes a certain amount of stuff running to get latency high enough. on some systems like mine netflix and a game is enough, others need netflix, a game, twitch, and streaming something.
>>
>>55711342
>tfw amdrones are trying to copy me

Damn I didn't realise I hit a nerve.
>>
Nvidia waited for the 480 then blew it the fuck out by pricing the 1060 to match. Gotta say thanks Raj
>>
1060 is a full performance tier above the 480 once you OC it. it's closing in on 980ti/1070 performance. shame that pascal doesn't OC even higher, maybe volta will improve things on that front.

maxwell was also the same way. cards like 970 and 980 could punch well above their weight compared to the stock speeds.
>>
File: 1462660091644.png (91KB, 1716x441px) Image search: [Google]
1462660091644.png
91KB, 1716x441px
>>55711485
M8 they're all assmad that I keep reminding them that amd are shit and that as an amd owner I'd switch it for nvidia any day. They're trying to fight back with falseflagging but it's failing spectacularly.
>>
File: DSC_0788.jpg (2MB, 2304x1536px) Image search: [Google]
DSC_0788.jpg
2MB, 2304x1536px
>>55711572
you hit a nerve because this is why nvidia gets away with this nonsense. people downplay their issues and look the other way.

>pic related
i upgraded to the 1080 from dual 980's

this entire year nvidia has been plagued by awful driver releases. i've used nvidia since 2004 with my fx5600. i had a few amd cards over the years, a 4850 and a 6950, and never had issues with them. but i never had issues with nvidia either. i always kept to nvidia because the entire industry ranted and raved about them.

but since windows 10 their quality has gone down the shitter. from sli regressions to improvements, then random regressions again, to the black screens i kept getting back in january with that month's driver.

and now this nonsense with the flickering and latency.

then most of all, i listened to the people that kept defending that maxwell can do async. now with time spy it CLEARLY shows maxwell cannot do async. it took them nearly a month to respond about the flickering, and a similar amount of time about the latency. from nvidia: lies, half truths, 970 3.5gb gate, and honestly, worse, silence about the issues. absolute silence. it's like pulling teeth to get them to respond.
>>
>>55711654
they say amd has driver issues, but as of late with nvidia, amd can't be worse.

but at least amd has been more open over the past few years and their cards can actually do async.

and they have been more innovative than nvidia. if it wasn't for them, we wouldn't even have dx12 and vulkan.
>>
>>55708349
>i7-6700
>absolute top of the line i7
Wew lad it's like broadwell e is nonexistent lemme go trade my i7-6950 for a 6700
>>
>>55711654
>pic related

At least try and make it convincing like that Romanian guy or wherever he was from.

>having access to high end hardware =/= owning it
>>
File: 71448.png (22KB, 650x400px) Image search: [Google]
71448.png
22KB, 650x400px
>>55711682
single threaded performance m8

it still matters.

plus, with dx12 scaling, more than 4 cores doesn't necessarily add much improvement.

>http://www.anandtech.com/show/8962/the-directx-12-performance-preview-amd-nvidia-star-swarm/4

there are a few titles where 6 cores gain a boost, but then you run into another wall where 8+ cores don't continue to increase performance.
>>
>>55709887
10% at 60fps being 6fps typical discrepancy is relatively negligible
If it were at 144fps it would make more sense to consider it a big deal, but neither card can run 144Hz 1080p ultra minimum and neither can run that at 1440p ultra as a minimum.
Nothing wrong with the cards, they're just not that big of a deal. Waiting for vega to compare price wise to 1080 or 1080 Ti when it releases and if it lacks HBM2 or h/w level asynchronous compute I'll likely go AMD if they rock HBM2
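For anyone wanting to sanity-check the fps math above, a throwaway Python snippet (illustrative only; the 60fps and 144Hz targets are just the ones mentioned in this post):

```python
# Absolute frame-rate gap implied by a relative performance difference.
# Illustrative numbers only, not benchmark results.

def fps_delta(base_fps, pct_gap):
    """Return the fps difference for a given percentage gap at a target rate."""
    return base_fps * pct_gap / 100.0

print(fps_delta(60, 10))   # 6.0 fps at a 60fps target
print(fps_delta(144, 10))  # 14.4 fps at a 144Hz target
```

Same relative gap, more than double the absolute fps difference at the higher refresh target.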
>>
File: DSC_0789.jpg (437KB, 1304x1472px)
DSC_0789.jpg
437KB, 1304x1472px
>>55711711
its all true m8.
>>
File: DSC_0910.jpg (2MB, 2304x1536px) Image search: [Google]
DSC_0910.jpg
2MB, 2304x1536px
>>55711766
>>
>>55711766
You could have taken these pictures out of a guts thread, they don't prove shit.
>>
File: DSC_0909.jpg (2MB, 2304x1536px) Image search: [Google]
DSC_0909.jpg
2MB, 2304x1536px
>>55711773
some more. took these before i sold them.
>>
>>55709927
All AMD GPUs get better performance in DX12 from newer CPUs due to the CPU being able to feed more data to the GPU for asynchronous compute etc.
NVidia cards get less of a boost from the CPU due to the lack of h/w level asynchronous compute
>>
File: DSC_0902.jpg (2MB, 2304x1536px) Image search: [Google]
DSC_0902.jpg
2MB, 2304x1536px
>>55711781
nope, they're all mine
>>
File: 1373591449824.png (93KB, 229x219px) Image search: [Google]
1373591449824.png
93KB, 229x219px
>>55711781
kek, oh so you sold them. Oh well that obviously explains why you don't have them anymore! Say, does your uncle work at Nvidia by any chance?
>>
>>55711766
>>55711773
>taking 50 different pics of your hardware

This is exactly how I know you don't own this shit. No one (right in the head at least) takes a mass of pictures of their 980 and shit. You've probably downloaded these pics from neogaf or some shit.

There's falseflagging, then there's trying way too hard. I'll give you a 2/10 for the effort though.
>>
File: DSC_0903.jpg (1MB, 2304x1536px)
DSC_0903.jpg
1MB, 2304x1536px
>>55711795
>>
>>55711806
>>55711798
guess you missed the part where i said i sold them?

who knows, maybe i took pics to show to buyers.
>>
File: DSC_0906.jpg (2MB, 2304x1536px) Image search: [Google]
DSC_0906.jpg
2MB, 2304x1536px
>>55711812
>>
File: DSC_0709.jpg (2MB, 2304x1536px) Image search: [Google]
DSC_0709.jpg
2MB, 2304x1536px
>>55711824
some water cooling parts i sold

now i'm just posting photos i've taken of shit i used to have
>>
File: DSC_0908.jpg (2MB, 2304x1536px) Image search: [Google]
DSC_0908.jpg
2MB, 2304x1536px
>>55711837
>>
>>55711812
So you sold your whole pc 6 months after buying all the components. Yeah nah.
>>
>>55711850
>>
>>55711797
It's blatant that he owns or works for a hardware retailer of some sort. It's easy to pick up high end hardware like this. I did a similar thing for my job experience when I was in school. Worked for some local pc repair shop/builder. Got to use the 780 ti like a week after it came out.
>>
>>55711876
>the butler shows pics of his mansion
>>
File: image.jpg (22KB, 325x252px) Image search: [Google]
image.jpg
22KB, 325x252px
>>55711177
>$300 CPU
>high end
Kys or go get a damn job nigger
>>
File: yup.png (36KB, 943x319px) Image search: [Google]
yup.png
36KB, 943x319px
>>55711854
>>
File: yup5.png (41KB, 743x497px)
yup5.png
41KB, 743x497px
>>55711920
>>
>>55711912
>i7
>480poo
kys
>>
File: yup1.png (40KB, 953x330px) Image search: [Google]
yup1.png
40KB, 953x330px
>>55711920

>>55711876
i own. i'm just wealthy.

ill post more of my shit like the 980 ti i had for my micro-atx build.
>>
>>55711904
Basically. He doesn't own any of this shit. Notice that the motherboards are different in some of the pics. He's just an intern who is posting from work and posting pics of hardware he's been using. No one in their right mind spends $1000 on sli 980s to sell them a year later and buy a $600 1080, only to sell that a couple of months later to buy a budget mainstream card.
>>
>>55711947
Nigga you ain't convincing anyone. It's obvious you own or work for a company and build pc's upon request.

Source: my 4th uncle has the same home-run business
>>
File: yuppppp.png (87KB, 771x749px) Image search: [Google]
yuppppp.png
87KB, 771x749px
>>55711948
>>55711948
>>
File: yuppppp1.png (24KB, 756x205px)
yuppppp1.png
24KB, 756x205px
>>55711979
>>
>>55711947
Nothing you post has any value, you're not convincing anyone. Post actual proof with a time stamp or fuck off.
>b-but I've sold them
sure you did, shill.
>>
>>55711979
Thanks for confirming >>55711876
>>
File: yuppppp2.png (108KB, 779x843px) Image search: [Google]
yuppppp2.png
108KB, 779x843px
>>55711995
>>55711990
well i guess i have since 2007
>>
>>55711979
Lol I can drive down to my uncle's place an hour away and log into his pc to screenshot 10x the amount of i7 deliveries you have in this pic. Maybe I should go and quickly build a cf fury build and take pics to falseflag and post on 4chan whilst I'm there...

Hmm, I might do that in the future. Cheers for the idea.
>>
File: yuppppp3.png (86KB, 732x702px) Image search: [Google]
yuppppp3.png
86KB, 732x702px
>>55712017
oh 2008?
>>
>>55711920
>>55711934
>>55711979
>>55711989
>>55712017
>filenames
Do you have shill folder?
>>
File: yuppppp4.png (69KB, 743x505px) Image search: [Google]
yuppppp4.png
69KB, 743x505px
>>55712030
maybe 2010?
>>
File: yuppppp5.png (110KB, 741x876px)
yuppppp5.png
110KB, 741x876px
>>55712044
maybe it was 2011 when i replaced my 980x for a 2500k?
>>
File: yuppppp6.png (23KB, 739x157px) Image search: [Google]
yuppppp6.png
23KB, 739x157px
>>55712064
maybe 2012 when i replaced my 570 msi twin frozr for a 670 ftw?
>>
>>55711940
I don't get it, are you saying you want to throw $1200 into the DoD 8570.1 compliant (S) shredder for a Pascal Titan X brick on launch?
I'll stick with a cheap band-aid till all the major releases this cycle have happened and prices stabilize like someone who isn't retarded.
Just like why I'm not going to buy a brand new car, it's retarded, get it with 10k miles on it for 65% MSRP
>>
File: yuppppp7.png (50KB, 740x320px)
yuppppp7.png
50KB, 740x320px
>>55712077
or 2014 when i upgraded to haslel?
>>
>>55708184
Shills galore.. Same misleading image is posted every day.
>>
File: yuppppp8.png (34KB, 746x238px) Image search: [Google]
yuppppp8.png
34KB, 746x238px
>>55712091
i sure did love that 770 ftw
>>
>>55712064
A 980x overclocked to 4.5 performs on par with a stock 5820k.

You've made some extremely poor hardware decisions.
>>
File: yuppppp9.png (35KB, 919x227px) Image search: [Google]
yuppppp9.png
35KB, 919x227px
>>55712105
3.5gb get gud
>>
>>55709645
Don't forget gimped nvidia drivers once the next gen comes out.
>>
File: yuppppp10.png (50KB, 742x350px) Image search: [Google]
yuppppp10.png
50KB, 742x350px
>>55712115
eh wasn't as bad as when i decided to try faildozer.
>>
>Leave to go get some food, watch TV and a few youtube vids
>tfw this amdshill is still posting his business purchases 20 mins later

the desperation is REAL. nvidia have brought these people to do extreme things lmao
>>
File: yuppppp11.png (27KB, 918x217px) Image search: [Google]
yuppppp11.png
27KB, 918x217px
>>55712139
i just want to take a moment and talk about these flip flops.

they're simply amazing. i still have them today and they still feel and look brand new. they are extremely comfy.
>>
>>55712146

actually i was wrong. its been nearly 2 hours. jesus this guy is desperate
>>
>>55712146
I used to be sell parts as a side job and I would have *80s in quad sli. It was so fun pretending to be a richfag.
>>
>>55712164
autism is hard, isn't it dude?
>>
File: yuppppp12.png (28KB, 936x243px) Image search: [Google]
yuppppp12.png
28KB, 936x243px
>>55712164
i cannot destroy these sneakers. i try, and try, but they refuse to deteriorate. i've taken them into the ocean, the snow, and the high desert. but they refuse to deteriorate.

best $26 ever. would have paid $90.
>>
File: 1457417395425.png (41KB, 802x799px) Image search: [Google]
1457417395425.png
41KB, 802x799px
>>55708349
DELETE THIS
>>
File: yuppppp13.png (28KB, 925x219px) Image search: [Google]
yuppppp13.png
28KB, 925x219px
>>55712191
these were garbage. absolute garbage.
>>
>>55712172
>tfw I'm probably 100x richer (family) than this loner virgin trying to get friends on 4chan by showing his amazon purchases

feels good man
>>
File: yuppppp13.png (27KB, 933x235px) Image search: [Google]
yuppppp13.png
27KB, 933x235px
>>55712218
probably
>>55712211

this shit actually works amazing. really shines the tires nicely and adds a uv protectant. stays on well in rain. can go a week in rain before you need to reapply.
>>
>>55708349
Would a 6600K be able to handle driver overhead?
>>
File: yuppppp13.png (46KB, 932x394px) Image search: [Google]
yuppppp13.png
46KB, 932x394px
>>55712249
really, it works wonders. bought all that amazon had in stock.
>>
>>55712258
see
>>55711150
>>
>>55708349
I know the 480 is CPU dependent, but are there any benches using current gen or last gen i3s or i5s, not those POS.

Hell there could even be an improvement for 480s if they used the e-broadwell models.

>>55712258
Im assuming any of the current gen processors can handle the 480 well enough.
>>
>>55709927
I'd grab the 480, personally. Your CPU won't be a bottleneck for a few years still, and honestly the 1060's gonna be a card you wish you hadn't bought a year from now.

AMD cards typically improve in performance as times goes by with driver updates, and the 480's set for Vulkan/dx12 at the hardware level while the 1060 is stuck doing it in software, and in fact isn't even doing async compute to begin with, but just quickly switching between compute tasks. It's going to be a lot more obvious how much of a difference this makes once some games come out that really stress async.

On top of that, the 1060 cannot SLI while the 480 not only has CF, but also gains significant performance improvements with the new vulkan/dx12 API's.

The 480 is about as futureproof as you could get out of a middle of the road GPU, and at a lower price.

Listen to the nvidia shills, but caveat emptor - we're talking about a company that's behind gameworks and the 970's gimped memory.
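The hardware-queue vs task-switching distinction in this post can be sketched with a toy timing model. This is purely illustrative; all the numbers and functions below are made up for the sketch, not anyone's actual driver behavior:

```python
# Toy model: overlapping graphics + compute (independent hardware queues)
# vs time-slicing them on one engine with a per-switch cost.
# All values are invented for illustration, not measurements.

def serialized_frame_ms(graphics_ms, compute_ms, switch_ms=0.1, switches=10):
    """Work runs back to back, plus a small penalty for each context switch."""
    return graphics_ms + compute_ms + switch_ms * switches

def concurrent_frame_ms(graphics_ms, compute_ms):
    """Independent queues overlap, so the frame takes roughly as long as the
    longer of the two workloads."""
    return max(graphics_ms, compute_ms)

g, c = 12.0, 4.0  # hypothetical per-frame workloads in milliseconds
print(serialized_frame_ms(g, c))  # 17.0
print(concurrent_frame_ms(g, c))  # 12.0
```

The gap between the two grows as games push more compute work per frame, which is the "shows their age" argument in a nutshell.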
>>
>>55712258
its not driver overhead, its a pure bottleneck. the older processors from 2009 - 2010 cannot keep up with the 480.

any i7 from ivy and any i5 from haswell will be plenty. even skylake i3 can handle it.

a 2600k at 4.4ghz+ should be able to handle it.
>>
File: yuppppp13.png (73KB, 943x461px) Image search: [Google]
yuppppp13.png
73KB, 943x461px
if anyone owns a roomba these batteries are great replacement ones. A LOT better than the standard ni-mh ones.

i may be crazy, but all the stuff i wrote about pascal and nvidia with async is true. i'm not bullshitting. most of the stuff i wrote about i had links for as proof.

if you want a card to last awhile it'd be better to grab a 480. its more future proof. not everyone is like me and can go around buying new stuff freely.

i'm just shitposting now to shitpost on the shitposters who shitposted on me. and i'm bored.
>>
>>55711074
I'm going for a 480 and I've got an i5 3570K

I'm not worried, and nobody else should be either. The "budget cpus" that nvidiafags keep harping on about are literally 6 year old processors that sell for $50 now.

Would I spend $300 on a CPU and $200 on a GPU? Yes, I would. So would a lot of other people.

I don't think nvidia realizes though, that all these negative threads about AMD are actually promoting their cards.

You don't see 10 billion threads talking about the 1060 or even the 1070 or 1080, but you do see 10 billion threads talking about the 480. It's THE hot card right now.
>>
>>55712258
Always get the faster card at the price point you're willing to spend at. Future proofing is a meme. Just buy your shit now and get the best performance you can. Take this however you will.
>>
>>55712146
He's either some wanna be master trolle or desperate amdfag.
>>
>>55712344
i'm the crazy dude here.

its fun to kick your competitor in the stomach when they're the underdog. that and a lot of people enjoy threads like this to argue.
>>
File: gta.png (3MB, 1874x998px) Image search: [Google]
gta.png
3MB, 1874x998px
>>55712344

enjoy your slideshow performance in cpu intensive games. this was with a 970 as well which doesn't have shitty dx11 drivers and is actually faster than the 480 in this game too.
>>
My only question is how many games will actually use dx12 in the future
>>
>>55712411
that isn't what's going on here >>55711150

amd does have driver overhead in dx11 titles thanks to the way gcn works, but they do not suffer the same issue in dx12. there is some driver overhead, but nowhere close to as much as in dx11.

nvidia actually has more driver overhead in dx12 than amd, thanks to nvidia using software schedulers.
>>
>>55708184
480 did just fine. Naturally it will be very easy to make a better version if it comes out later since you can tune your product to be slightly better than the competitors. It's nvidia who fucked up because they're charging more for being slightly better, when they could've charged the same price.

Reviews for the 480 4gb nitro are already out, it does slightly better than reference 1060, for $30 less.
>>
>>55712442
>all consoles running dx12 or vulkan
>all phones running vulkan
>all source engine games running vulkan
>all new games running dx12 or vulkan
>valve promoting vulkan hard as fuck because they need to avoid microsoft lockins
>linuxfags promoting vulkan because it's open source
>vulkan running on goddamn near everything also makes it easy to port between goddamn near everything

The answer is that literally every game that goes into development will now create either a vulkan or a dx12 version, some might do both but I imagine most will just do vulkan.
It's going to be yuuuge.
>>
File: 1467374122669.jpg (559KB, 1100x1002px) Image search: [Google]
1467374122669.jpg
559KB, 1100x1002px
>>55711275
>pajeet is triggered
>>
>>55712465
>Reviews for the 480 4gb nitro are already out, it does slightly better than reference 1060, for $30 less.

you mean it matches the 1060 for £40 more.
>>
>>55712465
>>55712499
Shit man, in my shithole ASUS reference 480 is more expensive than a palit superjetstream 1060

Its like Im forced to take Nvidia every single time, no point in getting AMD
>>
>>55712486
>all consoles running dx12 or vulkan
are you high? no console uses vulkan.

>all phones running vulkan
wrong again

>all new games running dx12 or vulkan
also wrong. there are a bunch of games i can list off the top of my head which haven't said anything about vulkan/dx12 support and that probably won't have it.

>The answer is that literally every game that goes into development will now create either a vulkan or a dx12 version

and dev cycles last at least 3 years. dx12/vulkan won't be the norm till at least end of 2018.
>>
>>55712523
I have a 480 and honestly you're better off getting the 1060

I wish it wasn't so but this thing is a fucking turd, I didn't know I needed to also buy a top of the line cpu in order to max out games at 1080p, making this "budget card" a fucking horrible deal.
>>
>>55712526
are you high? vulkan and dx12 are 90% identical. actually, one can say 95% identical. they are awfully similar to one another.

i mean come on, they're both based off of amd's mantle.
>>
>>55712534
what cpu m8?
>>
>>55712523
amd fucked their prices outside of usa. the aib 1060s are cheaper than the reference 480s.
>>
>>55712557
>vulkan and dx12 are 90% identical
Do you have the DX12 source code? Both APIs provide low level access, yes. Working the same, nope.
>>
>>55712557
>vulkan and dx12 are 90% identical. actually, one can say 95% identical

HAHAHAHAHAHAHAHAHAHAHAHA

>they're both based off of amd's mantle.

HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA
>>
>>55712571
I own an i3 4150, it's nothing special
>>
>>55712586
>Do you have DX12 source code? Both apis porvidie low level access yes.

>source code for an '''api'''

are you retarded?
>>
>>55712557
>vulkan and dx12 are 90% identical

That's like saying OGL and DirectX are 90% identical because they're both API. Neck yourself.
>>
>>55708184
>here, let's compare a whole bunch of games we've made up-to-date drivers for

why is the whole benchmarking "tech review" industry such cancerous shills? they literally all just copy each other and never actually try to bench obscure shit
>>
>>55712526
>all phones running vulkan
hes right, for android. google is adopting vulkan to replace opengl for android.
>>
>>55712344
>Would I spend $300 on a CPU and $200 on a GPU? Yes, I would. So would a lot of other people.

you're delusional. on /g/ and leddit the idea of buying an i7 for a gaymen machine is usually laughed at. yet in builds with an AMD GPU not getting an i7 is a huge detriment to real world performance.
>>
>>55712534
tfw I am the niche that has a top end CPU but don't care for $400+ graphics
>>
>>55712605
no he's not. android =/= all phones. iphone uses metal and god knows what shitty windows phone uses, but not all phones use vulkan, so he's wrong.
>>
>>55712486
>>all phones running vulkan

only the newest phones, which 0.1% of the market owns, and mobile tilers are completely different design-wise from a modern simt dgpu.

>>all source engine games running vulkan

you clearly haven't seen how terrible valve's vulkan renderer is, it diminishes performance and causes horrible instability/stuttering. and they've been working on it for *years* at this point.
>>
Anyone have experience with galax cards? Reviews of 970s and 980s seem positive but none for the 1060 so far
>>
>>55712587
they are

>http://www.pcworld.com/article/2894036/mantle-is-a-vulkan-amds-dead-graphics-api-rises-from-the-ashes-as-opengls-successor.html

and microsoft adopted A LOT of features from amd's mantle, and much of how they designed it was based off the way mantle did it.

also according to developers, like bethesda:
>Moving forward to the modern graphical APIs, Vulkan and DirectX 12, Bethesda does state that both APIs operate in a very similar way and "inherited a lot from AMD's Mantle API efforts" making both APIs fairly similar to work with.
>http://www.overclock3d.net/news/gpu_displays/id_software_explains_why_they_chose_opengl_vulkan_over_directx11_12/1


> DirectX 12 and Vulkan are conceptually very similar and both clearly inherited a lot from AMD’s Mantle API efforts. The low-level nature of those APIs moves a lot of the optimization responsibility from the driver to the application developer, so we don’t expect big differences in speed between the two APIs in the future.

>On the tools side there is very good Vulkan support in RenderDoc now, which covers most of our debugging needs. We choose Vulkan, because it allows us to support Windows 7 and 8, which still have significant market share and would be excluded with DirectX 12.

>On top of that Vulkan has an extension mechanism that allows us to work very closely with AMD, NVIDIA and Intel to do very specific optimisations for each hardware.
>>
>>55708297
>>55708316
>>55708327


>he fell for the vulkan meme
>>
>>55712686
vulkan meme looks pretty fucking good to me

enjoy your dx11 only games I guess
>>
>>55712623
>no he's not. android =/= all phones.
>hes right, for android. google is adopting vulkan to replace opengl for android.
i guess you didn't read me stating, for android phones.

no shit android isn't all phones, but hes right for all android phones once google adds in vulkan support.

>https://github.com/googlesamples/android-vulkan-tutorials
>http://arstechnica.com/gadgets/2015/08/android-to-support-vulkan-graphics-api-the-open-answer-to-metal-and-dx12/

they plan on switching android over to vulkan.

so far qualcomm snapdragon chipsets are supported. ironically, nvidia tegra isn't, due to hardware limitations. nexus 9 was going to get support but it's been delayed.
>>
it's a shame that every dx12 and vulkan game so far has favored one brand and been clearly sponsored by nvidia or amd.

these new APIs were supposed to bring nvidia and amd closer to 50/50 and remove shady performance crippling that both sides have been engaging in, instead it's become way more egregious. pretty much one step closer to amd/nvidia exclusive games at this point.
>>
File: amd.jpg (138KB, 653x726px) Image search: [Google]
amd.jpg
138KB, 653x726px
>>55708349
DEL
>>
>>55712668
see >>55712599
>>
https://www.alza.sk/gainward-geforce-gtx-1060-d4369323.htm?o=4

https://www.alza.sk/xfx-radeon-rx-480-custom-backplate-oc-8gb-ddr5-d4349197.htm?o=2

>amd

not even once
>>
>>55712716
not all qualcomm. if they support at least opengl 3.1 they have vulkan support.

whats going on with the tegra is unknown. they don't know if its a hardware issue or what, or if google just doesn't want to support the nexus 9.

so phones from the last two yearsish should support it and all future phones.

with how many people upgrade their phones vulkan adoption in the android market will grow pretty rapidly.
>>
>>55712729
see
>>55712668

vulkan and directx 12 are closer to each other than opengl and directx ever were.

they're awfully similar to one another this time around.

not saying there are no differences, there are, but compared to opengl vs directx 11 and below, there are A LOT fewer of them.
>>
>>55712686
that's a fake benchmark only shills keep posting, it's the same type of bullshit that showed reference 480s ocing to 1500mhz

there's already reviews out that show the nitro+ barely gets a stable oc of 6%
>>
>>55712724
The only brand they favour is whoever can do async compute better. Right now that's AMD, in part because Nvidia gave the finger to hardware support for async compute and instead uses a serialized process that flips between compute and graphics as quickly as it can.

This has very obvious limitations if anything tries to do much simultaneous compute and graphics, and this is where nvidia's cards will show their age while AMD will be rocketing ahead.

AMD still needs better "single core" performance, so to speak, but they've really outmaneuvered nvidia on this one.
>>
https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/29.html
>10% OC
>13% perf gain

https://www.techpowerup.com/reviews/AMD/RX_480/27.html
>5% OC
>5% perf gain

>pascal has inferior ipc they said
>pascal doesn't gain anything from overclocking they said
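Taking the techpowerup numbers quoted above at face value, the gain-per-percent-of-overclock works out like this (a rough ratio; it ignores memory clocks, boost behavior, and game selection):

```python
# Ratio of performance gained to core overclock applied.
# Input percentages are the ones quoted in this post.

def oc_scaling(oc_pct, gain_pct):
    """Performance gained per percent of core overclock."""
    return gain_pct / oc_pct

print(oc_scaling(10, 13))  # 1.3  (1060: 10% OC -> 13% gain)
print(oc_scaling(5, 5))    # 1.0  (480: 5% OC -> 5% gain)
```

A ratio above 1.0 usually means something besides the core clock (boost headroom, memory) is also scaling up.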
>>
File: amd.png (42KB, 653x726px)
amd.png
42KB, 653x726px
>>55712839
DELETE
>>
>>55712839
lolwut

VRAM OC on the 480 gives a 5-10% improvement by itself
>>
>>55712835

async compute has less to do with it than people realize. pretty much every game that has seen gains from 'async compute' has been AMD sponsored or extremely buggy. games like hitman even reduce image quality on AMD hardware to the point where textures are 1/8th their original resolution.

on the other side of the coin we have RotR which is stupidly NVIDIA favoring. devs should reject 'cooperation' with either of these companies outright.
>>
>>55712835
>The only brand they favour is whoever can do async compute better.

are you fucking retarded in the head? this is some next level bullshit you're spewing. they favor whoever fucking pays their wages in game sponsorships.
>>
>>55712864

OCing memory on a reference 480 brings it back to out of spec power draw.
>>
File: 1433953992232.jpg (26KB, 480x542px) Image search: [Google]
1433953992232.jpg
26KB, 480x542px
>>55712864
>memory overclock
>increasing fps by 10%
>>
>>55711120
>OC3D
Into the trash it goes

Why do people take Tin shill Logan seriously?
>>
>>55712724
actually, even the ones that favor nvidia like tomb raider, amd still receives a higher boost.

>>55711120
see this shill

the only reason why nvidia pulled ahead is because it did provide async support, but didn't use that much of it. it was done in a way not to bog down maxwell and pascal preemption, plus heavy use of the number one thorn in amd's side, tessellation.

nvidia is terrible with async
amd is terrible with tessellation. well, shouldn't say terrible; not as terrible as nvidia is with async. at least both have hardware support for tessellation, while nvidia has none for async. nvidia hardware is just better at tessellation.

tessellation becomes less of an issue for amd if the game utilizes async more heavily, though.
>>
>>55712891
>reading the shill spam tl;dr pasta
really
>>
>>55712893
>actually, even the ones that favor nvidia like tomb raider, amd still receives a higher boost.

doesn't matter how much amd 'gains', they're still 30% behind because of the gimping. the same shit happens to nvidia hardware in AMD sponsored games as well.
>>
>>55712865
this is absolutely false.

maxwell literally has no async support. every single nvidia architecture since kepler lacks a hardware scheduler, which async needs to perform optimally.

pascal only band-aids this issue; with heavy use of both compute + graphics it gets bogged down and brought back to maxwell levels.

time spy shows clearly that nvidia's async is pretty terrible, and that's with time spy being coded in a way that doesn't hinder pascal's preemption: not too many queues, a mild amount of compute work, very little graphics work alongside it. even then, amd's gcn across the board showed higher gains.

maxwell gained zero.
>>
File: 1332832649979.jpg (34KB, 303x404px) Image search: [Google]
1332832649979.jpg
34KB, 303x404px
>OC3D

The amount of shilling in this video

https://www.youtube.com/watch?v=OaOiFINgwyM

>The amount of shilling in the first few posts of this thread

http://forum.overclock3d.net/showthread.php?t=76984


Reminder to always ignore OC3D shill posts.
>>
>>55712918
>mild amount of compute work

so it's like 99% of games then; most games don't even use any compute shaders (and therefore would not even be able to benefit from '''async compute'''). yet in those games we still see huge performance swings towards the brand that sponsored them, a trend that started when AMD gimped NVIDIA hardware in tomb raider 2013.
>>
>>55712942
most aaa games do. in fact, probably all of them.
>>
>>55712839
i can't oc more than 2038mhz core on my gtx 1060, and 450mhz on the memory
>>
>>55713069
so that test is worthless
>>
>>55712865
Hitman is a fucking shitty optimized game. I can barely hold 60 fps at medium settings on my 980. I regret not buying it for xbox.
>>
>>55712942
only dx12 & vulkan utilize async.

what hurt amd in dx11 was heavy use of tessellation. we can talk about driver overhead, but ultimately it was tessellation that hurt them the most. the games where amd lost badly were heavily tessellated games.

one of the reasons why amd hasn't done well compared to nvidia in 3dmark - firestrike is the heavy use of tessellation.

which is pretty ironic when they go and state "well we did it this way because its fair and we are not forced to use async in a heavy parallel way!!!!"

the only reason one would use little async, or very one-sided (mostly compute) async, is either because the game is very heavy on the compute side, or to favor nvidia.

only reason why one would use a lot of async is because you want MORE performance or want to shit all over nvidia.

not disagreeing with your assertions, dx12 until nvidia fixes their shit, will come down to heavy async - amd, little async - nvidia.

what's sad is that nvidia is holding dx12 and vulkan back, since they're terrible at parallelism.
>>
>>55708349
1060 beats 480 ATM in general, but holy shit is that graph kiked.
>>
File: timespynvidiasponsored.png (927KB, 950x566px)
>>55713107
>or very one sided (mostly compute)
this isn't bad. amd still benefits greatly with any async load. benchmark after benchmark has shown this. they always receive higher gains than nvidia. maxwell gets zero, pascal a small boost. but yes, it does "benefit" nvidia because pascal's updated preemption can handle it better. it's not having to switch much, so nvidia won't show regressions. though amd still had the highest gains even when done that way compared to nvidia.

>one of the reasons why amd hasn't done well compared to nvidia in 3dmark - firestrike is the heavy use of tessellation.
one of futuremark's biggest sponsors is galax. they only produce nvidia cards. that should make it clear which side futuremark is on. early time spy screens even showed a sponsored nvidia symbol.
>http://videocardz.com/57941/first-footage-from-3dmark-time-spy-benchmark-for-directx-12-api-emerges

pretty funny actually.
>>
>>55713107
>processor that easily handles thousands of threads at once
>terrible with parallel
>>
>>55711781
You're generating a fuckload of butthurt. Good on ya mate.
>>
>>55711682
6700 is actually still king of the single thread. The other ones only murder it in multithread and why would that affect a benchmark involving a gpu
>>
>>55713157
I hope you are aware this was an event sponsored by nvidia and the bezel of the screen had the event logo and nvidia logo on it, it's not part of the video itself.

Also AMD officially endorsed Timespy.

http://radeon.com/radeon-wins-3dmark-dx12/

I guess AMD is confirmed Nvidia shill?
>>
>>55708994
I'm still using an i7 870.

Literally all I do on my desktop is video games, why should I waste money on such a degenerate hobby? Work on laptop, play on desktop.
>>
File: max.png (106KB, 910x517px)
>>55712918
>every single nvidia architecture since kepler lacks a hardware scheduler

are you blind?
>>
>>55713268
Those two things are not even comparable to each other.

The Gigathread Engine was and still is vastly better than the Warp Scheduler.
>>
>>55713286
lol those are both pictures of maxwell
>>
>>55713286
The gigathread engine is a maxwell feature you turd
>>
File: Wew.png (2MB, 1920x1080px)
Wew
>>
>>55713286
>clueless amdrone
>>
>>55712344
See >>55713313

Enjoy that driver overhead :^). Imagine how much further behind the stock 1060 the 480 would be if it wasn't OC to the max.

>maximum OC 480
>9 frames behind stock 1060
>>
File: patchy4.png (275KB, 600x600px)
>>55713313
>no temps for 480
>>
>>55713355
the issue in that pic isn't even driver overhead. look at the differences in gpu utilization.
>>
>>55712791
That's just the factory OC.

If you're talking about the eteknix OC review, the guy didn't increase VDDC from 1.15V so he got stuck at 1360MHz. This is seen in the gpu-z screenshots. He thought he was increasing voltage, but ended up running without increase. This is because wattman doesn't adjust gpu voltage while vram is set to auto.
>>
>>55713370
There's a problem right now on msi afterburner with the 480 and temperature measurement. It just doesn't show up on the overlay. Many youtubers are having it.
>>
>>55708184
>4th class nvidia card DESTROYING AMD FLAGSHIP

AMANLETD WHEN WILL THEY LEARN
>>
>>55713399
>AMANLETD
AYY LMAO
>>
>>55713376
It's 98-100%. The first percentage is fan speed. The 2nd percentage is utilisation, I believe, except on the 480 where only the gpu utilisation percentage shows anyway.
>>
RX 480 = HD 4870
RX 470 = HD 4850

It's stupid to buy a HD 4870 when the HD 4850 is only clocked slightly lower but much cheaper.
>>
>>55713420
ah, well it's still not driver overhead if all the cards are maxing out.

the 480 is probably just choking more on all the geometry
>>
>Custom RX 480 starting at 299€
>Custom GTX 1060 starting at 279€

Who would pay more for less performance?
AMD dropped the ball on the European midrange market.
>>
>>55713489
dumb nvidiot
480 is cheaper
>>
>>55713489
>can not sli
>not futureproof
>>
>>55713510
>sli
>3fps increase
>200% more crashes
>>
>>55710212

the cheapest non ref 1060 available here in ass end Philippines is $270 while the most expensive non ref is $317.

so about $20 on tax. though no doubt some retailers would charge more depending on the brand.

while the lowest RX 480 available is at $250 while the most expensive $347.
>>
>>55713507
See >>55712739
>>
>>55713510
>thinking the turd 480 is future proofed because of dx12 meme

even if it's more competitive in dx12 the card is still too weak to be able to run newer dx12 games a year or so down the line, the card was made for 1080p overwatch since that's about all it can do, especially if you don't have a 12 core i7
>>
>>55713424
When is the RX 470 supposed to hit the market?
I'm getting pretty tired of the waiting game so I might just bite the bullet with a 4GB RX 480 or 1060
>>
File: nvidia_amd_performance.jpg (41KB, 570x341px)
>>55708184
nvidia doesn't care about DX12 or async or Vulkan. Current nvidia cards may have lead now but AMD's cards will age much better and by the release of the next gen, nvidia's top end will be licking the foot of amd's lowest end
>>
>>55713767
lol!

so not only does the 480 have 2x the theoretical power of the 1060, but atm amd drivers are only able to get half of it out of the gpu?
>>
>>55713622
I'm honestly half tempted to just get a 460 or 470 desu

Either one will probably be better than my gtx 660 by like 50% or more

and it'll tide me over until the zen and 490 launch...

I'm pretty tempted, but I also feel like I ought to wait until late october before settling on a purchase decision.
>>
>>55713788
Courtesy of nVidia GimpWorks drivers and thanks to DX12 and Vulkan API
>>
File: Untitled.png (26KB, 1102x677px)
Why's there so few benchmarks being done on the 1060?
>>
>>55713931
probably because the people who buy them play games or something
>>
>>55713931
Because RX480 is the meme card. It's slightly lower priced so more budget people go for it.
>>
>>55708879
i can't find a clear explanation of driver overhead, would you enlighten me?
>>
>>55713931
It's cheaper, and better value for the money, particularly with long-term taken into account.

1060 might not even reach 50% of the 480's marketshare, which I figure will approach 2% over the next couple months.
>>
>>55713977
It's basically the CPU-side work the driver has to do to validate and submit commands to the GPU. This is greatly reduced in Vulkan and DX12. AMD's gpu driver overhead in OpenGL and DX11 is worse than NVIDIA's.
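To put toy numbers on it (purely illustrative, nothing here is measured), a sketch in python:

```python
# Toy model of driver overhead: CPU time burned just submitting work to
# the GPU each frame. The per-call costs below are invented placeholders.

DX11_US_PER_CALL = 5.0  # "thick" driver validates/translates every call
DX12_US_PER_CALL = 1.0  # "thin" driver; app pre-records command lists

def cpu_submit_time_ms(draw_calls, us_per_call):
    """Total CPU submission time per frame, in milliseconds."""
    return draw_calls * us_per_call / 1000.0

calls = 10_000
dx11_ms = cpu_submit_time_ms(calls, DX11_US_PER_CALL)  # 50.0 ms
dx12_ms = cpu_submit_time_ms(calls, DX12_US_PER_CALL)  # 10.0 ms
```

At 60 fps you only have ~16.7 ms per frame, so in the toy dx11 case the CPU can't even feed the GPU fast enough. That's what CPU-bound looks like: the gpu sits at low utilization waiting on the driver.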
>>
File: 1070 480.jpg (223KB, 1442x874px)
>>55713931
1060 came out 4 days ago. RX 480 came out 3½ weeks ago. Check 1070 and what do we see? Overwhelming dominance. Even 1080 is more common than RX 480.
>>
>>55708331
>even if they're competitive, they'll still sell many millions less than nvidia.
This is not AMD's fault. Blame the kids who stubbornly throw money at Nvidia.
>>
>>55708349
>4.5 Ghz
>2.7 Ghz
>3.2 Ghz

Nice. Now why don't you prop them up to 3.8 Ghz on both and see where they stand.
>>
>>55709797
#1, I wonder why they excluded the 4GB RX480.

Coincidence or planned?
>>
How to make proper benchmark?

Get top 15 popular games on Steam. Bench them.
>>
>>55714382
They haven't reviewed the card.
>>
>>55714397
So you want games like DOTA 2, CS:GO, Rocket League, TF2, Garry's Mod and Football Manager 2016 to fill the benchmarks?
>>
File: asd.jpg (124KB, 500x1170px)
>>55714425
They certainly did.

I suspect the numbers would be at around 130%-135% if RX480 4GB was posted.

This would put 480 in a comfortable first place.
>>
>>55714459
No, they haven't. Do you see any performance results for the 4 GB model in that review?

He just calculated the price/perf for a $200 480 assuming that the performance of the card would be the same as the 8 GB model's.
>>
>>55714450
This would give a more accurate picture for the majority of people. Benchmarking games that still have a huge player base and ownership rate would help the largest number of gamers.

However, the problem I see is that it wouldn't show the true power of the cards.
>>
>>55714487
Most of the people who play those games don't give a shit about GPU benchmarks. The games run on toasters, and they play the games on their toasters. They are not the target audience of the reviews.
>>
File: perfdollar_1920_1080.png (43KB, 500x1210px)
>>55714459
Oh, and even though GTX 1060 can be easily had for $250 (as long as you're willing to wait a day or two for them to get stocked), TPU uses a $300 price tag for it in their other reviews.

Stop with your fucking victim complex. The whole world isn't in a conspiracy against you. You're like a goddamn feminist with their patriarchy.
>>
>>55714540
You're going on a nonsense rant. Seems like you are suffering from some delusion.

Other GTX1060 sell for $300 msrp. It would be accurate to judge the price/performance within that range.

Seems like your hatred for AMD has clouded your mind.
>>
>>55714450
Csgo de_nuke is a good benchmark, I play csgo at 300 fps min and that map does 120 so it would be good if there are benchmarks for it
>>
planetside 2 should be a benchmark desu

it's also a gameworks title :^)
>>
>>55709222
>Polaris 10, 5.7bln transistors and 232mm2
>GP106, 4.4bln transistors and 200mm2
Now which one is cheaper to make?
>>
>>55714996
GP106.

Smaller die means more chips per wafer, hence more volume.

Polaris cost more to make than GP106 thats for sure.

However marketing value is reversed. If Nvidia were to price it according to what their chip cost, GP106 would cost $199, a real value.
>>
>>55713767
>will age much better
Sure if you upgrade to the new i7 every year. 300 dollars to Shintel every year isn't worth it for me personally.
>>
>>55715056
Exactly, so it's reversed, AMD needs to sell their Polaris chips more than Nvidia needs to sell their GP106 to earn the same profit.
>>
>>55708184
>Slightly better for anywhere from 15 to 35% more money
Yeah, Nvidia sure is killing their better bargain competition
>>
Which GTX 1060 should I order?:

1) Zotac GTX 1060 Amp Edition 6GB - £239.99
2) EVGA Superclocked GTX 1060 6GB - £259.99
2.5) EVGA GTX 1060 6GB - £241
3) Palit GTX 1060 Dual 6GB - £245.99

I don't want to spend more than £260 so the better non-ref models aren't being considered.
>>
>>55715397
I'd get the Palit because it looks like has the best cooler.
>>
>>55712534
I also have a RX 480 and I disagree. It's a great card. The 1060 is objectively better, but that doesn't make the 480 a "fucking turd". It provides exactly as much performance as AMD promised (not as much as it was hyped to provide, but that's a topic for another time) and is decently priced. It's not loud, even with the stock cooler. It's not just my opinion, I asked 4 different people and they all said that it's not loud. It's a card worth considering, especially when you are building a new PC with a modern CPU.
>>
>>55715397

* Also the Zotac is £259.99, oops.

>>55715414

Ah ok thanks - I just wasn't sure about Palit as a brand, I will obviously look up reviews etc. as more surface but I always like to get opinions too.
>>
>>55714610
Are you fucking retarded? You managed to completely miss the point.

I don't give a shit what they use as a price tag. You were the one complaining about 4 GB 480 not showing up in the chart, implying that it's because of some nvidia bias they have, and then managed to be completely wrong about whether they reviewed the card.

My point was that if they had an nvidia bias, why would they use a 300 dollar price tag for 1060, when using 250 would be completely acceptable, and no one but a complete amdrone would have a problem with it.

And thanks for proving my point about having a victim complex, since you had to frame the situation as me hating poor AMD.
>>
File: 1469279416842-b.jpg (34KB, 400x381px)
>>55708184
No sli nuff said.
>>
>>55716016
Why would you buy 2 1060 for the price of 1 1080?
>>
>>55715974
They used the $249 price tag though. So I don't get what you're arguing about. Yes they've reviewed $300 1060 variants. But they reviewed the $249 version as well. My question was why didn't they include the $199 RX480 in the $249 version review? They had the data (you said they didn't, wrong again). So it seems like you're living in some weird fantasy realm driven by delusions and weird conspiracies.
>>
>>55716132
MUH SLI
>>
>>55716168
Holy fucking shit. You can't be this stupid.

I posted a review that came after the 1060 review, which was the 480 strix. Take a look at the position of the 1060 there. Hint: it's not at the top. Why isn't it at the top? Because they use a price tag of 300 for it. The only place where the 250 1060 is listed is the 1060 review, similarly to how the 200 480 can only be found in the original 480 review. Got it?

You claim that they have performance data for the 4 GB 480. Go to their game benchmarks for me, the ones showing actual fps. Can you find the results of the 4 GB 480 there? How about the relative performance chart? Can you find it there?

In fact, you can go to the conclusions page where he SPECULATES about the performance of the 4 GB variant. Why would he speculate if he had tested it?

I already explained why the price performance chart has a 200 480 in it. You can go read my previous post for the explanation, though I'm not sure how much good that will do considering you seem to not be able to read in the first place.
>>
>>55709451
Something is off here.

I have a 290x so I'm looking at that. And every test the 290x gets higher fps at 4k than it does at 1440? Wut?
>>
>>55716814
The only game where that happens in those links is asscreed. In FO4 it goes from 65.5 to 39.3, in Crysis 3 from 30.7 to 21.4, etc. Anno is kind of a close call.

May have something to do with different levels of AA.
>>
>>55716863
Noticed that after I read the entire article. But based off that, I'm happy with my 290x because I play at 1440. Think I'll hold out another year or two.
>>
>>55714339
No, it's amds fault.
>>
>>55713268
see
>>55711093
>>55711103

and see:
>http://www.guru3d.com/news-story/nvidia-will-fully-implement-async-compute-via-driver-support.html
>http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/2170
>The Asynchronous Warp Schedulers are in the hardware. Each SMM (which is a shader engine in GCN terms) holds four AWSs. Unlike GCN, the scheduling aspect is handled in software for Maxwell 2. In the driver there's a Grid Management Queue which holds pending tasks and assigns the pending tasks to another piece of software which is the work distributor. The work distributor then assigns the tasks to available Asynchronous Warp Schedulers. It's quite a few different "parts" working together. A software and a hardware component if you will.

>With GCN the developer sends work to a particular queue (Graphic/Compute/Copy) and the driver just sends it to the Asynchronous Compute Engine (for Async compute) or Graphic Command Processor (Graphic tasks but can also handle compute), DMA Engines (Copy). The queues, for pending Async work, are held within the ACEs (8 deep each)... and ACEs handle assigning Async tasks to available compute units.

.Simplified...

>Maxwell 2: Queues in Software, work distributor in software (context switching), Asynchronous Warps in hardware, DMA Engines in hardware, CUDA cores in hardware.
GCN: Queues/Work distributor/Asynchronous Compute engines (ACEs/Graphic Command Processor) in hardware, Copy (DMA Engines) in hardware, CUs in hardware.

and funny thing, to this day, nvidia has never enabled async support on maxwell.
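the serial vs parallel split above can be sketched with a toy timeline (the tick counts are made up, only the relationship between them matters):

```python
# Toy timeline model: graphics and compute work in arbitrary "ticks".

def serial_ticks(graphics, compute, switch_cost):
    """Maxwell-style software path: drain graphics, pay a context
    switch, then run compute."""
    return graphics + switch_cost + compute

def concurrent_ticks(graphics, compute):
    """GCN-style: ACEs issue compute alongside graphics, so the longer
    job sets the frame time and the shorter one rides along for free."""
    return max(graphics, compute)

g, c = 10, 6
serial = serial_ticks(g, c, switch_cost=2)   # 18 ticks
parallel = concurrent_ticks(g, c)            # 10 ticks
```

same work either way; the hardware queues just hide the compute time inside the graphics time instead of paying for it (plus a switch) on top.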
>>
File: 2573606.png (100KB, 2390x644px)
>>55717856
>https://forum.beyond3d.com/threads/dx12-performance-discussion-and-analysis-thread.57188/page-28#post-1870218

>As soon as dependencies become involved, the entire scheduling is performed on the CPU side, as opposed to offloading at least parts of it to the GPU.
>If a task is flagged as async, it is never batched with other tasks - so the corresponding queue underruns as soon as the assigned task finished.
>If a queue is flagged as async AND serial, apart from the lack of batching, all other 31 queues are not filled in either, so the GPU runs out of jobs after executing just a single task each.
>The graphic part of the benchmark in this thread appears to keep the GPU completely busy on Nvidia hardware, so none of the compute tasks is actually running "parallel".

>I wouldn't expect the Nvidia cards to perform THAT bad in the future, given that there are still possible gains to be made in the driver. I wouldn't exactly overestimate them either, though. AMD has just a far more scalable hardware design in this domain, and the necessity of switching between compute and graphic context in combination with the starvation issue will continue to haunt Nvidia as that isn't a software but a hardware design fault.
>>
>>55711150
i have one question, say like you were on a midrange i5 and a 480, would the 1060 still surpass it even in dx 12?

For example I have a 4690k
>>
>>55718008
The 480 gains massively from hyper threading. I5's don't have hyper threading so I'm not sure what performance gains you'd get. It won't be anything as big as in that pic with the i7 though.
>>
>>55718007
Pascal does not support Asynchronous compute + graphics but contrary to Maxwell... Pascal added improved preemption as well as a more refined load balancing mechanism. So what Pascal can do is execute a Compute and Graphics task in serial (Software based scheduling) and assign these tasks to two separate GPCs. So while one GPC is handling the Compute task... another handles the Graphics task. While one GPC is filled with Compute work... it cannot do Graphics work and vice versa. The kicker is that the improved preemption allows Pascal to flush a GPC of work quickly (something Maxwell did not support) allowing a GPC to move from a Compute task to a Graphics task more quickly than Maxwell (faster context switching). This takes CPU time (the scheduler is software based and thus takes CPU time) and you are also limited by the number of GPCs on the GPU. If the workload becomes too heavy then you run out of GPCs and a performance hit ensues.
>>
>>55718008
your 4690k will be fine. it's why i even recommend an i3 6100.

as long as it's not a toast tier processor from 2009 you will be fine.

notice how even the 2010 processor they tested, the i5 750, scores higher than the x4 955? any processor from ivy and up will have zero issues.
>>
>>55718057
Yeah, so my point is I wonder would the 1060 still beat it?

Most likely yes, but I'd like to find some proof before buying either, or going what is for me full batshit and getting a 1070.
>>
>>55708184
what i dont get is why people buy a new gpu to play old games.

most, if not all new games will use dx12 or vulkan. buy a card that works best with dx12 and vulkan.
>>
File: gpu.png (161KB, 837x450px)
Finally pulled the trigger and got the Palit GeForce GTX1060 Dual for 279€.
How bad did I fuck up? Any reviews out for it? I can't find anything on this card.
>>
File: 2828686.jpg (16KB, 635x357px)
>>55718089
The first feature nVIDIA introduced is improved Dynamic Load Balancing. Basically... the entire GPU resources can be dynamically assigned based on priority level access. So an Async Compute + Graphics task may be granted a higher priority access to the available GPU resources. Say the Graphics task is done processing... well a new task can almost immediately be assigned to the freed up GPU resources. So you have less wasted GPU idle time than on Maxwell. Using Dynamic load balancing and improved pre-emption you can improve upon the execution and processing of Asynchronous Compute + Graphics tasks when compared to Maxwell. That being said... this is not the same as Asynchronous Shading (AMD term) or the Microsoft term "Asynchronous Compute + Graphics". Why? Pascal can’t execute both the Compute and Graphics tasks in parallel without having to rely on serial execution and leveraging Pascal’s new pre-emption capabilities. So in essence... this is not the same thing AMD’s GCN does. The GCN architecture has Asynchronous Compute Engines (ACE’s for short) which allow for the execution of multiple kernels concurrently and in parallel without requiring pre-emption.
>>
>>55718008

Your talking 3-5 FPS difference, it's so minimal it barely matters. CPUs are so under-utilized.
>>
File: 2828685.jpg (29KB, 635x357px)
>>55718182
What is pre-emption? It basically means ending a task which is currently executing in order to execute another task at a higher priority level. Doing so requires a full flush of the currently occupied GPC within the Pascal GPU. This flush occurs very quickly with Pascal (contrary to Maxwell). So a GPC can be emptied quickly and begin processing a higher priority workload (Graphics or Compute task). An adjacent GPC can also do the same and process the task specified by the Game code to be processed in parallel (Graphics or Compute task). So you have TWO GPCs being fully occupied just to execute a single Asynchronous Compute + Graphics request. There are not many GPCs so I think you can guess what happens when the Asynchronous Compute + Graphics workload becomes elevated. A Delay or latency is introduced. We see this when running AotS under the crazy preset on Pascal. Anything above 1080p and you lose performance with Async Compute turned on.

Both of these features together allow for Pascal to process very light Asynchronous Compute + Graphics workloads without having actual Asynchronous Compute + Graphics hardware on hand.

So no... Pascal does not support Asynchronous Compute + Graphics. Pascal has a hacked method which is meant to buy nVIDIA time until Volta comes out.
>>
>>55709391
Thanks for posting actually matter of factly. Helpful, a rare thing here on /g/
>>
>>55718117
if you see any benchmark with the 4690k specifically I'd be grateful, trying to look for one.

>>55718185
Ah, so if I had an i7 then the it becomes a big percentage, got it.

Guess I'll just go with an EVGA 1060 or 1070, pay a bit more for the extra warranty, it breaks in 3 years, step it up.
>>
>>55718126
Depends really. The 1060 is around 10-12% faster by default and gains like 5% of performance under new api whereas the 480 gains like 10-15% under new api according to the hitman dev and the ashes of the singularity dev. The 1060 can possibly be like 1-3 fps faster vs a 480 if you have an i7. The 480 will gain much less with an i5 so I still think the 1060 will be faster.
>>
>>55718242
Sorry I meant

If you don't* have an i7.

With I7 the 480 clearly wins.
>>
>>55708184
i wouldnt say that the 480 is kill, i mean. the metric right there says that it's 12 percent faster on average, and the 4 gigabyte model of the 480 is cheap for being able to touch the 1060.

what im saying is that it's like comparing the performance gains from the i5 4690k to the 6500k, it's there, but the gain is minimal.
>>
>>55718221
>Ah, so if I had an i7 then the it becomes a big percentage, got it.
no

for fuck sakes stop being tards

a haswell i5 is more than plenty.

even a ivy 3770k is more than plenty.

the whole point is don't run a 480 on a turd processor from 2009 / 2010.

and modern intel processor from ivy is more than enough.

you have a fucking devils canyon i5. you're fucking fine.
>>
>>55718336
TL,DR

Whats the big deal? they offer similar performance
>>
>>55718388
(not the anon you were arguing with)

i have a i7 2600, is that good?
>>
>>55718388
this

anyone who says you need an i7 to run a 480 is an idiot. all you need is a modern processor from 2012 and above.

haswell is flat out more than enough.
>>
>>55718419
If you can overclock that, then yea, otherwise, meh.
>>
>>55718388
Ok, then it all circles back to me looking for some benchmarks to see how the 1060 vs 480 pans out on that processor.
>>
>>55718441
I didn't mean run one, I meant benefit from a 480 over 1060, I do apologise.
>>
>>55718443
it doesnt have the k on it so no,

least i can OC my 370 :D
>>
>>55718419
yes, more so if its the k version since you can overclock.

sandy was 25% faster than first generation i series processors. you can also overclock sandy to approach near-stock haswell levels of performance.
>>
>>55718474
well, idk about overclocking, i have this all in a optiplex 990 with cancer cable management and it has the default cooler on it
>>
Anyone have any input on the small non-SC EVGA GTX 1060?

Looking for a cheap model and this seems to fit the bill.
>>
>>55718388
See >>55713313

>i5 3570k
>massive bottleneck in one of the best pc ports in recent history
>10 fps behind stock 1060
>>
>>55718469
no, you don't.

that stupid benchmark went from a first generation i series processor from 2010 all the way to a modern late 2015 / 2016 processor. of course you will see a big boost.

but comparing haswell to skylake? skylake and haswell's single threaded performance on a best day is no more than 10%. average is a little less.

point being, you don't need skylake nor do you need an i7 to take full advantage of the 480 over the 1060. all you need is a processor that isn't a 2009 potato.
>>
>>55718519
that's dx11 you tard

and if you look at that graph you will notice THERE ISN'T ANY CPU BOTTLENECKING COMING INTO PLAY on the 480. the 480 has 100% usage and mild cpu usage. that isn't a bottleneck going on.

the 1060 does have stronger brute force over the 480. the 480 on most dx11 benchmarks aligns itself with 390 / 970 level of performance. the 1060 aligns itself more between a 970 / 390 and a 980 / 390x.

in dx12 and vulkan is where we see the 1060 drop in performance and the 480 starts to take a lead against it. all for the reasons from here:
>>55718182
>>55718199
>>55718089
>>55718007
>>55717856
>>55713107
>>
File: Screenshot_2016-05-12-18-03-32.png (2MB, 1920x1080px)
>>55718469
Don't listen to those people. Amd doesn't work anywhere near as well as nvidia does in cpu intensive games if you're not using a top end i7.

>inb4 dx12/vulkan

Face the facts here. If you're buying a gpu now you're probably going to be playing all of the goty games like witcher 3, fallout 4, tomb raider all of which are dx11. If you're going to claim you are only going to play dx12 games then wait another year till you buy a gpu. There are only a handful of dx12 games out right now and you'll save money if you buy a cut price 480 at a time when there will be much more dx12 games.
>>
>>55718588
So in the current mainstream api the 480 loses by 20 fps against a 1060. Gotcha. 1060 it is.
>>
>>55718605
you posted rise of the tomb raider which uses heavy tessellation which chokes the pre-polaris series pretty bad. switch it over to the dx12 version and it changes. amd gets a nice boost since async offloads a lot of the negative impact of the tessellation.

polaris still takes a hit with tessellation but not to the extent the pre-polaris series did.
>>
Enjoy your 'continued' DPC latency and flickering issues Nvidiots. Latest drivers don't fix shit.

>But AMD has shitty drivers!

Nvidiots BTFO
>>
>>55718656
on average the 1060 is around 12% faster than a 480 in dx11 titles. which is expected since the 1060 does show to be between a 970/390 & 980/390x while the 480 is pretty much 970/390 performance.

dx12, and future games, are different. the 1060 takes a pretty big hit and drops down while the 480 rises up. in many cases the 480 nears 980/390x performance with the 1060 dropping down to around 970/390 levels.

the only odd ball is tomb raider but it is a glorified nvidia title that doesn't use much async and a lot of tessellation.

actually, on average, the 480 doesn't gain much in tomb raider at all compared to other amd gpus like the furyx.
>>
>>55718838
yeah something has to be going on with the 480 in both tomb raider and ashes.

i wrote about ashes here: >>55711287

something has to be holding it back and its odd.
>>
>>55718838
> the 1060 takes a pretty big hit and drops down

[Citation from reliable source needed]
>>
>>55708349
6700k isn't that expensive m8, and it's not top-end
>>
>>55718905
It has the best single core performance right now.
>>
>>55718873
the problem might lie with it only having 36 cu's and 4 aces while something like the fury has 56 cu's and 8 aces.

granted polaris 10 does provide substantial architecture improvements so it should make up for it.
>>
>>55718885
He doesn't have one. Pascal doesn't regress like maxwell did in some games. It actually gains performance.
>>
So, if I'm understanding it, amd gpus get better with time but you won't see any of those benefits if you don't have the latest cpu?
>>
>>55718885
see this entire thread you shill

>>55718199
>>55718182
>>55718089
>>55718007
>>55717856
>>55713157
>>55713107
>>55711287
>>55711172
>>55711150
>>55711136
>>55711120
>>55711110
>>55711103
>>55711093
>>55711086

>http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/73040-nvidia-gtx-1060-6gb-review-16.html
>http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/73040-nvidia-gtx-1060-6gb-review-17.html
>http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1060-pascal,4679-3.html
>http://www.hardocp.com/article/2016/07/19/nvidia_geforce_gtx_1060_founders_edition_review/4
>http://www.hardocp.com/article/2016/07/19/nvidia_geforce_gtx_1060_founders_edition_review/5
>http://www.guru3d.com/articles_pages/geforce_gtx_1060_review,15.html
>>
>>55719028
fucking no, ignore the better cpu bullshit unless you're running a potato from 2009.
>>
>>55719107
Are you stupid? Where does it show the 1060 getting lower performance in dx12 than in dx11?
>>
>>55718991
it actually does

>>55718182
>>55718199

>There are not many GPCs so I think you can guess what happens when the Asynchronous Compute + Graphics workload becomes elevated. A Delay or latency is introduced. We see this when running AotS under the crazy preset on Pascal. Anything above 1080p and you lose performance with Async Compute turned on.

>We see this when running AotS under the crazy preset on Pascal. Anything above 1080p and you lose performance with Async Compute turned on.
>We see this when running AotS under the crazy preset on Pascal.

>http://www.hardocp.com/article/2016/05/17/nvidia_geforce_gtx_1080_founders_edition_review/6
>The very first thing we notice enabling DX12 is that both NVIDIA GPUs lose performance in DX12 in this benchmark.
>However, while there is a loss in performance you will note the new GeForce GTX 1080 Founders Edition doesn't lose as much as the Maxwell GPU GeForce GTX 980 Ti in DX12. The GeForce GTX 980 Ti loses more performance while the GTX 1080 Founders Edition is only a minor loss in performance.
>>
>>55719159
no only you are

lets do a little thinking.

the 1060 enjoys a nice lead over amd in most dx11 titles.

switch over to dx12 & vulkan, that lead not only drops, but regresses. pascal isn't strong in dx12 & vulkan like it is in dx11.

now look into the way pascal handles async we find out why:
>>55718089
>>55718182
>>55718199

games where there is light async usage, pascal will see a small boost
games where there is medium async usage, pascal will see neither a boost nor a regression
games where there is heavy async usage, pascal will see a regression
games where it is more single sided, more compute than graphics, not so much compute + graphics, pascal will see a decent boost
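that rule of thumb, written down as a piecewise sketch (the thresholds and percentages are placeholders, not measurements):

```python
def pascal_async_delta(async_load):
    """Rough % perf change vs dx11 for pascal as async load grows.
    async_load is a fraction in [0, 1]; all numbers are illustrative."""
    if async_load < 0.2:    # light: preemption keeps up, small boost
        return 3.0
    if async_load < 0.5:    # medium: gains and switch costs wash out
        return 0.0
    return -5.0             # heavy: GPCs saturate, regression
```

swap in real measured numbers per game and you'd basically have the 1060-vs-480 dx12 story in one function.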
>>
>this one lowlife who has spent like a week writing up his mini essays everyday in gpu threads to shill for amd
>tfw 90% of posts in this thread are that one guy
>tfw he'd rather shill for amd than attempt to get pussy

amdrones everyone.
>>
>>55719228
>l-let's do a l-little thinking

So you have no citation then. Ok.
>>
https://youtu.be/wton4UVlzPo

480 getting utterly destroyed. I'd gladly pay that extra £20 for the Zotac amp over any reference 480.
>>
>>55719258
Did you not look at any of the links I posted nor read anything I wrote that shows it?

http://www.guru3d.com/articles_pages/total_war_warhammer_directx_12_pc_graphics_performance_benchmark_review,6.html

Look at Warhammer's dx12 results. The fury fucking x is faster than the 1070. The fury x is second only to the 1080.

Even the regular 390 is faster than the 980.

Go back to dx11 and nvidia does drastically better than amd. Under dx12 amd not only gained, nvidia actually lost performance compared to its dx11 numbers.

Why? Heavy async
>>
>>55719547
You are seriously retarded in the head. Amd gaining performance =/= nvidia losing performance. The nvidia cards barely gain anything when shifting to the new API, whereas amd gains more because of gcn paired with the removal of driver overhead under the new API.

Off yourself.
>>
>>55709059
no, but they will have an i3, which for gayming purposes will perform close to the i7 rather than like an ancient cpu