[Boards: 3 / a / aco / adv / an / asp / b / bant / biz / c / can / cgl / ck / cm / co / cock / d / diy / e / fa / fap / fit / fitlit / g / gd / gif / h / hc / his / hm / hr / i / ic / int / jp / k / lgbt / lit / m / mlp / mlpol / mo / mtv / mu / n / news / o / out / outsoc / p / po / pol / qa / qst / r / r9k / s / s4s / sci / soc / sp / spa / t / tg / toy / trash / trv / tv / u / v / vg / vint / vip / vp / vr / w / wg / wsg / wsr / x / y ] [Search | Free Show | Home]


This is a blue board which means that it's for everybody (Safe For Work content only). If you see any adult content, please report it.

Thread replies: 63
Thread images: 16

File: nvidia-titan-x-pascal-key-image.jpg (867KB, 2560x1440px)
http://www.geforce.com/whats-new/articles/nvidia-titan-x-pascal-available-august-2nd

http://www.geforce.com/hardware/10series/titan-x-pascal
>>
>>55699189
AMD is finished with this. In every price range Nvidia is better than AMD
>>
august second? the non-reference 1080s are still unavailable everywhere
>>
>>55699203
I don't see how the Titan changes that.
>>
>>55699786
AMD's $1200 cards no longer stand a chance now!
>>
I have an i5 4670 should I get the 480 RX or the GTX 1060?
>>
>>55699822
I think this video clears up some things:
https://www.youtube.com/watch?v=V54W4p1mCu4
(outside of shill)
both cards are worth the money. i think that under Vulkan & DX12 the RX480 is more promising. the 1060 is also a very good card, a bit more expensive than the RX480, but more power efficient, and it has a bit more power than the RX480 in older games, though not under Vulkan / DX12.
>>
>be Canadian
>cheapest Titan X is $2500
What did they mean by this?
>>
>>55700257
>Adored AYYMD asskisser shill
>>
>>55699203

I don't know man, both brands cover different price ranges atm. They can very well coexist.
>>
the reason nvidia had stronger dx11 drivers was that they were multi-threaded, which helped lower driver overhead since the work could be spread out across multiple threads.

but another reason was because of their use of a software scheduler.

one of the reasons why fermi ran so hot was because it utilized a hardware scheduler, just like all amd gcn based cards do. hardware scheduling draws a lot of power and more power means more heat. why did they use a hardware scheduler? a hardware scheduler will always be faster than a software one. less overhead, and the gpu can do it much faster than software.

the problem with a hardware scheduler? once built, you cannot modify it. you have to build a whole new card if you update the hardware scheduler.

but nvidia, wanting to move on from their housefire fermis, decided to remove hardware based scheduling with kepler and beyond. this is the main reason why kepler used far less power and ran cooler than fermi. nvidia realized that with dx11 you didn't need a complex hardware scheduler. most of the scheduler went underutilized and was overkill. with dx11's multi-threading capabilities, and by making their drivers multi-threaded, they alleviated a lot of the driver overhead one would endure with a software scheduler. in turn this gave them the opportunity to have more control over scheduling, able to fine tune the drivers for individual games. well, they had to. this put a lot of work on nvidia's driver team, but it helped them max out every ounce of juice they could get from their cards while lowering power and reducing heat.

maxwell continued this by removing more hardware based scheduling.

the problem? dx12 and vulkan need a hardware scheduler to be taken full advantage of. you need it for the complex computations of async and to manage compute + graphic operations at the same time. they're complex, and you need the performance.
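the static vs hardware scheduling tradeoff can be sketched with a toy model. this isn't any real gpu — the instruction mix, the latencies, and the one-issue-per-cycle rule are all invented for illustration — but it shows why a scheduler that can issue any ready instruction wastes fewer cycles than one locked to program order:

```python
# toy model of static (software) vs dynamic (hardware) instruction
# scheduling. instructions are (name, deps, latency_in_cycles).
# everything here is made up for illustration, not real gpu behavior.

def run(instrs, dynamic):
    done = {}                 # name -> cycle the result becomes ready
    pending = list(instrs)
    cycle = 0
    while pending:
        def ready(i):
            return all(d in done and done[d] <= cycle for d in i[1])
        if dynamic:
            # hardware scheduler: issue any instruction whose deps are met
            pick = next((i for i in pending if ready(i)), None)
        else:
            # static scheduler: may only issue the next one in program order
            pick = pending[0] if ready(pending[0]) else None
        if pick is None:
            cycle += 1        # nothing issuable this cycle: stall
            continue
        name, deps, lat = pick
        done[name] = cycle + lat
        pending.remove(pick)
        cycle += 1            # one issue slot per cycle
    return cycle

prog = [
    ("load_a", [], 4),        # long-latency memory load
    ("add1", ["load_a"], 1),  # stuck behind the load
    ("mul", [], 1),           # independent work
    ("add2", ["mul"], 1),
]
```

the static run sits stalled behind the load even though independent work is right there in the queue; the dynamic scheduler just runs it in the meantime.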
>>
>>55700537
and this is why nvidia cards cannot do async properly. not only do they not have the hardware needed to run compute + graphics at the same exact time, but they lack the complex, high performance hardware scheduler to run them. their hardware can only do compute or graphics, one at a time. with pascal nvidia made some tweaks to help speed up the switching between compute and graphics, but it still isn't optimal. it's a bandaid. pascal still comes to a crawl if it receives too many compute + graphics operations. it cannot switch fast enough.

what's funny is nvidia knew what they were doing. they just didn't think compute was ever going to be useful in graphics and games.

here's a nice article from kepler's launch done by anandtech:
>http://www.anandtech.com/show/5699/nvidia-geforce-gtx-680-review/3

>GF114, owing to its heritage as a compute GPU, had a rather complex scheduler. Fermi GPUs not only did basic scheduling in hardware such as register scoreboarding (keeping track of warps waiting on memory accesses and other long latency operations) and choosing the next warp from the pool to execute, but Fermi was also responsible for scheduling instructions within the warps themselves. While hardware scheduling of this nature is not difficult, it is relatively expensive on both a power and area efficiency basis as it requires implementing a complex hardware block to do dependency checking and prevent other types of data hazards. And since GK104 was to have 32 of these complex hardware schedulers, the scheduling system was reevaluated based on area and power efficiency, and eventually stripped down.
>>
>>55700550
>The end result is an interesting one, if only because by conventional standards it’s going in reverse. With GK104 NVIDIA is going back to static scheduling. Traditionally, processors have started with static scheduling and then moved to hardware scheduling as both software and hardware complexity has increased. Hardware instruction scheduling allows the processor to schedule instructions in the most efficient manner in real time as conditions permit, as opposed to strictly following the order of the code itself regardless of the code’s efficiency. This in turn improves the performance of the processor.

>Ultimately it remains to be seen just what the impact of this move will be. Hardware scheduling makes all the sense in the world for complex compute applications, which is a big reason why Fermi had hardware scheduling in the first place, and for that matter why AMD moved to hardware scheduling with GCN. At the same time however when it comes to graphics workloads even complex shader programs are simple relative to complex compute applications, so it’s not at all clear that this will have a significant impact on graphics performance, and indeed if it did have a significant impact on graphics performance we can’t imagine NVIDIA would go this way.

>What is clear at this time though is that NVIDIA is pitching GTX 680 specifically for consumer graphics while downplaying compute, which says a lot right there. Given their call for efficiency and how some of Fermi’s compute capabilities were already stripped for GF114, this does read like an attempt to further strip compute capabilities from their consumer GPUs in order to boost efficiency. Amusingly, whereas AMD seems to have moved closer to Fermi with GCN by adding compute performance, NVIDIA seems to have moved closer to Cayman with Kepler by taking it away.
>>
>>55700563
important part here:
>NVIDIA is pitching GTX 680 specifically for consumer graphics while downplaying compute
>downplaying compute

it's also why in nvidia's "dx12 dos and don'ts" they state not to run too many compute + graphics operations at the same time.
>https://developer.nvidia.com/dx12-dos-and-donts

their hardware cannot handle it, while amd's gcn not only can, but shines brighter when it's under heavy async load.

here's some more interesting reads on nvidia's async debacle:
>http://www.overclock.net/t/1606224/various-futuremarks-time-spy-directx-12-benchmark-compromised-less-compute-parallelism-than-doom-aots-also

yes, it's mostly focused on the time spy issue regarding their usage of async, but it does delve into nvidia's architecture limitations.

also, the use of the hardware scheduler is why amd gpus used more power and ran hotter than nvidia's since the kepler and gcn 1 days. if nvidia slapped a hardware scheduler on pascal, their gpus would not just use as much power, but most likely more than amd's, since nvidia is on 16nm instead of 14nm like amd.
>>
File: RotTR-DX11-vs-DX12-640x640.jpg (75KB, 640x640px)
>In the previous pages, we compared the performance of Rise of the Tomb Raider's original Direct 12 patch and the performance of the game's newest DirectX 12 implementations, seeing a higher minimum framerate performance in the majority of cases and improved performance in all cases for AMD's R9 Fury X GPU.

>Now we will compare the DirectX 12 and DirectX 11 versions of the game with this new patch, as while the DirectX 12 version has improved we need to know if this new version actually provides users better performance than what we can achieve with the older DirectX 11 API.

>With AMD's R9 Fury X we see a performance improvement when using DirectX 12 in all cases, whereas Nvidia's GTX 980Ti actually sees a performance decrease in all cases except 1080p performance, where we expect that the CPU performance benefits of DirectX 12 may have had more of a benefit than any potential gains in GPU performance.

>All in all it seems that those with AMD GCN 1.1 or newer GPUs will be better off playing Rise of the Tomb Raider in DirectX 12 whereas Nvidia users are better off using DirectX 11.

>http://www.overclock3d.net/reviews/gpu_displays/rise_of_the_tomb_raider_directx_12_performance_update/5

what's important to note is that rise of the tomb raider is an nvidia sponsored, nvidia gameworks title. so yes, the 980 ti did come out ahead at 1080p, and one can argue hurr dx12 don't matter, but what matters is how nvidia didn't benefit from dx12 at all, and at higher resolutions suffered regressions.
>>
>>55699189
Wasn't there already a Titan X? When will they stop using this meme naming scheme and just stick to the traditional numbered scheme? Was there literally any reason to stop using the 690 tier?
>>
>Nvidia DX11 drivers are good with little cpu overhead
>they don't gain much from the move to dx12
>AMD DX11 drivers are shit with lots of cpu overhead
>they gain a lot from DX12.

Literally who cares
>>
>>55699822
Poorfags should buy the absolute cheapest option
>>
>>55700700

>Titan

>Titan Black

>Titan Z

>Titan X

>Titan X

nvidia why

they could've easily done something unimaginative like Titan X2 or P
>>
File: timespy-3.png (270KB, 602x869px)
>http://www.pcper.com/reviews/Graphics-Cards/3DMark-Time-Spy-Looking-DX12-Asynchronous-Compute-Performance

when we take a look at time spy we can see some pretty interesting results.

when we look at the total % increase with async on & off, one thing is made clear: amd wins hands down. even the humble $200 480 nets a higher increase in performance with async on than the 1080. maxwell flat out did not receive a boost at all.

there's a reason for that. according to pcper:
>Now, let’s talk about the bad news: Maxwell. Performance on 3DMark Time Spy with the GTX 980 and GTX 970 are basically unchanged with asynchronous compute enabled or disabled, telling us that the technology isn’t being integrated. In my discussion with NVIDIA about this topic, I was told that async compute support isn’t enabled at the driver level for Maxwell hardware, and that it would require both the driver and the game engine to be coded for that capability specifically.

which shouldn't come as a surprise; maxwell can't truly do async at all. it's terrible at switching back and forth between compute and graphics, as noted above. pascal does bring some improvements in this regard, but there is more to the story.

the problem with time spy is that it doesn't fully take advantage of async. they designed async in that benchmark the way nvidia stated in their "dx12 dos and don'ts."
>Try to aim at a reasonable number of command lists in the range of 15-30 or below. Try to bundle those CLs into 5-10 ExecuteCommandLists() calls per frame.

as noted above with the overclock.net link, time spy doesn't fully utilize async. it doesn't use a lot of it. it also doesn't use a lot of parallelism, meaning it's not throwing out a lot of compute & graphics operations at the same time. it feeds mostly compute, sending a few compute operations at once, then switches to a little graphics, then back to compute. it does it in a way that doesn't oversaturate pascal's dynamic preemption.
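the "don't oversaturate the preemption" point can be turned into a back-of-the-envelope cost model. nothing here is real hardware data — the SWITCH cost and all the timings are invented — but it shows why finely interleaved submissions hurt a device that serializes compute and graphics, while grouped submissions mostly don't:

```python
# toy model: serialized compute/graphics execution with a per-switch
# preemption cost, vs truly parallel queues. SWITCH and the timings
# below are invented numbers, not measurements of any real gpu.

SWITCH = 0.5  # assumed ms lost per compute<->graphics transition

def serial_time(items):
    """one type at a time, paying SWITCH every time the type changes."""
    total, prev = 0.0, None
    for kind, ms in items:
        if prev is not None and kind != prev:
            total += SWITCH
        total += ms
        prev = kind
    return total

def parallel_time(items):
    """independent queues: compute and graphics overlap completely."""
    compute = sum(ms for kind, ms in items if kind == 'compute')
    graphics = sum(ms for kind, ms in items if kind == 'graphics')
    return max(compute, graphics)

work = [('graphics', 1.0), ('compute', 1.0)] * 10  # finely interleaved
batched = sorted(work, key=lambda item: item[0])   # same work, grouped
```

with these made-up numbers the interleaved schedule pays the switch cost nineteen times, the batched one pays it once, and hardware with genuinely concurrent queues pays nothing and overlaps the two halves entirely.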
>>
>>55700839
what's fascinating is that even though time spy's async is designed around nvidia's dynamic preemption instead of the parallelism in the dx12 & vulkan documents, amd still did better in the % boost increase. even if a game uses nvidia's way of doing it, gcn is more efficient at it.
>>
>>55700791
Because pascal is literally shrunken maxwell on speed, thus they didn't change the name of the new King Again
Aka more of the same, just an additional $200 premium this time, because they've gotten idiots to just give them extra money for no reason
>>
File: 1469136985372.png (498KB, 1920x1080px)
when we look at picture related with doom - vulkan you might notice something: the 1060 wins with weaker processors, but loses to the 480 with stronger processors.

with the way gcn works, it scales more with a stronger processor than a weaker one.

in doom - vulkan, async is enabled on amd cards and is used HEAVILY. the older cpus cannot feed the ACEs and CUs fast enough. you still get a boost, but not as big. slap in a 6700k and, well, in doom it turns that $200 card into a $400 one. it's able to keep the 480 fed with plenty to do.

nvidia on the other hand doesn't have async enabled. id disabled it since it gives nvidia cards a regression, and they're waiting for nvidia to release a driver to reenable async on nvidia cards. so the only benefit nvidia gets is the generally lower driver overhead, which is why nvidia gets a bigger boost with older cpus and not newer, stronger ones. the older ones cannot keep up with the driver overhead, so switching over to the new api frees up a lot of resources for older cpus, while the 6700k is strong enough that it doesn't matter, so nvidia sees less of a boost.

that's why you'll notice the stronger the processor becomes, the less of a boost the 1060 receives, and the bigger the boost the 480 starts to receive.

gcn is built to be fed and to utilize async. the more you feed it, the more powerful it becomes. give it a ton of things to do and it shines. vulkan / dx12 will always give amd a boost, but the stronger the cpu, the more boost you'll get.

if you're building a pc now, a simple 6100 is more than enough for a 480. if you're on a first generation i7, it'd be best to upgrade, regardless of whether it's amd or nvidia. if you're on a 2600k sandy, ivy, or even haswell, you're fine and don't need to upgrade. you will see a stronger boost with the 480 than the 1060 in this title.

what i love about this one is that it shows these older processors bottlenecking the $200 480.
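the "older cpus gain more from dropping driver overhead" point is really just arithmetic about where the frametime bottleneck sits. here's a toy model — the call counts, per-call costs, and the overhead factor are all made up, not benchmarks of any real cpu or gpu:

```python
# crude frametime model: a frame needs `calls` submissions costing
# `cpu_us` microseconds each, inflated by a driver `overhead` factor,
# while the gpu needs `gpu_ms` regardless. whichever side is slower
# sets the frametime. all numbers below are invented for illustration.

def frame_ms(cpu_us, calls, gpu_ms, overhead):
    cpu_ms = calls * cpu_us / 1000.0 * overhead
    return max(cpu_ms, gpu_ms)     # the slower side sets the frametime

CALLS, GPU = 3000, 8.0             # per-frame work, held constant

old_hi = frame_ms(4.0, CALLS, GPU, overhead=2.0)  # old cpu, heavy driver
old_lo = frame_ms(4.0, CALLS, GPU, overhead=1.0)  # old cpu, thin driver
new_hi = frame_ms(2.0, CALLS, GPU, overhead=2.0)  # fast cpu, heavy driver
new_lo = frame_ms(2.0, CALLS, GPU, overhead=1.0)  # fast cpu, thin driver
```

with these invented numbers the old cpu halves its frametime when the driver overhead drops, while the fast cpu only goes from 12ms to the gpu-bound 8ms floor — the same shape as the 6700k results in pic related, where extra cpu headroom stops mattering once the gpu becomes the bottleneck.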
>>
>>55700973
Do people actually still game on 1080p?
>>
>>55701032
people who value their framerate do.
>>
What's this? People posting actual facts instead of memes in a GPU thread?
>>
>>55701032
i can imagine the person buying a $200 card is most likely going to be on a 1080p screen.

eh, there are still even a lot of people on 1080p with 980 tis, fury xs, and the new 1070 / 1080. though i can see over the course of this year they'll probably end up upgrading to 1440p screens. but the under $250 segment? probably still stick to their 1080p screens, maybe just moving over to 144hz 1080p screens.
>>
>>55701044
So you mean people with shit tier graphics cards
>>
>>55700689
also, rise of the tomb raider doesn't use a lot of async either, and it's done in a manner similar to nvidia's dos and don'ts: run it more serially than in parallel.
>>
I just want my 1080 ti (for under $450) already so I can upgrade
>>
File: ASUS-24inch-1ms.jpg (240KB, 1050x700px)
>>55701095
I mean people who don't have shit tier monitors
>>
>>55701032
>Not using VG248QE with g-sync module installed
>>
File: 1451383104238.jpg (188KB, 600x450px)
>>55701131
>1080p
>tn
>not double shit tier
>>
File: 706.gif (831KB, 500x482px)
Fucking noob here. What's all this Pascal shit and did I fuck up by buying a 1070 instead of waiting for some new shit? It's my first video card and I have an i5 4690.
>>
i also want to note with ashes: it's a very heavy async compute, mostly compute, game. they do use compute + graphics, and in parallel, but with A LOT more compute queues than graphics. it's why it's sort of the odd ball, with the 1060 being so close to the 480. the 480 leads in most of the benchmarks, but only by a few fps, 1 - 5. it's also why the 1070s and 1080s showed gains (small, not as big as amd's) for the first time in ashes, instead of null gains or regressions like maxwell did. it doesn't overload nvidia's updated preemption on pascal.

one thing i would like to note, though, is how much of a gain, if any, the 1060 receives in dx12 vs dx11 in ashes. i couldn't find any dx11 benchmarks to see the gains. i know the 480 receives massive gains since i was able to find some for it.
>>
File: speccy.jpg (56KB, 499x425px)
>>55701032

I do, because my monitor's got a 144hz refresh rate and I'm on a GTX 970.

If I buy a 1080 ti, I'll still be on 1080p because I value high fps over resolution. 1080p is more than enough for me.

>>55701144

>got the model before the module was available
>tfw the module isn't even sold anywhere anymore
>>
>>55701158
you didn't fuck up, it's just not that great of a card for future proofing for dx12 and vulkan. if the title doesn't use much async it should be fine, but you won't be getting that big of a boost with dx12 over dx11. if it uses a lot of async you might not get a boost at all, or even a regression over dx11.

also, have you tried the latest nvidia driver hotfix for the latency issue that ALL pascal cards face, even if you haven't noticed it yet?

>https://forums.geforce.com/default/topic/951723/geforce-drivers/announcing-geforce-hotfix-driver-368-95/
>https://forums.geforce.com/default/topic/941579/geforce-1000-series/gtx-1080-high-dpc-latency-and-stuttering/
>http://www.overclock.net/t/1605618/nv-pascal-latency-issues-hotfix-driver-now-available

also, have you noticed any of the screen flickering that a lot of pascal users have been enduring for two months now?
>https://forums.geforce.com/default/topic/939358/geforce-1000-series/gtx-1080-flickering-issue/
>>
>>55701174
That's what everybody on 1080p says, until they move to a 1440p or 4k screen and never want to go back.

Besides, I'm already hitting 80-90 fps on ultra settings at 1440p in many games with my 980ti; a 1080ti will definitely go 100+
>>
File: Untitled.png (551KB, 1920x1080px)
i would also like to note i'm no amd shill

i'm a 1080 user btw. i'm the op who's been posting the async shit.
>>
But can it run Quantum Break at 1080@60fps with scaling turned off?
>>
File: 1399617134770.gif (766KB, 402x360px)
>>55701199
I haven't received it yet, but I will tomorrow if the tracking estimate is accurate.

Are you saying I (probably) should have gone lower tier or waited for something new? Or gone full retard with a 1080?
>>
>>55701279
well i went full retard and got a 1080 myself. i'm a dissatisfied customer since i get flickering and latency.

i have to run my card at maximum performance all the time, and always switch back and forth between 144hz and 60hz then back to 144hz to get the flickering to go away. nvidia has no fix for it at all since they can't figure out what's causing this for so many users.

the latency they tried fixing, but the hotfix doesn't fully work. you'll only notice it when doing A LOT of stuff on your computer or trying to do audio work.

we both should have waited for something new~~~~

pascal isn't terrible. it's just not the greatest for future dx12 & vulkan games. it's all going to come down to how much async is used. pascal will never get high boosts from async like amd does, and if async is used lightly, with some of these titles one might wonder whether the game would actually have run better in dx11 than dx12 on nvidia hardware.
>>
>>55701340
also, i can only assume nvidia will finally bring back a hardware scheduler and offer full hardware parallel async support with volta.

i can't see them going yet ANOTHER generation without it. this is a massive thorn in their side.

i'm really interested in vega though, especially since nvidia dungoofed and slapped gddr5x on the $1,200 new pascal titan x instead of hbm2 like so many were hoping for. amd has made it official that vega will use hbm2. in dx12 & vulkan, at 1440p and 4k, vega with hbm2 should be a beast.

also, amd does plan on using gddr5x in the future too, so newer 480 variants could use it to help give it more of a boost in the 1440p market. a few months ago they stated they can easily upgrade polaris 10 to use gddr5x over gddr5 in the future when gddr5x comes into high supply. right now supply is still fairly low.
>>
>>55700791
Where's the Titan Y?
>>
also, a way to help people understand the difference between amd's gcn architecture and nvidia's kepler-and-above architectures like pascal is to think of it as a dual core (amd gcn) vs a single core (nvidia kepler + maxwell), and with pascal, a single core + hyperthreading.

it's like using a dual core but running single threaded games on it. one core sits there going unused. the dual core goes underutilized. this was the case with dx11 titles.

with dx12 & vulkan, both cores finally have the opportunity to be used if developers utilize them. tapping that second core unleashes a ton of extra performance. nvidia is stuck on a single core design. pascal has hyper-threading, but it's nowhere close to the performance of a true dual core.

it might not be the best technical way to describe it, but it gives you a rough idea.
>>
>>55701441

Ask nvidia.
>>
Who's going to buy this?
>>
>>55701572
980 Ti owners.
>>
>>55701207
1440p is a meme.
>>
>>55701605
But It's twice the price of a 980 ti at launch
>>
File: 1426273216109.png (225KB, 423x491px)
1070 noob again. I understand almost nothing being discussed but am just happy about not being stuck using shitty onboard graphics.

Do y'all think everything will be fixed with drivers and updates and shit by the time I'm done finishing a backlog of Xbox 360/PS3 and maybe slightly higher level games? I just hope nothing's fucky with the games I'll be playing.
>>
>>55701686
I meant 360/PS3 level graphics, not actual console games.
>>
>>55701441
Apparently feminism and women in tech quotas are being successful at fighting the oppressive gene.
>>
File: 1070opengldoom.png (38KB, 742x827px)
>>55701686
it's not a software issue, it's a hardware limitation with nvidia cards.

this cannot be fixed via drivers.

the 1070 is a fast card, and amd did dungoof by holding out till later this year to release their 1070 and 1080 counterparts.

but when it comes to dx12 titles, your card will be holding itself back due to nvidia's failure. no appeal to dx12 due to small gains if async is used in a fashion that benefits nvidia, or no gains at all, and possibly regressions vs dx11.

the 1070 has enough brute force to come out ahead of a 480, but when you see pics like pic related, and compare it to >>55700973 , where the gap is 20 fps between a $200 card and a $400 card where async is used properly and heavily, it really makes you wonder why nvidia didn't take compute seriously.
>>
>>55701934
you're onto something there, but that's not how genders work in 21st century internet era biology
>>
>>55701934
kek'd
>>
File: laffing dog.gif (2MB, 294x210px)
>>55701934
you got me there
>>
File: titanxpascal1.jpg (143KB, 1200x801px)
The guy that owns Nvidia still can only afford one leather jacket.

When will you see the truth /g/?
>>
>>55703411
not just one leather jacket, but one outfit.
>>
>RX480 will be the best card in the 200-300 range!
>GTX 1060 annihilates it

>Radeon Pro Duo is the most powerful gpu ever made!
>Titan X annihilates it

AMDlets
when will they learn?
>>
File: 1451624378876.png (13KB, 569x553px)
>>55701605
>mfw I have a 980 Ti and am going to buy this
>>
>>55701131
>Asus

So not what you're posting, obviously.
>>
File: 1140914076953.jpg (572KB, 900x1200px)
Are there any decent non-reference 1070s yet, or should I keep waiting?