if you were wondering how the 390x compares to the 290x at 1080p, then here is a good comparison.
How retarded are you?
Oh look the 390X beat out the 970 at 1440p even with Nvidia's optimization
Shit, you don't even want to know what happens when the tessellation-heavy god rays are turned off
Seriously though I have no idea what's happening at the 1080p tests
I mean, I guess I do, considering Bethesda games are generally low poly count, and the earlier pipeline stages aren't strictly per-pixel, so it's the same amount of work all the way up to the pixel shaders. But still, the 970 loses a quarter of its performance under the same load.
Does that mean AMD is objectively better at pixel shaders?
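Napkin math for the resolution argument above: only the pixel-shader stage scales with resolution, so the relative load jump between resolutions is just the pixel-count ratio. A minimal Python sketch, assuming pixel-shader cost scales linearly with pixels shaded (a simplification that ignores overdraw and ROP limits):

```python
# Vertex/geometry/tessellation work depends on scene complexity, not resolution,
# which is why a low-poly game can still be pixel-shader-bound.

def pixels(width, height):
    """Total pixels per frame at a given resolution."""
    return width * height

p1080 = pixels(1920, 1080)   # 2,073,600
p1440 = pixels(2560, 1440)   # 3,686,400
p2160 = pixels(3840, 2160)   # 8,294,400

# Relative pixel-shader load, normalized to 1080p
print(round(p1440 / p1080, 2))  # 1.78
print(round(p2160 / p1080, 2))  # 4.0
</imports>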
As a Fury X owner: they left it off the list because it would've embarrassed the rest of it. With my settings on ultra at 1080p I easily average 95-100 fps.
Nice graph, OP. I'll just pretend the 970 doesn't exist. Why would I want the same or better performance, better min frames, better drivers, PhysX, ShadowPlay, GameStream, VR support, MFAA, lower power consumption, quieter cooling, and free AAA titles with my purchase, for a cheaper price anyway?
>he posts a reply of 4k red on a game made solely for nvidia cards
Posting benchmarks for one is a meme
But posting different games, different resolutions, different settings, and results from different people is just sad. You're too poor to afford a 980ti-H or a Fury X, so why even post other people's numbers?
>lowering the graphics quality
Here's an updated DX12 benchmark
The AMD Catalyst/Crimson suite has a tessellation limiter built into it that can take Nvidia GameWorks features and prevent them from over-tessellating.
All you have to do is set the limiter to x16 and your AMD card will run any GameWorks game, like Fallout 4 or The Witcher 3 with HairWorks and god rays set to maximum, at a higher FPS than an equivalent Nvidia GPU with the same settings turned on.
That's literally all you have to do, and because that isn't done in these benchmarks, they're automatically invalid: they're using x64 tessellation, which gives you no visual improvement in game but cuts performance on both AMD and Nvidia GPUs.
Nvidia not having a matching limiter in GeForce drivers is a real shot in the dick for their own customers too.
It's really sad that we can't even trust benchmarks anymore.
diplomatic way of saying "Shit we put in the game specifically to fuck AMD users"
And yes, pic related is a tessellated water table. One you never see, but which is still rendered.
There is zero difference between x32 and x64 tessellation, even at 4K resolution. I have actually checked with a fucking magnifying glass and it isn't there.
There is some noticeable difference between x16 and x32, but it's minuscule, and the FPS boost you get from x16 trumps the slightly better quality of the hair effects and light particles.
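The x16 vs x64 trade-off can be ballparked: a tessellated quad patch at edge factor N produces on the order of 2·N² triangles, so geometry cost grows quadratically while visible detail saturates long before x64. A rough Python sketch (the 2·N² formula is an approximation, not an exact hardware triangle count):

```python
def tris_per_patch(factor):
    """Approximate triangles produced by one quad patch at a given factor."""
    return 2 * factor * factor

for f in (16, 32, 64):
    print(f, tris_per_patch(f))

# x64 generates 16x the geometry of x16 for the same patch
print(tris_per_patch(64) // tris_per_patch(16))  # 16
```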
Go to the bottom corner of your desktop and click the little red icon; you'll get either a Catalyst- or Crimson-style menu depending on which drivers you have. If it's CCC (Catalyst Control Center), just go to gaming settings, scroll down to "force tessellation in applications", check the box, and set it to x16.
In Crimson you set a profile for each game you have installed: go to Games, click the game you want to tweak, then set tessellation for that game. Instead of forcing the setting for all games, it only applies to that specific game. You can even set overclocking profiles so certain demanding games overclock your GPU by a certain amount with pre-set power settings, while other games just run stock. A really handy upgrade from AMD.
Yes it does, but it hurts AMD cards more, which is precisely why they do it. As a direct result of this happening in Crysis 2, I believe, AMD implemented a tessellation limiter to let AMD users get around it.
You have no equivalent feature in your Nvidia drivers, so even though Nvidia cards can run it better than AMD, you don't have the option to limit it to improve performance like AMD users can.
It's extremely fucked up.
It doesn't. Nvidia cards run at the same settings and get better results. You might say the game is biased, but the bench isn't invalid. Your way is invalid; cards shouldn't get any special treatment in benchmarks.
x16 isn't too much? You can actually see the difference if you look closely, even between x32 and x64, in games like The Witcher 3. x32 is more or less optimal for fast-moving games; x16 is fine if you can't push higher, but don't go any lower.
In Crimson: Gaming → the game you want to edit → Tessellation Mode.
Fallout 4 is a clusterfuck of bad game design. People are experiencing massive frame drops on Intel i5s because Bethesda never optimized the draw calls and scheduling for a 4-core/4-thread CPU. They built the engine for the 8-core (two disabled) Jaguar AMD CPU in the consoles, so when the four threads are used up, the game loads the extra tasks and calls from the extra two threads in a broken order on the four cores, causing bad timings and huge idle gaps.
The game also over-renders shadow texture maps, which causes massive FPS drops in the city for both Nvidia and AMD GPUs.
Bethesda is just a terrible game developer. EA levels of terrible.
>Buy Nvidia, goy!
>Trust benchmarks, goy!
>What do you mean you caught us tesselating the absolute shit out of one of the most common objects in a game? N-no it was an accident
Their products are competitive, but the advancements they're making for the next series are a major performance changer for the market. And Nvidia's Pascal got delayed, so AMD will beat them to market.
The most important upgrades are the Zen/Polaris APUs, which will completely upset the mobile market, and getting those APUs into next-gen game consoles so that the graphics standards of video games go up.
Are the frighteningly low FPS scores on the 960 and similar models because of poor optimization for PC or bottlenecking?
I mean, that's a big leap from the 960 to the 970, but then again it's a ~$140 spread price-wise too.
The GTX 960 is a comically bad design: it's got fewer CUDA cores than the 760, and it only has a 128-bit memory bus.
The CUDA cores aren't the issue so much as the memory; that tiny bus throttles at 1080p and makes it impossible for the GPU to fully utilize 4 GB of VRAM, even if your GTX 960 has that much.
It's a hardware bottleneck, not a software one. However, if you can find a 2 GB 960 really fucking cheap, you get a nice budget result when you pair it with a cheap-as-fuck i3 dual core. AMD tends to have issues with dual cores, so if you're too poor for a quad-core chip, the 960 is your GPU.
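The bus-width point is easy to check with napkin math: peak memory bandwidth is bus width in bytes times effective per-pin data rate. The numbers below are nominal reference specs (7 Gbps GDDR5 for the 960, 6 Gbps for the 760); partner cards vary:

```python
def bandwidth_gbs(bus_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s for a given bus width and data rate."""
    return (bus_bits / 8) * gbps_per_pin

gtx960 = bandwidth_gbs(128, 7.0)  # 112.0 GB/s on the 128-bit bus
gtx760 = bandwidth_gbs(256, 6.0)  # 192.0 GB/s -- the older card has more
print(gtx960, gtx760)
```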
Zen isn't due until Q4 2016 and it'll have the IPC of Sandy Bridge. It's a fucking joke.
The CES Polaris demo showcased power efficiency and made zero mention of performance. But that doesn't stop shills like you from dreaming.
Intel will release their Broadwell-E chips in Q2.
Massive sale spikes from Skylake thanks to non K overclocking.
Pascal releases Q2, just in time for VR.
Polaris releases Q3, overpriced, underwhelming, and shit just like Bulldozer/Trinity/Steamroller/Fiji.
Zen releases Q4.
Nothing changes. AMD remains on life support - if they're lucky.
ahh the nvidia troll has arrived
>Zen isn't due until q4 2016 and it'll have the ipc of sandy bridge. It's a fucking joke.
Whoops, you done fucked up! Zen's IPC is equivalent to, or slightly higher than, Haswell's, and Zen will have unlocked multipliers across all binning tiers, letting you overclock the cheaper variants. No forking out extra dosh for a "K" variant from Intel.
Also, all Zen chips have hyperthreading as standard, which means you're going to get a 4-core i7-K-equivalent CPU for the price of a shitty i5. That's a major fucking deal that's going to shake up Intel's pricing scheme.
>Intel will release their Broadwell-E chips in Q2.
Prices stay the same or tick up slightly, with almost no performance gain over Skylake.
>Massive sale spikes from Skylake thanks to non K overclocking.
That only applies to the dual-core i3s in the Skylake line, you fucking retard.
>Pascal releases Q2, just in time for VR.
Pascal won't release until Q4, and the top-end flagship cards have been delayed until Q1 2017.
>Polaris releases Q3, overpriced, underwhelming, and shit just like Bulldozer/Trinity/Steamroller/Fiji.
I'm gonna save my response to this for a new post, 'cause it's gonna be a long one.
>Zen releases Q4.
this is literally the only thing that you have said that is factually correct
I laugh at Nvidia slaves.
I recently sold my 290Xs that I'd had since they came out.
Three-year-old cards sold for the exact price I paid for them.
Then I bought a 980 Ti, which, don't get me wrong, is a great card. It just is, no denying that.
But it cost a bit more than the dual 290Xs, and for some reason it just can't keep up on my 1440p 120 Hz IPS.
So I returned it and found another guy selling two Sapphire 290Xs for about $150 less than the 980 Ti.
So now I'm back on 290X CF: 120 fps constant in Battlefront on the 1440p monitor, and this is with the real-life mod activated lol.
The 980 Ti clocked to 1470 MHz barely did 90 fps. And since it's winter, the massive heat output from the 290Xs (about 750 watts from the wall) keeps me warm :)
Thanks based AMD for the fps and the warmth
>they build the engine for the 8 core jaguar cpu of consoles
>not realizing that they are just using a modded version of the engine that powered Morrowind
>not realizing that the game runs on consoles even worse
At least you didn't get two 980ti's.
(this is an old chart fyi - the fury x is faster now)
>Polaris releases Q3, overpriced, underwhelming, and shit just like Bulldozer/Trinity/Steamroller/Fiji.
Polaris is a major restructuring of the GCN architecture for 14/16nm FinFET and HBM gen 2. It adds the finishing touches on DX12 features: both Maxwell and GCN each had a few DX12 features, but both were also missing some. Polaris being first to market in June/July means it will be the first graphics card ever made with tier 3 support and 100% of the DX12 feature set.
The power efficiency has gone through the fucking roof: they demoed a chip at CES and it was drawing half the wattage of a 750 Ti. This is a major deal for mobile GPUs in laptops.
Their flagship Greenland is going to be an 8000-core monster with over one terabyte per second of memory bandwidth and 16 GB of stacked VRAM.
The replacement for the Fiji architecture is going to have double the bandwidth and run way higher core clock speeds because of FinFET, and sell for $350-450 (this will be the card most enthusiasts buy). Their replacement for Hawaii is going to be a major boost in performance with DX12 fully implemented and sell in the $200-250 range (this will be the biggest seller, I'm betting).
And their replacement for Tonga will be a 1080p monster that runs everything maxed out, or close to it, at 1080p for $100-150 (the budget card).
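For what it's worth, the "over 1 TB/s with 16 GB stacked" claim at least lines up with the HBM2 spec sheet: a 1024-bit interface per stack at up to 2 Gbps per pin gives 256 GB/s per stack, so four 4 GB stacks get you there. A quick sanity check using spec values (not a leaked Greenland config):

```python
# HBM2 spec: 1024-bit interface per stack, up to 2 Gbps per pin, 4 GB+ per stack.
def hbm2_bandwidth_gbs(stacks, gbps_per_pin=2.0, bus_bits_per_stack=1024):
    """Peak bandwidth in GB/s summed across all stacks."""
    return stacks * (bus_bits_per_stack / 8) * gbps_per_pin

print(hbm2_bandwidth_gbs(4))  # 1024.0 GB/s, i.e. the "over 1 TB/s" figure
print(4 * 4)                  # 16 GB total VRAM from four 4 GB stacks
```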
Even if they make major performance improvements to Pascal, it's late to market and will sell way too high for what it delivers.
>The power efficiency has gone through the fucking roof they demoed a chip at CES and it was drawing half the wattage of a 750ti
It was compared to a 950, not a 750 Ti. Plus, it's believed that chip was on the Samsung LPP process, not the 16nm high-performance process from GloFo, so we don't know how power draw scales.
>FinFET DUDE LMAO
FinFET isn't magic, you retard. You might be able to impress dumb lurkers with your bullshit, but to regular /g/ browsers it's evident you have no fucking clue what you're talking about. Stop repeating the node process and trying to pass it off as something magical. Your post is literally garbage, with zero technical information other than "finfet lmao".
Jesus fuck, it's like watching that Ahmed guy saying shit like "soldered CPUs".
>950 not 750ti
I stand corrected, sorry about that.
The performance process will be less efficient than LPP, but they're mixing and matching both processes across desktop and mobile parts: Zen CPUs, Polaris GPUs, and combined APUs.
You need look no further than the clock speed gains Intel chips saw when they switched from planar to FinFET.
You'll literally see the same kind of gains, only applied differently, because this time it's a GPU arch instead of a CPU arch.
you are one retarded faggot you know that?
>whoops you don fucked up! the IPC of zen is equivalent to and slightly higher than haswell
I FUCKING WISH
NO I REALLY DO, NO JOKE
BUT YOU'RE FUCKING DREAMING
ZEN WILL BE EXPENSIVE
PERFORM LIKE SHIT
AND LATE LATE LATE
New improvements also bring more tessellation ability.
>Their flagship Greenland is going to be an 8000 core monster with over one Terrabyte per second of memory bandwidth and 16gb of stacked VRAM
8 GB of HBM sounds more feasible. We know they'll have access to HBM2, though it's highly unlikely they'll need 16 GB on a single chip. Even the 980 Ti's 6 GB is a tad too much; 6 GB of HBM would be more than enough, given the architecture's insane compression abilities.
>The replacement for the Fiji architecture is going to have double the bandwidth and run WAAY higher core clockspeeds and sell for 350-450$
We really don't know whether clock speeds will increase as dramatically as you think they will. Also, when a new technology is released as an industry first, the prices aren't exactly affordable. See Fiji.
Just calling out amd faggots on their bullshit.
They ALWAYS try to hype shit up
here are some funny posts about the fury x a week before release, see how similar they are to the bullshit prediction posts here?
They should build a 16gb card just for lulz factor but i agree 8gb of HBM 2 will be all that is required for quite some time.
>We really don't know whether clock speeds will increase as dramatically as you think they will. Also, when a new technology is released as an industry first, the prices aren't exactly affordable. See Fiji.
Polaris is a full redesign for FinFET and is being applied to all classes of GPU, so we can expect significant clock speed boosts, at least for desktop parts. Honestly, the only things Polaris chips will have in common with previous GCN cards like Tonga and Fiji are the shader count and a few other features like the ACEs, and maybe the same number of ROPs. A lot of stuff is getting moved around, though.
The trick is figuring out how much has changed, like whether the replacement for Hawaii will use HBM or GDDR5X.
Not the guy you're arguing with but we already know the IPC is around 40% higher than Excavator so it should be similar to Haswell.
Even if it's a bit lower, it won't matter if they decide to offer the CPUs unlocked.
This. If I can get a cheap, lower-binned AMD chip with 4/8 or 6/12 cores and an unlocked multiplier that's even remotely in the ballpark of Haswell in terms of IPC, I will never buy Intel again, and I bet a lot of other people won't either.
You actually screencapped random people on forums speculating about GPU's from over a year ago just to post them here?
you should be getting a check monthly from the government for that level of autism.
At times the 1100 MHz MSI 390X is within reach of a stock 980 Ti at 1440p and 4K (it doesn't stand a chance at 1080p). My own 290X with monster cooling will do 1220 MHz core, but 1300 MHz with hybrid cooling? I don't see that happening, given the sort of voltage Hawaii needs to hit that outside of golden samples.
nah m8 I'm just bored and browsing through the archive of old amd shill threads
>trinity hype threads daily
>HOLY FUCK INTEL IS FINISHED *EVERY* LAPTOP IN THE FUTURE WILL RUN ON TRINITY
>trinity released, AMD quality as usual
>not a single word spoken of it ever again
Naturally there are no Nvidia shills in this thread, right? I would never expect such deception from Nvidia.