>>43044253 Better question, what went right? Well, they've been moderately competitive since the 4000 series graphics cards, and quite competitive starting with the 7000 series. AMD APUs are in all 3 of the major next-gen consoles. AMD's laptop market share is a lot bigger than in the past (heck, you can get a NICE AMD LAPTOP! Never before... it was always Intel for lightweight, good battery life until a couple years back). Lots is going right. AMD just has to cut the bullshit, i.e. the FX series. Focus on what it's good at and ramp up production there, maybe help their vendors with marketing.
Marketing, really, is AMD's biggest challenge. Fucking pay Playstation $10 million to make a limited-edition "AMD Edition" PS4 with some cool game bundle for Winter 2014. Pay half of the advertising. People start talking about some gimmick feature on the AMD edition and then they start buying AMD laptops. etc. It's not about quality of product.. it's about selling it.
>>43044617 A Bulldozer or derivative module is two independent integer cores that share front end resources as well as a partially hyperthreaded floating point unit called the Flex FPU. The theory behind the design was increasing compute power in a given die area.
>>43044253 Intel pulled some sinister shit at the apex of AMD's prime and pulled the rug out from under them. And all that happened was that Intel got a minor slap on the wrist, whereas AMD was crippled forever.
Also, Bulldozer was just a bad path. The original architecture was supposed to be in products by 2009 alongside K10 IIRC. Instead it got cancelled, and later AMD revived the project and built Bulldozer from the remains. It should come as no surprise, then, that Bulldozer and its derivatives are all shit.
They should have just dropped Bulldozer early on and given more focus to their small cat cores (Bobcat/Jaguar). Instead they insisted on trying to get a cut of the performance market, when they could have devoured the low power market.
>>43045390 AMD never had the rug pulled out from under them, they were never on the rug to begin with. During AMD's peak in the K7 and K8 era intel still absolutely dominated in terms of market share. It has never been at any point an intel vs AMD thing. It has always been intel dominating X86 entirely while AMD fights for 10-20% marketshare scraps.
Intel paying off OEMs to not use AMD products was just adding insult to injury, it wasn't like this grandiose act that singlehandedly crippled them.
They had a good thing going with those cheap deneb phenom IIs that would clock up to 4.2-4.3ghz without trouble, at half the power usage of an FX-8xxx and half the price of a core 2 quad Q9550. With bulldozer, they spent half a decade denying that single threaded performance was crucial; they've only just caught up to the performance level of their own CPUs from 2008.
AMD still doesn't have a CPU to compete with the i7 920, which is a five generation old intel CPU.
>>43045500 >HHmmm, geeee, I FUCKING wonder why.... Because intel was every bit as much a marketing giant as it was a chip giant. Intel has steadily had prime time TV ads for the better part of two decades, and they've had premium spot Super Bowl ads as well. Intel was also THE brand when it came to computer chips, they're the brand that people recognize. That is why they've consistently had such high market share. Their bribing of OEMs is not responsible for AMD's faltering. As a matter of fact AMD continued to actually gain market share for a brief period until intel released their Core2Duo chips, then intel broke away once again and returned to their utter domination of the X86 market. AMD never had a snowball's chance in hell of being the top dog in the market, they never had the money, infrastructure, or talent. The fact that just a few key employees leaving the company had such a huge impact on their future architecture shows you just how rough AMD was running.
>You got it the other way around son. It was adding injury to the insult No, you have it backwards. Before K7 AMD was scraping by. With K7 and K8 AMD had a huge gain in market share, but it was a short lived moment in the sun. Intel didn't cripple AMD's marketshare. AMD has always had poor market presence because they've never had much of a marketing department.
Intel was bitch slapped by the Athlon XPs and fucked in the face by the XP 64s. The only thing that saved them was the C2D after the Pentium D flop and their bullshit x86 patent which at this point should be voided.
bulldozer failing miserably is what went wrong. the older stars based cores even today are actually more powerful than the bulldozer based cores.
for example, a 4ghz deneb core is faster than a 4ghz piledriver core. it takes a piledriver core overclocked to 4.2ghz to be faster.
all bulldozer has going for it is 8 and 16 core processors. if you need heavy multitasking and want amd you go bulldozer. if you want maximum performance at 6 cores and under and want amd you go stars.
what's sad is that stars has the same single core performance as a core 2 quad yorkfield based core from 2008 clock for clock. and with bulldozer being slightly slower than stars, it has weaker performance than a yorkfield core clock for clock.
amd would have been better off refining and shrinking the stars based architecture instead of trying to build a new architecture from scratch like they did with bulldozer.
with the failure of bulldozer amd cleaned house by laying off most of its upper management and firing the ceo that was involved with the creation of bulldozer. they brought back a lot of former employees that helped create the athlon 64 processor and are developing a new processor that's closer to stars than bulldozer in its architecture.
>>43046086 You're on drugs, or flat out retarded. The 2.9ghz Llano A8 3850 with no turbo is actually just about 10% behind the 3.8ghz base and 4.2ghz turbo Trinity A10 5800k. In FPU heavy workloads the Llano chip is often actually faster. All that Trinity had going for it was a more powerful and power efficient GPU. Llano had 400 VLIW5 shaders, Trinity had 384 VLIW4 shaders that have higher utilization and better energy efficiency. The GPU is the single area where Trinity actually stands out. The utterly massive step backwards in IPC here is nothing but embarrassing.
>>43046086 it was only a dead end because amd's management team behind bulldozer kept pushing bulldozer over the continuation of stars. it was found out that the management behind bulldozer was hiding performance results and only pushed bulldozer over stars because bulldozer was going to cut production costs down significantly. the entire point of bulldozer's design was being able to slap MOAR CORES per unit of die space more cheaply, alongside a more automated design flow (which one former amd engineer said reduced ipc by 20% vs doing it manually). producing a quad bulldozer is far cheaper than producing a quad stars.
they only killed stars because they wanted to pinch pennies. which is why they fired everyone behind bulldozer and are going back to a design that resembles stars more than bulldozer.
>>43046654 Kaveri's IGP has a decent lead over the HD 5200 Iris Pro, and that's with it being horribly bandwidth starved. intel can't compete when it comes to GPUs; the problem is AMD can't compete when it comes to CPUs. Even a Haswell i3 beats the A10 7850k in nearly everything at stock clocks.
>>43046782 DDR4 isn't confirmed for Carrizo, it's only confirmed for the server variant of the chip. The package and FM2+ socket can't support DDR4. Companies run tests of things all the time that they never produce, or even have plans on producing.
>>43046692 Well Intel seems to generally add 10FPS per generation on new games, and around 20FPS on older games. A few more generations of that and 750 Ti's will be worthless, then 760's, and so on. The original HD Graphics played WoW and CoD4 flawlessly, and the HD4600 blows the original away.
I have high hopes for the next generation of Iris.
>>43046864 AMD's IGP could literally double in performance just by having fast enough memory, like quad channel DDR3/4. Not a single thing would need to be touched architecture wise and they could gain that much performance for no increase in power consumption. Though on top of all of that AMD actually is improving their GPU arch, so they'll continue to stay very far ahead of intel on the GPU front.
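The "double the bandwidth, double the IGP performance" claim above is easy to sanity-check with back-of-envelope arithmetic. This is a minimal sketch (my own helper function, not anything from AMD) of theoretical peak DRAM bandwidth: channels times transfer rate times bus width.

```python
# Back-of-envelope peak DRAM bandwidth: channels * transfer rate (MT/s) * bus width.
# A standard DDR channel is 64 bits wide, i.e. 8 bytes per transfer.
def peak_bandwidth_gbs(channels, mt_per_s, bus_bytes=8):
    """Theoretical peak memory bandwidth in GB/s (decimal GB)."""
    return channels * mt_per_s * bus_bytes / 1000.0

dual_ddr3_2133 = peak_bandwidth_gbs(2, 2133)   # what a Kaveri-class APU gets today
quad_ddr3_2133 = peak_bandwidth_gbs(4, 2133)   # exactly double the feed for the IGP
```

So going from dual to quad channel at the same memory speed literally doubles the theoretical ceiling (~34 GB/s to ~68 GB/s), which is where the "double the IGP performance" argument comes from, assuming the GPU really is bandwidth bound.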
>>43047445 Their new core arch is due out sometime in 2016, and it's going into APUs, server chips, and a true successor to the FX desktop parts. They'll be 14nm FinFET parts, and by all accounts should actually be somewhat competitive with intel again.
Keller's new K12 arch and its sister X86 core are undoubtedly going to be wide, high IPC designs. The only things he has ever designed have been wide cores with high IPC.
>>43044504 Marketers are too scared to take big risks. They don't understand that all you have to do is put the name out there on a good product and people automatically associate that with a future purchase.
That's what Beats did and they made a shit ton of money really fast.
On the "vs Nvidia" front: cyrpto mining drove the price up so that nvidia had better performance/price. Gamers bought Nvidia cards because of this, but AMD still made money from the miners. Because the algorithm for cryptomining gets harder over time, it is no longer economically viable to mine, so AMD GPU prices dropped to below equivalent Nvidia prices. Now, AMD has better price/performance. AMD GPUs can lose to some cards at 1080p, but beat them at higher resolutions: AMD is more optimized for 1440p/4K than nvidia. Depending on how soon 4K becomes more viable and how nvidia reacts, this could be part of a major payoff for AMD. Also, the ps4 and Xbone having AMD APUs makes it more likely that AAA games will favor AMD over Nvidia.
On the "vs Intel" front: They simply cannot make CPUs better than Intel can. Intel has more money and is able to make more efficient CPUs through precision, while AMD attempted made a bet and lost. The bet: games would start to support more threads, and have less dependency on single-thread performance. If you look at synthetic benchmarks, AMD has very good performance/price: unlocked 8 cores for less than the cheapest i5? That's great! Unfortunately for AMD, we live in a time where an unlocked pentium is just as good as an i5 in most cases (because multithreading is too hard for AAA developers), so Intel's ability to have better single thread performance beats out AMDs ability to have many cores cheaply.
I'm worried about AMD when it comes to the desktop market. If you look at the Steam hardware survey, only 1/4 of processors are AMD and for every 3 AMD video cards, there are 5 NVIDIA video cards. This includes AMD's integrated graphics.
It is not a good thing for consumers if AMD gets practically pushed out of the picture. This just gives NVIDIA and Intel more headroom to raise prices if nothing can compete.
>>43048423 AMD is dying on the GPU front because AMD is completely unable to employ the same ruthless tactics to the extent nVidia does
they're pushing PhysX on a whole lot of crap recently, and it's been made to purposefully take a shit on your cpu if you try to run it without an nVidia card. on the other side, AMD's TressFX can be used with nVidia's cards
i guess they don't know the concept of fighting fire with fire
>>43048629 AMD doesn't go in for that proprietary bullshit. TressFX is just a simple DirectCompute function that leverages a totally open source physics library. It's really funny that GPU compute is completely vendor agnostic, and Nvidia still wants to push its own inferior proprietary standard.
>>43048661 It will be. It's in a closed beta now, likely because of their close ties to Microsoft and DirectX, but one way or another everyone is going to end up using it. There are like 47 developers in that closed beta now, and one of them is Rockstar. GTA V is going to be a Mantle game, so they obviously know something the general public doesn't.
>>43048767 >But will games end up needing that power in the future Yes. Games aren't getting any simpler, and will continue to require more and more calculations which can't be offloaded to the GPU, and games will always need better and better processors as time goes by.
>>43048677 >>43048752 When people start writing HSA renderers and engines. The ultimate goal is to have the same sort of write once run anywhere compatibility as JAVA. Correctly compiled code could run on an X86 desktop, or a future ARM based desktop.
As ARM continues to advance there is nothing stopping the IP from reaching parity with X86 in performance at the same or better levels of power consumption.
>>43048837 >what is mantle Something that will never catch on, and has nothing to do with my post. >implying using mantle will make any of the underlying calculations "simpler" >implying the hardware won't have to perform those calculations anyways
>>43048809 >why devote manpower to develop something to throw it into open source for your competitor to take advantage of
Ever hear of a noble cause? AMD seems to take up the reins of that horse quite often. Could be that they simply lack the market share to go the proprietary route, and don't want to risk losing any more market share, but they've always been about the whole open standard thing. Back before Nvidia ever purchased Ageia AMD was working on developing OpenCL based physics accelerators, and they're in wide use today.
>>43048857 A low level and highly threaded graphics API.
>>43048888 >one company using it for one game suddenly implies that it will be the Next Big Thing >implying this same idea hasn't been tried out before by other companies >implying a new and unknown proprietary standard can beat a long-standing superior open system like opengl
Until Mantle gains more exposure so we can make an accurate prediction, you are just spewing shit out of your ass.
>>43048900 Cortex A57 cores are more powerful than Jaguar cores. AMD's 25w Opteron A1100 has more CPU power than the PS4 or Xbone.
>>43048922 GTA V is a mantle game, that's a fact. Nearly 50 developers are signed on, using it for their in-development titles. Mantle is, as a matter of fact, the next big thing. YOU are pulling things straight from your ass out of pure unadulterated butthurt, you giant baby.
ok so this Mantle thing looks like it's just shifting workloads for more efficiency, which shifts the pressure to upgrade parts from GPU+CPU to mostly GPU... but AMD makes both CPUs and GPUs, and more GPUs are bought from nVidia than AMD
>>43049301 Yep, and so do a lot of people. Intel actually helped keep them in the spotlight during the 90's. Another huge marketing icon for them was their plushie clean room worker, most people just thought it was an astronaut. I remember back in the day every single computer store from mom and pop places to CompUSA had at least two of these sitting on or in a display case. You couldn't go anywhere that sold processors without seeing one. Even Bestbuy had them all over.
Carrizo is Kaveri done right with stacked eDRAM on the die.
That's right, dedicated graphics memory right on the processor die. They are literally putting dedicated graphics on the APU.
Also thanks to HSA being more developed, the CPU can use that eDRAM space as well like a glorified L3 cache that's giant and fast.
Skylake better drop bombs, because AMD's about to wind up the Warthog and use their A10 to carpet bomb Intel's shitty attempt at graphics.
Carrizo's CPU performance increase will be larger than their past performance increases thanks to having such a huge ass cache, if properly utilized their little module idea may finally start to pay off in a big way.
If they can manage to slap 4 of these modules on and make an "8" CPU core APU with decent graphics, I guarantee they'll be the only name in gaming laptops and mid-range desktops which are cost effective and fantastic.
I'm talking i5 Haswell performance with dedicated graphics for the price of an i3.
Intel's Broadwell delays don't bode well.
AMD is poised to strike the Intel demon at the throat.
>>43049990 Carrizo does not have stacked eDRAM. HBM is not eDRAM, and it is not on Carrizo's die. HBM can go on the same package as a die, it's not being fabbed on the die itself. Carrizo doesn't use HBM at all as a matter of fact; current HBM modules are too low in density to replace conventional DRAM yet. Having a separate cache just for the GPU breaks HUMA entirely, so it's simply fucking retarded to buy into that for a second.
Stop repeating shit from fucking retarded clickbait websites. WCCFtech is a goddamn rumor mill that literally fabricates stories for ad revenue, and that isn't even a "rumor." It's something they made up entirely on their own.
The die size and package of Carrizo won't even leave enough room under the IHS to fit a single HBM module, let alone four of them.
>>43050087 >trying to imply, very poorly, that I'm an intel fanboy
No, you're just a shit eating retard. The whole point behind the memory architecture is that it eliminates the need to copy data from the CPU's memory space before the GPU can execute on it. This is where the performance benefits of HSA come from. Less time is spent moving data, so it gets executed faster. Memory operations in reality can actually take hundreds of cycles to complete, and in a conventional system moving data from the CPU to the GPU requires several cache writes and flushes. Believing even for a split second that this crucial cornerstone of HSA would be abandoned makes you a literal retard.
>>43050161 and this is currently in production HBM straight from Hynix. They are making 1 gigabit modules, that's just 128 megabytes for you plebs. Magically fitting four of these modules under the IHS would only give you 512 megabytes of RAM.
The reason why Nvidia's Pascal and AMD's future GPUs that make use of HBM are being held off until 2016 is because they are waiting on high density HBM2 modules.
So really, your ass could not possibly be any more blasted right now.
>>43050308 Be more butthurt, you fucking retard. You're spreading a complete fabrication by the same group of shit rags who said Kaveri would use SMT and have 4 threads per core. They do nothing but make up bullshit to get ad revenue from tech illiterate retards, and you're clearly part of their target audience.
>>43050244 You really wouldn't need any more than 128MB though.
Right now with an A10-7850K, the GPU side still reserves an amount of system ram for what windows considers to be VRAM. If AMD simply made that a 128MB framebuffer, with all extra ram (ie. anything a program wants to store in gpu memory) coming straight from system ram via huma (game devs wouldn't need to change a thing since AMD's GPU drivers could handle the huma functions here), that would speed up the GPU side shitloads without breaking any current part of huma.
>>43050554 That's not how memory works; you're talking about having side-port RAM. To make that work you need to write software specifically to use it, or you need logic dedicated to handling your cache hierarchy. It would break HUMA, and it will never be done.
>>43050853 Having little sockets to replace DIMM slots a la HMC modules is the likely path for HBM on the desktop. Putting the modules directly on the CPU/GPU package increases costs, which is perfectly fine for an enthusiast GPU, but it doesn't fly for a CPU. Not to mention the fact that a desktop system still needs expandability, as not everyone needs or wants the same amount of RAM.
>>43050998 >AMD codenames the chips something badass like Storm hammer, Raven hammer, etc
>>43051039 AMD doesn't produce separate dies for mobile chips. Llano, Trinity, Richland, and Kaveri mobile chips are all binned desktop chips. Not saying that wccftech shit has any credibility though, not that it matters. AMD's issue with caches has always been latency, and speed. Well that and their density. If they could manage to unfuck their caches/IMC then they could save a lot of die space and get a sizable performance increase at the same time.
>>43048981 Mantle spreads the CPU load over more cores, benefiting many cores with lower IPC over fewer cores with high IPC. That means better performance on AMD CPUs, making a chip like the 8350 competitive with the 4770k even in gaming. Since AMD is unlikely to manage on-par IPC with intel in the near future, it'll be a net win for them if Nvidia implements it.
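The "spread the CPU load over more cores" idea boils down to letting every worker thread record its own command list and keeping the final submit step cheap. Here's a toy sketch of that pattern (plain Python strings standing in for draw commands; `record_commands` and `build_frame` are made-up names, not part of Mantle or any real API):

```python
# Toy model of multithreaded command-buffer building: each worker records draw
# commands for its own slice of the scene; submission is a single cheap merge.
from concurrent.futures import ThreadPoolExecutor

def record_commands(object_ids):
    """Pretend to encode draw commands for one slice of the scene."""
    return [f"draw:{i}" for i in object_ids]

def build_frame(scene, workers=4):
    chunk = (len(scene) + workers - 1) // workers
    slices = [scene[i:i + chunk] for i in range(0, len(scene), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        lists = pool.map(record_commands, slices)   # order-preserving
    # single submission step, analogous to handing finished lists to the GPU queue
    return [cmd for command_list in lists for cmd in command_list]

frame = build_frame(list(range(100)))
```

The contrast with classic D3D11/GL is that there the driver serializes most of this work onto one thread, which is exactly the bottleneck low-IPC many-core chips suffer from.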
>>43051177 >With DirectX12 launched and OpenGL 5.0 launching next month, both with low overhead, Mantle is basically dead on arrival as predicted Mantle was made to replace them; it's fundamentally superior and something game devs have been waiting on for years
>>43048957 According to the GDC conference talk Approaching Zero Driver Overhead in OpenGL: opengl is faster when using the latest methods and extensions and has support for Intel, AMD and Nvidia... while mantle only works on AMD GCN cards under windows.
>He basically concedes here that Mantle is NOT a generic API, and is cutting a few corners here and there because it only has to support GCN-based hardware (after all, if both DX12 and Mantle were designed to be equally generic (as the original claims about Mantle were: it would run on Intel and nVidia hardware), then there would be no corners to cut, and no extra (measurable, note that word) CPU overhead to avoid). The only thing they are avoiding here is the abstraction overhead that is in DX12, which allows it to support GPU architectures from multiple vendors/generations.
>So, that leaves virtually none of the original claims about Mantle… We’ve already seen earlier that Mantle would not be a console API, and now it is not going to be a generic API either, but it will remain specific to AMD.
From 2008 to 2012 AMD was re-using old designs and bolting on more cores because they had no smart people to design a new, efficient architecture, and those they did design completely failed in every way. As a result they stopped competing in the high end/gaming market and focused on midrange and budget builds.
ATI has been keeping them alive since 2009. Without them they'd have been in the shitter several times over.
Recently people have been coming back to AMD and they seem to be improving.
>>43052713 Except this is untrue. Back in the p4 days, AMD processors blasted the fuck out of intel p4s. But intel paid off the OEMs to get them to not buy AMD chips. AMD sued, and a while back they finally won the lawsuit and a couple billion dollars. A drop in the ocean compared to the damage they sustained from anti-competitive practices.
Then core2duo came out and put intel in a strong (but not overwhelming) position. The Phenom I sucked ass, but the Phenom IIs were pretty good, and competitive. But then sandy bridge happened and it was just over. Intel dominated.
>>43052759 > Graham Sellers (AMD), Tim Foley (Intel), Cass Everitt (NVIDIA) and John McDonald (NVIDIA) will present high-level concepts available in today's OpenGL implementations that radically reduce driver overhead by up to 10x or more.
And there are some benchmarks that you can test too.
So a presentation where the three major vendors participate actively seems pretty unbiased to me.
I am excited to see these little cubes and the sockets on mobos. Just pop a few in and bob's your uncle.
>>43050938 Cache size is not relevant in a "more is always better" sense. You could drop 2MB per core on a design that small and find that half of it is relatively useless for the intended workload. Things like the efficiency of the branch prediction units and the pipeline layout drastically affect how much cache is needed, or in this scenario how much could simply be wasted.
Basically a 15w mobile part is more than likely not powerful enough to need that much cache, while a 65w desktop part could easily utilize double.
>>43044253 My only problem with them is that they treat their customers like complete idiots. >go to drivers download page because sudo apt-get doesn't work >auto select based on your hardware section >only four actual downloads are on the site, two of them for Linux and one is a Windows beta Are Windows babies seriously so dumb that they need a help section to hold their hand through a download?
It's actually made for modern GPUs by modern developers.
OpenGL's state model doesn't fit our current hardware and drivers very well, leading to shitty code in both applications and drivers as huge cognitive effort is spent on mapping basic compute tasks into a huge clusterfuck of a state machine and its objects within objects, in both directions. Once from an application to the OpenGL state model, then from the OpenGL model to the hardware.
Any efficient driver is full of bizarre heuristics, like trying to guess if we need to recompile the shaders since we need to draw something and arcane state switch GL_ARB_REVERSE_POLARITY is not in the position it was in when the shader was compiled. Maybe the shader doesn't interact with it? Let's inspect the entire state again and see, then recompile if we have to. Yes, the driver has to do this all the time.
The API for managing this state is full of antipattern-ish crud like the infamous bind-to-edit and a general lack of type safety even below 90s' C language standards. It's difficult to write code that doesn't create or suffer from side effects, making the crud hard to hide in a reusable manner.
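The bind-to-edit antipattern mentioned above is easy to show with a toy model. These are hypothetical Python classes written for illustration, not real GL calls; the first mimics the classic bind-then-modify pattern, the second a direct-state-access style where the target object is always explicit:

```python
# Toy illustration of GL-style "bind-to-edit" vs a direct-state-access style API.
class BindToEditAPI:
    """All edits go through hidden global state, like classic glBindTexture."""
    def __init__(self):
        self.objects = {}
        self.bound = None

    def bind(self, name):
        self.bound = name

    def set_param(self, key, value):
        # Side effect on whatever happens to be bound -- any helper that binds
        # something and forgets to restore the old binding silently corrupts state.
        self.objects.setdefault(self.bound, {})[key] = value

class DirectAPI:
    """Direct state access: the target is an explicit argument, no hidden state."""
    def __init__(self):
        self.objects = {}

    def set_param(self, name, key, value):
        self.objects.setdefault(name, {})[key] = value

gl = BindToEditAPI()
gl.bind("texA")
gl.set_param("filter", "linear")
gl.bind("texB")                       # the danger: every later edit hits texB
gl.set_param("wrap", "clamp")

dsa = DirectAPI()
dsa.set_param("texA", "filter", "linear")
dsa.set_param("texB", "wrap", "clamp")
```

Real OpenGL eventually grew a fix for exactly this in the ARB_direct_state_access extension, but the state-machine model described above is still the default.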
The group managing the model's specs is a typical aimless committee that barely puts any effort into compliance enforcement, leaving every GL implementer to do limited testing against their own interpretation of the specs.
It's hard to imagine a modern API being worse than this in any way.
>>43053823 'kay, the water definitely looks better here. Smoke still has problems right next to the stack, but yeah, looks bretty cool. I guess it's hit and miss when it comes to how the devs handle it. Borderlands 2 just looked bad, and AssCreed was okay. Still sticking with my 7970 though.
Nvidia purchased ageia back in 2008 for around 150 million dollars and then kept developing it, adding more features and further optimizing the code, even porting it to mac and linux, creating Nvidia GameWorks.
Sadly Gameworks is closed source and only works with nvidia cards, so games that support gameworks use it only for some eye candy here and there but not as core functionality, because they need to support the rest of the vendors without the game looking completely broken.
So if somehow nvidia decides to open source gameworks the resulting games could be amazing, but where's nvidia's profit in open sourcing a technology they've spent more than 150 million dollars on?
>>43053885 PhysX only calculates physics, not how it looks; that part is up to the game devs.
>>43054433 >photo realistic particles and water Yeah, but I also don't want it to look like it was done in OE Cake, where everything is hydrophobic as fuck and blobs up. >calculate particles, water, fur So, calculations... OpenCL?
>>43054737 >>43054766 We need a FOSS alternative to this, something that doesn't inhale poo through a straw. Maybe set it up to be a drop-in replacement for PhysX. Didn't someone somewhere say that APIs couldn't be copyrighted or something?
>>43054791 >Didn't someone somewhere say that APIs couldn't be copyrighted or something? I'm not sure... I thought that was just for developers writing against an existing api...
>We need a FOSS alternative to this Well bullet physics is open source and they're working on gpu acceleration; rigid bodies work pretty well... some donations would be welcome. http://youtu.be/IPayi38vQws
Anyway they are far from what gameworks could provide for now.
1. Nvidia decides to open source gameworks. 2. A big company decides to create an open source alternative. 3. Somehow an open source alternative manages to get to the same level as gameworks without too much money invested. 4. Someone decides to make a millionaire's donation (~$100 million) to bullet physics or similar.
I have heard that amd wants a foss gameworks alternative, but I'd prefer they fix opencl and opengl first before trying something else.
Nvidia open sources gameworks... mmh sounds cool for the next April fools.
>>43048752 And games are getting less CPU bound by the day. If physics can be run alongside graphics on the GPU, there's really not much left for the CPU to do. Certainly nothing that can't be done by the future equivalent of a cortex a9 quad-core.
>>43048865 Yes, it will make the underlying calculations simpler. D3D and OpenGL have a ridiculous amount of overhead, causing the hardware to be laughably under-utilized (technically, a lot of the work done is redundant, so it's not that the silicon is idle, it just doesn't do any useful work most of the time), which should be obvious when literally every AAA game needs a special-snowflake patch. Mantle fixes all this. Half the point of Mantle is to detach the graphics workload as much as possible from the CPU, to the point where you'll get the same FPS on the most expensive FX as on the cheapest athlon.
You are right. The problem is retards who compare two year old Piledriver to 6 month old Haswell and then go "wow, AMD sucks ass at efficiency and performance!"
Their GPUs are great. Their drivers have gotten way better. They have good APU wins in consoles and there's rumors AMD got the 3DS successor as well. Mantle is taking off better than anything Nvidia has introduced like PhysX or Gameworks.
Piledriver holding up so well on HEDT for all this time is an accomplishment.
People were hyping bulldozer so much, then it was shit compared to sandy bridge, especially at power efficiency. The only area where it was better was multithreaded apps, but very few consumers use such programs, so Intel's CPUs are simply better these days.
>>43044617 What AMD calls a "core" is actually only an ALU cluster (some call it an "integer core", which is misleading). The FX 8350 is a 4-module CMT design. CMT is a technique that duplicates certain parts of a core; it is used to increase throughput while staying space-efficient. The frontend (the part that predicts, fetches and decodes the complex instructions) and the internal cache management starve the backend (execution stage) for resources when under heavy load. So if you have two heavy threads running on a module, one of those threads will suffer a performance penalty of ~20%, because the frontend and internal cache management are shared.
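The CMT trade-off can be sketched as a one-line throughput model. The ~20% sharing penalty is the figure quoted in this thread, not an official AMD number, and `module_throughput` is a made-up helper for illustration:

```python
# Toy model of a bulldozer-style CMT module: the first thread gets the whole
# front end; each additional thread on the same module loses `penalty` of its
# throughput to shared front-end / cache contention.
def module_throughput(threads_on_module, penalty=0.20):
    """Relative integer throughput of one module (1.0 = one unshared core)."""
    if threads_on_module <= 0:
        return 0.0
    return 1.0 + (threads_on_module - 1) * (1 - penalty)

one = module_throughput(1)   # a lone thread runs at full speed
two = module_throughput(2)   # more total work per module, but the second thread is slower
```

This is the whole pitch: a second thread costs far less die area than a full extra core, and under this model still buys you most of a core's worth of extra throughput, at the price of worse per-thread performance.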
>>43046692 The iris pro is actually doing very well against the kaveri IGP (the kaveri IGP, however, has the advantage at higher resolutions). I remember reading that past 2133MHz kaveri's IGP doesn't scale that well; that could be because of the memory controller bottlenecking (which can be slightly overclocked) and the internal memory system (which sadly cannot be fixed). AMD really needs to fix their memory handling in their future designs (which I think they will in their next x86 architecture).
Remember that intel is pulling up on AMD's IGP rapidly. The iris pro is essentially twice as big as the HD4600 (40 EUs vs 20 EUs), and broadwell should increase that by 40%, and so should skylake. AMD isn't in a position to do the same without having to redesign their IGP architecture.
>>43046689 HSA is only useful in certain workloads (where GPGPU and the SIMD cluster fall short). I doubt Intel will join the HSA Foundation (which is open), but they will develop a similar technology.
>>43048316 AMD tried with a whole new architecture design. A design that relied too much on the software. This is one of the biggest flaws of the whole modular design: it relies too much on software to utilize itself.
>>43048423 Intel's pricing scheme has remained the same throughout the entire core series. If prices rise, people will wait longer before upgrading. Developers code for "general" hardware, so if the hardware remains the same, developers will develop their software for that, and the hardware will last even longer before there's a need to upgrade.
Intel could potentially increase their pricings for their higher-end products with an additional ~20% without suffering any loss.
>>43048451 I second this. A company will do anything for profit. Choose the best product for your needs and usage, and don't get stuck in the "I only buy Intel (or AMD, or Nvidia)" mindset.
>>43048557 However, the GPU market is shrinking. The need for a dedicated GPU gets smaller and smaller as IGP performance rises. Once IGPs can handle 4K, the GPU market will be dead for anything other than GPGPU.
>>43048677 CISC will remain the favored choice for desktops for quite a while. Maybe in 10-15 years ARM will have better performance. CISC vs RISC is a much more dramatic fight than Intel vs AMD.
>>43048820 Implying the SIMD clusters on today's processors aren't well suited for gaming? HSA will not bring big advantages in most gaming scenarios.
>>43048846 Today's "consoles" aren't meant to be gaming-only consoles but multimedia centers. They could have done what they did in the PS3 era and gone with a more customized CPU, which would have led to more performance but required more dedicated coding.
>>43048878 If developers wrote more efficient software, it would solve a lot of issues. However, DX11 is still crippled.
>>43048879 Yes. It will have a sister architecture (which is still unnamed to the public).
>>43056259 Depends on the workload. They would need to successfully utilize all 8 ALU clusters to get performance similar to an i7 (and only if the work is parallel, predictable, and not too complex instruction-wise). And we aren't even mentioning SIMD workloads.
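That "depends on the workload" caveat is basically Amdahl's law: the 8 clusters only pay off when nearly everything parallelizes. A quick sketch with made-up parallel fractions (illustrative numbers, not measurements of any real chip):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# p = parallel fraction of the workload, n = cores (ALU clusters).
# Even at p = 0.90, eight clusters deliver well under 5x.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    print(f"parallel fraction {p:.2f}: 8-way speedup = {amdahl_speedup(p, 8):.2f}x")
```

This is why eight weaker integer cores need highly parallel, predictable work to keep pace with four stronger ones.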
And to answer OP's question: AMD made a design that relied too much on the software to fully utilize itself. Also, the design is REALLY unbalanced.
However, their Jaguar architecture is overall a much better and more balanced design than their module designs. They just need better performance per watt, because that is what counts in that market.
Overall AMD made some bad decisions, and are now recovering.
>>43056156 The Iris Pro is an extreme situation, it's literally a solution Intel can only put into expensive high end chips, unless they can find a cheaper substitute for the very fast memory. Compare the Iris (literally the Iris Pro sans the very fast memory) and AMD's graphics advantage is pretty clear.
Plus apparently the Iris Pro drops frames like crazy. I'm not saying Intel aren't catching up, but they clearly have a few more generations to go to catch up without taking expensive shortcuts.
>>43056712 The Iris Pro is an "exclusive" product. Intel would not put it in lower-end systems, as it would compete against their own products. (That's also why the locked processors feature certain ISA extensions that the unlocked ones don't (not including DC).)
I expect Intel to reach similar IGP performance with Broadwell (again, only in their higher-end products), unless AMD releases a new IGP design with their upcoming x86 architecture.
>>43056156 >The iris pro are actually doing very good against the kaveri IGP. (The kaveri IGP however have the advantage on higher resolutions). AMD's APUs beat Intel's iGPUs in basically every situation aside from Tomb Raider, which is mainly CPU-bound.
Well, if you have GPUs that are bottlenecked by DDR3 and then someone throws a gigantic, super expensive cache to use as VRAM on a chip, it should be no surprise.
>>43057078 Nope. Kaveri is bandwidth constrained and those shaders can't get data fast enough. Improving Carrizo shader performance will not do much.
Which is why you keep seeing low TDP products being thrown around for Carrizo. It's going to be Kaveri levels of performance at much lower TDPs.
Much like Steamroller APUs were not impressive when comparing 95w SR to 95w PD, but when you drop to 35w SR looks fantastic.
28nm bulk is awful for HEDT parts, so AMD is avoiding it for now. We won't see AMD reach into the 95w+ performance segment with anything exciting until we get 20nm or 14nm FD-SOI, probably with FinFETs too. Which is sadly probably going to be when we get K12 and the x86 sister core.
>>43057078 >Not to say it will be terrible, it'll probably be a nice 20-30% bump over Kaveri, That would probably bring them up to par with sandy/ivy with comparable or better power usage. That's pretty impressive imo and I wouldn't mind that at all if the price was right.
>>43057161 >>43056156 > I remember reading that past 2133MHz kaveris IGP doesn't scale that well, that could be because of the memory controller bottlenecking (which can be slightly overclock and the internal memorysystem (which sadly cannot be fixed). AMD really needs to fix their memory handling in their future designs (which I think they will at their next x86 architecture
Nice, now can you show me some official benchmarks where DX11 on Nvidia is 64% faster with the special drivers?
I am not arguing that Mantle is always faster, just that it can actually be faster.
It should not be that hard. I found the Star Swarm benchmark showing what Mantle is capable of in 5 seconds of searching google. Surely you can find a DX11 benchmark for Nvidia that shows the same sort of improvements if Nvidia DX11 and Mantle are on par with each other. Or is it only in marketing slides that you can see big performance gains with DX11 Nvidia drivers?
The only thing you get when you google for DX11 Nvidia benchmark improvements are either graphs showing at best 10% increase in performance or marketing slides talking about 64% performance improvements.
The kid has enough to chew on. He probably has an expensive Nvidia product, and Mantle coming along and letting a system with a cheaper CPU and a much cheaper graphics card outperform his setup is hard enough to swallow.
I'm sure he will enjoy his 5% performance increase in DX12 in Windows 9 Cloud Edition as he tells himself it's just as good as Mantle.
>>43057459 http://www.pcworld.com/article/2365909/intel-approached-amd-about-access-to-mantle.html How about you pull your head out of your ass and take 2 seconds to google a simple phrase? It's old fucking news by now.
>>43057464 If they weren't planning on implementing it they would have just ignored it, what other reason would they have to want access? They don't make games, and trying to reverse engineer it to get it out faster under a different name would be a similar disaster to gsync.
Mantle exists not because AMD wanted to create a new API out of the blue, but because Microsoft would not give game developers features they wanted out of a graphics API.
They also wanted something that could be cross platform and not a disaster to work with, like OpenGL.
Game developers approached Nvidia about making a competing API to DX and Nvidia refused. AMD had no problem doing it, though.
Mantle scared MS enough to create DX12, which rumors suggest is just a copy-paste of Mantle with a few tweaks to get it running on everything else. Which is funny, because it segments the market into either DX12 + W9 or Mantle + GCN on other versions of Windows (and probably Linux and OSX eventually, as it's possible).
The performance thing is not even the main goal of Mantle. It is a nice perk of owning a GCN card. But the main goal is an API that is easy to port to other platforms (xbone, PS4, Windows, Linux, OSX, next Nintendo handheld, etc).
>>43057802 Companies are constantly asking for access to things they MIGHT use in the future. I too believe they will support it if Mantle continues as it has. That way, if they do use it, they already have access and can implement it faster.
>>43057914 Mantle exists because AMD wants more control over the software. I have previously mentioned how some of AMD's biggest flaws are related to relying on software.
>>43057938 Bigger companies do it too. If they see something that might have potential, they will most likely wait for it to become "real" competition against DX and then implement it. The earlier they have access, the sooner they can put out support for it (in case it becomes something big).
>>43057858 Pick an Nvidia and an AMD card in the same price range, test Star Swarm on both, and compare results.
You are going to see similar results for DX Nvidia vs Mantle AMD, and a lot better results for Mantle AMD vs DX AMD.
They should have created some OpenGL extensions to push into a later OpenGL version as standard; that way it's easier for the rest of the vendors to support it, and it's open source from the beginning.
P.S. OpenGL's new methods give you 10x more draw calls; Mantle is supposed to give you 9x more draw calls (from the GDC conference where AMD participated actively too).
Xbone and PS4, as far as I know, both refused to use Mantle. Windows: here it werks. Linux: we'll see. OSX and the next Nintendo handheld: don't know if they'll support it.
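Those "10x more draw calls" claims come down to per-call CPU overhead, and the batching idea that both Mantle and AZDO-style OpenGL chase can be sketched abstractly. A toy model (the "calls" here are just counters, no real graphics API involved):

```python
# Toy model of draw-call batching: naive rendering issues one call per
# object; batched rendering groups objects by mesh and issues one
# instanced call per unique mesh, which is where the 9-10x draw-call
# headroom of the newer APIs gets spent.

from collections import Counter

def naive_draw_calls(objects):
    return len(objects)  # one call per object

def batched_draw_calls(objects):
    return len(Counter(obj["mesh"] for obj in objects))  # one per mesh

# Hypothetical Star-Swarm-like scene: thousands of objects, few meshes.
scene = ([{"mesh": "ship"} for _ in range(9000)]
         + [{"mesh": "asteroid"} for _ in range(900)]
         + [{"mesh": "station"} for _ in range(100)])

print(naive_draw_calls(scene))    # 10000
print(batched_draw_calls(scene))  # 3
```

Whether the saved CPU time shows up as FPS depends on whether the game was draw-call-bound in the first place, which is the whole argument above.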
>>43057938 >Big companies don't ask for access to things from other companies without a good reason.
>Intel, for its part, confirmed that it has indeed asked AMD for access to the Mantle technology, for what it referred to as an "experiment". However, an Intel spokesman said that it remains committed to what it calls open standards like Microsoft's DirectX API.
>"At the time of the initial Mantle announcement, we were already investigating rendering overhead based on game developer feedback," an Intel spokesman said in an email. "Our hope was to build consensus on potential approaches to reduce overhead with additional data. We have publicly asked them to share the spec with us several times as part of examination of potential ways to improve APIs and increase efficiencies. At this point though we believe that DirectX 12 and ongoing work with other industry bodies and OS vendors will address the issues that game developers have noted."
Big companies want information about what the other companies are doing and how it works.
>>43058083 >You are going to see similar results dx nvidia vs mantle amd and a lot better results mantle amd vs dx amd. But that's wrong. See the benchmark posted here >>43057625 and you'll see that Mantle gives a SIGNIFICANTLY higher performance boost.
I know it might be hard for you but you're actually going to have to do math and look at the percentage performance boost, not just raw numbers. The only thing the raw numbers show is that the 750ti is better than the 260x.
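To make the "do the math" point concrete (the FPS numbers below are made up for illustration, not taken from any benchmark):

```python
# Relative gain matters, not raw FPS: a faster card can post higher
# absolute numbers while the API switch on the slower card yields the
# bigger percentage boost. All FPS values here are hypothetical.

def pct_boost(before, after):
    return (after - before) / before * 100.0

nvidia_dx11 = 40.0                   # hypothetical 750 Ti under DX11
amd_dx11, amd_mantle = 25.0, 41.0    # hypothetical 260X, DX11 vs Mantle

print(f"AMD Mantle boost: {pct_boost(amd_dx11, amd_mantle):.0f}%")  # 64%
print(nvidia_dx11 > amd_dx11)    # raw DX11 numbers favor the 750 Ti...
print(amd_mantle > nvidia_dx11)  # ...but the API switch flips it
```

The raw numbers only tell you which card is faster under one API; the percentage tells you what the API itself is worth.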
>>43058049 >Companies are constantly asking for access for something they MIGHT use in the future. I too believe they will support if, if mantle continues as it does. So in case they will use it, they already have access for it, so they can implement it faster. Except it's going open soon(tm). If they didn't want to implement it immediately they would have just waited.
>>43057914 >Mantle scared MS enough to create DX12, which rumors are suggesting is just a copy paste of Mantle with a few tweaks to get it running on everything else. Which is fun because it segments the market into either DX12 + W9 or Mantle + GCN on other versions of Windows (and probably Linux and OSX eventually as it's possible).
Actually, according to Huddy, MS told AMD (before Mantle was created) that if AMD could prove such a thing was workable, then MS would play along.
ITT: retards argue about Star Swarm results and don't even realize it's not a benchmark. Two runs on the same card with the same drivers can have a 50% FPS delta simply because the objects are generated at different times. Any site retarded enough to bench it doesn't know fucking shit.
All anyone has to do to "optimize" a driver for it is keep the ship count from climbing as high, so the FPS goes up.
>>43058360 You are not seeing the point. The point is that IF Mantle becomes real competition for DX, Intel will be able to put out support for Mantle much faster. It's like a "backup" plan: if Mantle doesn't become anything, they haven't wasted as many resources as they would have by supporting it early on.
>>43058406 What the fuck kind of logic is that? You're discounting an insane number of variables to declare Nvidia's implementation of DirectX the winner, basing your assumption purely on pricing in *your country*.
Do I have to explain why this comes across as quite possibly the stupidest, shilly-est post on /g/?
As a semi-enthusiast user who started out always using AMD/ATI CPUs and GPUs, I have to say that I'm officially finished with AMD at this point.
I gave up on AMD processors in the golden age of Intel overclocking (Bloomfield era) and won't be buying another AMD GPU. My recent tri-fire (3 x 290X) setup is pretty disappointing compared to what I could have had if I'd just gotten two 780 Tis for 2/3 the price. Or Titans.
Maybe this is how fanboys are made... the hard way. Sigh...
>>43056383 >Once IGP can handle 4K You'll be waiting at least 5 years, unless something dramatic happens with fab technology >GPU market will be dead Interesting but analytically flawed prediction. There will always be a core group of power users, semi- and professional users looking for the best crunch time, and gamers who want to push the boundaries of realism. Some unified super architecture which can handle everything is not going to come around for a long, long time.
>>43058588 You're comparing completely different architectures, one of which is significantly newer than the other. The 750ti is also quite amazing because of the amount of optimisation that has gone into the architecture, allowing it to do a lot more in a smaller envelope.
If you don't understand this, then you should be lurking in this thread, not posting in it.
>>43058930 In something legitimate, not just drawing a DE and playing movies. When we talk about comparing iGPU to dGPU the target points are always going to be about video games and content creation. Remove head from anus.
>>43058873 I know it will take time for IGPs to handle 4K.
Flawed how? I did say "the GPU market will be dead for anything else than GPGPU". Once the market for gaming dGPUs shrinks, neither AMD nor Nvidia will make a GPU series for gaming; the market will be too small to support it. And again, I know it will take time, but it is a dying market. Nvidia is desperately trying to enter the SoC market.
>>43058961 Not entirely. CISC instructions are still much more complex than RISC. But by the strict definition, RISC doesn't exist anymore (or at least not in anything big).
>>43059074 >Not entirely. CISC instructions are still much more complex than RISC. >But by true definition RISC doesn't exit anymore (or atleast not anything big). The decoder in current x86 CPUs that converts CISC instructions into RISC-like micro-ops is insignificant and very efficient. The difference doesn't show up in any IEEE study done in the last 10 years.
>>43059972 Stupid? Are you really that mad at somebody who decided, for good reason, to no longer use AMD products? I don't even know why I'm replying to a faygit. >price/performance Single GPU? Yes. Multi GPU? No, it sucks.
>>43060150 But higher DDR usually comes with higher clock speeds, albeit higher latencies. Seeing as APUs benefit greatly from clock speed and are barely affected by latencies, one might assume that DDR4 is going to be awesome.
>>43060008 AMD doesn't have a new cat core, they're dropping them completely. The "Puma+" core which is really just Jaguar is continuing into products through 2015. The Skybridge platform will use A57 cores or Puma+ cores. 28nm Steamroller is remarkably close to the Puma+ cores in die size, and power consumption, but is much more powerful over all. A module in Kaveri is smaller than the four core CU inside of Kabini/Beema.
>>43060031 Every major fab is more than ready for volume 20nm production; the issue is no one wants to pay for it, since the cost is high and the gains are minimal. It's only economical for small chips, which is what AMD is making on the process. They taped out a couple of 20nm parts last year, likely ARM chips.
>>43060112 The server variant of Carrizo is slated to use DDR4 next year; it's unlikely the desktop chip will, unless they have a whole new socket and package that's backwards compatible with FM2+ as well.
>>43060169 The benefit of DDR4 is having 1 DIMM per channel, and that's something that will be completely overshadowed just one year after DDR4 actually hits the consumer market.
>>43060095 Because the name means anything. Spreading misinformation? I made sure to say it was something I read. Did a quick Google search and ended up with this: http://www.phoronix.com/scan.php?page=article&item=amd_ddr3_2400mhz&num=2 (I want to clarify I have no idea if this is accurate, but it does support my statement (if it is, of course)). I never said they didn't scale above 2133MHz; I said they just didn't scale as well as they did from lower memory frequencies.
If that was the only point I was mistaken on (also the only point where I said it was something I had read; I read a very informative post about the Kaveri architecture, I can try to find it). Judging from their architecture, they do have a bad internal memory system.
>>43060367 Be more butthurt about absolute facts. Every single thing I posted there is factually sound. http://www.bit-tech.net/news/hardware/2014/05/06/amd-skybridge/1
>Announced at a press event late last night, the new roadmap is highlighted by Project SkyBridge. Due to launch next year, SkyBridge will offer a new family of 20nm accelerated processing unit (APU) and system-on-chip (SoC) processors which offer a choice of ARM Cortex-A57 or Puma+ processing cores
Socket FM2+ does not have the pins to handle DDR4. Only the server variant of Carrizo was ever rumored to have DDR4 support, because it will use a completely different package and socket. The only cat-core APU coming out next year is a die shrink of Beema, which uses the exact same Puma+ cores, which are the exact same Jaguar cores used in Kabini (launched way back in Q2 2013) and are themselves just a minor scaling-up of the Bobcat core, designed before 2010. All development of cat cores has completely stopped. Bobcat got a single upgrade, and the only things put in place after that were power-saving features that didn't affect the core arch itself.
>>43044253 When Intel artificially crippled AMD's sales, so they earned less money than they could have. But everything will be good in late 2015, as all the glorious people from the K7 and K8 times are back at AMD and the new architecture finally hits...
>>43061391 >>43061344 It's like there are no games that make the audience go "wow", even without that PhysX horseshit. Sometimes people seem to act like sheep on purpose; I don't get it.
>>43060490 >late last night who the fuck cares about when this shithead author read about it? >what are timezones many times i feel like these retards writing articles deserve to have their dicks slapped off
Come on, games are going to evolve: there's going to be fur, fluids, physics, even raytracing. You can make good games without it, but I want to see more realistic games, and all these technologies are a step forward.
For now there are no open-source alternatives for some of these things, but it's pretty cool seeing what an actual PC can achieve.