INTEL BANKRUPT & FINISHED. AMD ZEN WON, ZEN JUSTed INTEL
we talking high end east asian or caucasian models or absolute shit-tier african garbage?
> 1.1B cores do no good due to apparently broken horizontal scaling and horrible per-processor serial throughput
Try writing a parser for two-level grammars.
A human can learn them in a day, but there's no known algorithm for computers. Even brute forcing quickly runs into infinities of infinities.
>Intel® 3D Tri-Gate™ transistors will be able to provide higher clock speeds and power efficiency at the same time
Altera is jewish. So it doesn't surpirise me
cpu != dsp
right now Intel's biggest enemy isn't AMD, but ARM devices with their 10-15 hour battery life
Intel's entry-level Core i3 laptop CPUs beat AMD's high-end A10/A12 ones, so instead of focusing on giving more power to the i5/i7 series, they're making them more energy efficient so they can put them in smaller devices and get longer battery life
right now Intel has Atoms for tablets, which have OK performance but aren't powerful enough to run multiple desktop applications smoothly, and Core M, which is also battery efficient and doesn't require a fan (though passive cooling still means a relatively large 12"+ device)
furthermore they don't really need to focus on increasing per-core performance, since their Core M/i3/i5/i7 laptop parts only use 2 cores; eventually, when they get power consumption and heat down, they can make them quad-core, theoretically doubling the performance
because of ongoing disputes about the workings of certain parts of the cortex, you can argue that the signals are discrete.
at least that's a simplification that most neuroscientists seem to believe in.
axon impulses have discrete (i.e., only one) levels with quasi-fixed minimum intervals, but arguing that the processing itself is anything remotely discrete is almost plainly incorrect.
watch out, you'll summon the hyper-defensive semiconductor faggot who will insist that things are still great even if the industry is collectively lying about process node size names.
> arguing that the processing itself is anything remotely discrete is almost plainly incorrect.
look at what you just wrote; it doesn't make any sense from a practical standpoint.
you basically want me to believe in tulpas
>being a serious competitor
Choose 1 and only 1
AMD is totally irrelevant at this point to both Nvidia and Intel.
They both dominate their respective markets with over 80% shares.
AMD is barely surviving at all.
AMD fans are the biggest retards out there right now in the tech scenes.
discrete level impulses don't imply that reactions and overall processing are discrete.
even simple exponential decay thresholding, which already is one of the simplest response models, is completely analog.
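For the curious, here's a minimal Python sketch of what an "exponential decay thresholding" response model looks like (a leaky integrate-and-fire variant); all the constants (tau, weight, threshold) are made-up illustrative values, not anything from real neuroscience:

```python
import math

def simulate(spike_times, tau=20.0, weight=0.6, threshold=1.0):
    """Return output spike times (ms) for a stream of input spike times."""
    v, t_last, out = 0.0, 0.0, []
    for t in spike_times:
        v *= math.exp(-(t - t_last) / tau)  # continuous (analog) decay
        v += weight                          # discrete input event
        t_last = t
        if v >= threshold:                   # fire once threshold is crossed
            out.append(t)
            v = 0.0                          # reset after firing
    return out

# Two inputs 1ms apart push the potential over threshold; 100ms apart
# and it has decayed away, so nothing fires. The output spikes are
# discrete events, but whether you fire depends on continuous timing.
print(simulate([0.0, 1.0]))    # [1.0]
print(simulate([0.0, 100.0]))  # []
```

that's the point: discrete spikes in and out, fully analog dynamics in between.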
>don't imply that reactions and overall processing are discrete.
>one of the simplest response models, is completely analog
if you stretch the definition, even a square wave is fully "analog" as per its Fourier series
M8, the NEETs on neo-/g/ are only happy when they can see CPUs and GPUs getting conspicuously faster and cheaper.
Unless you can make that happen, you should probably just give up on arguing with them.
Failing that, you should attempt to sound less like an insecure ass, unless your intent in being here really is just to condescend to a bunch of /v/ refugees and IT undergrads.
>neo-/g/ are only happy when they can see CPUs and GPUs getting conspicuously faster and cheaper.
bullshit. they're only happy when the chips from THEIR favorite company are getting better.
this entire thread's premise is probably AMDfags or Nvidiashills or something laughing at Intel's expense.
I wouldn't even say cloud computing, it's mostly web services at this point.
Not every application has such effective horizontal scaling potential as well as processing/communications latency insensitivity.
Intel has never done anything "innovative" in their life as a company.
AMD has always been the company pushing the boundaries with x64, multi-core, etc.
Intel is just a company that's good at copying designs from AMD and from journal papers that actually show off innovation, and then improving within those boundaries.
I'm scared that Zen won't quite meet the hype and that Intel will continue to push out shit that nobody really wants.
> Kaby Lake is just SkyShit with added HEVC codec and ThunderJew 3.0
> Cannonlake comes out on New Year's Eve 2017, 10nm shrink still ends up being worthless
> Zen flops, AMD sold for pennies to the chinks or saudis
> Intel jacks up prices even more overnight just because they feel like it
x86_64 was hardly innovative. Look at MMX, Itanium, on-die memory controllers, 3D transistors, or even go back and look at the development of the 8086 CPU. AMD doesn't even have a fabrication facility anymore.
What the fuck?
Isn't Intel ALREADY throwing the most power-hungry parts of their CPUs onto the motherboard so their processors look like they don't use much energy?
And then they pull this shit...
No, because until we get order-of-magnitude increases in clock speed, that's irrelevant compared to architecture improvements.
Clock speed has largely been irrelevant for a long time. It's why a 1.6GHz U-series chip is faster than a 3GHz Core 2 Duo.
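That's just IPC-times-clock arithmetic. A rough sketch, where the IPC figures are assumed purely for illustration, not measured numbers for any real chip:

```python
# Back-of-envelope: perf ~ IPC x clock. The IPC values below are
# hypothetical illustrative figures.

def relative_perf(ipc, ghz):
    return ipc * ghz  # rough instructions retired per nanosecond

c2d = relative_perf(ipc=1.0, ghz=3.0)       # old Core 2 Duo at 3GHz
u_series = relative_perf(ipc=2.0, ghz=1.6)  # modern low-power chip at 1.6GHz
print(u_series > c2d)  # True: the slower-clocked chip still wins
```

double the work per cycle beats half again the cycles.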
That you think AMD has one up on Intel on anything. AMD still has to lie about the number of cores to even come close to matching Intel.
Also, there is a reason that companies like Supermicro and Dell have dropped AMD CPUs from most of their product lineups.
Zen will give a theoretical IPC increase on the same process node. ON THE SAME PROCESS NODE. /g/ forgets that Zen will also be a huge leap forward because of the die shrink (32nm to 14nm iirc) and this is in addition to the 40% IPC boost. Then you're forgetting that Zen will have actual hyperthreading this time, not the old BS from Piledriver. Then you're forgetting that in addition to all this awesomeness, Zen will likely have on-die graphics that will absolutely mop the floor with Intel. Then you're looking at probably 8 - 12 cores at an actual reasonable price.
Yes this is all theoretical, but I'm quite excited, and you should be too.
It is because the reality of the end of "Moore's Observation" is sinking in. It ended back in the mid-2000s.
The laws of physics and diminishing returns have crept up. It is no longer economically viable to keep shrinking ICs and transistors.
Intel and the few other surviving semiconductor companies are fighting a massive uphill battle to stay relevant.
>AMD is totally irrelevant at this point for either Nvidia
On the CPU-side, Zen will probably range somewhere from Sandy to Skymeme in general IPC.
On the iGPU, things are gonna look interesting with on-die HBM, but we'll see how this one works out.
Someone on reddit provided a fairly detailed essay showing that Zen would be at least as fast as Skylake, assuming core count is equal, and this was the most conservative guess. Zen is likely to have a higher IPC than Skylake, and will almost certainly have higher core count.
In larger-scale operations, like datacenters, operations per watt are much, much more valuable than speed.
You can always get more CPUs.
It's the cost in power that counts.
Intel is attacking the datacenters.
AMD finished and bankrupt.
I saw that post, but as ever /g/ forgets IPC isn't a static thing; the type of work being done is very important (see: why you have to be careful running IBT on Haswell (and newer, I guess) chips, as it's thermally destructive).
Feature size has literally nothing to do with IPC.
Summit Ridge is only an 8 core part.
Summit Ridge does not have an IGP
Only the Raven Ridge APUs have integrated graphics
Stop trying to spread hype when you have literally no idea what you're talking about in the slightest
Reddit is full of fucking retards, and you should feel ashamed of yourself for being this ignorant.
You cannot boil down a processor to any one number indicating performance. You have to look at every aspect of the processor and compare individual performance metrics.
In some areas Steamroller is only half as fast as Sandy Bridge, even with a slight clock speed advantage. Excavator is only a small uplift over Steamroller, so unless Zen radically improves on all aspects of the FPU as well, AMD isn't going to release any CPU capable of competing with Broadwell or Skylake per clock.
Just a few days ago Lisa Su commented on AMD's upcoming line of Opterons, and she stated that they would address 80% of the market. In pleb terms that means they'd be competitive in 80% of common server workloads, and they'd get their shit slapped in the other 20%. I guarantee you that they'll still be far behind in any heavy FPU ops.
Zen isn't going to be an out of the park home run. Little cretins on the internet who spread totally unfounded hype about upcoming AMD products do more harm to the company than their lackluster marketing ever could.
I find it hard to believe that effective IPC on x86 can get much better than what it currently is or that Zen can even quite match Skylake.
you can throw more parallel execution units at a pipeline, but you have to work much harder to keep them filled, and every branch mispredict just throws away that much more work.
The Harvard architecture's on its last legs, and you need to start doing crazy scheduling stuff like Itanium (Poulson has 12-wide dispatch IIRC) or simultaneous multithreading (which ends up thrashing caches anyway) if you want to get much more done per clock and not get killed by pipeline flushes.
Adorable. Having double the data paths for handling native 256bit and joined 512bit ops does not mean that it would double performance. That is not how throughput works.
The Bulldozer and Piledriver modules have a 4-wide FPU: 2 128-bit FMACs and 2 MMX units. This was revised down to one MMX unit in Steamroller. All we know of the Zen core's FPU is that it has 4 pipelines. That data alone is meaningless, and AMD themselves stating they'd still be behind in 20% of enterprise workloads is telling enough.
what kind of user workloads are even FPU heavy at this point and aren't offloadable to a GPU?
I'm not a shooper or whatever, but it's very hard for me to get excited about SSE/AVX/whatever shit nowadays.
>AMD themselves stating they'd still be behind in 20% of enterprise workloads is telling enough.
That tells us nothing about the desktop parts though - enterprise workloads are far more varied and some require very specialised hardware. Just because AMD isn't aiming to saturate the entire enterprise market does not inherently mean zen can't be a success.
>what kind of user workloads are even FPU heavy at this point and aren't offloadable to a GPU?
Tons. Most people don't understand that integer unit and floating point units are strictly literal. Your integer units can process vector floating points, and your floating point units can process certain integer ops. There are tons of extensions that make use of this, and they're common.
GPUs only handle massively parallel ops, they don't process larger more serial workloads. Though they are made up of floating point processors, they're nothing like the FPUs in your CPU cores. Sharing a namesake does not imply they are similar.
First of all, Zen doesn't use modules. Secondly, you've completely failed to grasp an incredibly simple concept. Congrats.
No longer having a FlexFPU would mean that each core individually has greater FPU throughput. That says absolutely nothing about the throughput of that FPU itself. Processing 256bit ops natively vs 128bit ops does not mean you've doubled performance. Doubling width does not double performance. Scheduling, execution, and instruction retire are not anywhere near that simple.
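The width point generalizes: widening the datapath only speeds up the fraction of work that actually saturates it, which is just Amdahl's law. A quick sketch, with hypothetical fractions:

```python
def speedup(vector_fraction, width_gain=2.0):
    """Amdahl's law: only the vector-bound fraction benefits from wider units."""
    return 1.0 / ((1.0 - vector_fraction) + vector_fraction / width_gain)

print(speedup(1.0))  # 2.0 -- code that is pure FP throughput does double
print(speedup(0.5))  # ~1.33 -- half vector-bound gets nowhere near 2x
```

scheduling, retire, and memory keep the rest of the pipeline fixed, so real gains land well short of the headline width.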
Protip: Enterprise chips and consumer chips use the exact same core architecture.
If they lag behind in a performance metric in enterprise, they'll lag behind there on the consumer market as well, and you'll see that on release day when everyone publishes their reviews.
I'm serious though.
Let's say that I don't do any sort of professional or even amateur graphics work (even transcoding or whatever).
what specifically is going to use even 10% of my x87/MMX/SSE/AVX capacity for more than like half a second?
I don't even feel like gaming even boils down to much numerical computation under the covers nowadays.
maybe something like XCOM or Civ, but I'd have to see a benchmark of other stuff being FPU bottlenecked before I believed it.
28nm -> 14nm scaling is only about a 50% reduction thanks to lying yellow jews at TSMC/Samsung.
a 230mm^2 GPU is most likely going to be the shrunk Hawaii/Grenada equivalent.
If you count out content creation, rendering, media encoding/transcoding, and any scientific workload then what exactly are you expecting to use a high performance CPU for? Web browsing? FPU performance can make a big difference in gaming and a few emulators, game engines aren't near sterile integer synthetics like apache bench.
It's a ~70% area scaling advantage.
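For reference, the ideal-shrink arithmetic: halving the node name should quarter the die area, and the gap between that and the real figure is exactly the naming complaint above. Sketch:

```python
def ideal_area_scale(old_nm, new_nm):
    """Area scales with the square of linear feature size, ideally."""
    return (new_nm / old_nm) ** 2

print(ideal_area_scale(28, 14))  # 0.25 -- ideal 28nm->14nm is a 75% area cut
# The claim in the thread is the real reduction is closer to 50%,
# because foundry node names no longer track actual feature pitches.
```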
>talking about MMX
>not talking about SSE
>There is only one instance of this ever happening.
>In a clickbait online mag.
Actually there were a few reports on OCN forums, but they probably went way out of spec with those CPUs. Apparently you can bend it back using a heat gun, too.
So you have to purposefully try to damage it? My understanding is that the material is thinner but still the same spec. Scythe had an outdated cooler design that didn't have enough support for the new spec. How is this Intel's fault? The whole thing reeks of a manufactured controversy to shit on Intel, just because.
> If you count out content creation, rendering, media encoding/transcoding, and any scientific workload then what exactly are you expecting to use a high performance CPU for?
Compilation, text compression, databases, content caching, software routing, shitposting, ...
What is this measuring exactly that a >3x improvement is claimed where AMD/Nvidia are only claiming 2x density bumps for Polaris/Pascal?
Does effective gate area really go up that much for more power efficient ones compared to smallest case leaky ones that presumably won't be used very frequently?
A FPU from an ARM core is the test silicon used for those figures.
GPUs are much more complex, and you wouldn't see the same efficiency gains from a larger IC. Different structures on die have different static draws, some are inherently much more dense, others use such little power that you could actually see a 10X improvement in perf/watt in them from a single node shrink. So you shouldn't ever try to extrapolate metrics from test silicon and apply them to a real world product. Thats basically showing you the best possible scenario for the given process, not representative of a complete chip.
wait, so are they literally admitting that 4.0GHz is pretty much the physical limit of computing?
every previous gen was just factory underclocked/undervolted, which resulted in less speed
but if multiprocessing was important, everyone would be using Xeons
Devil's Canyon was 4.0 stock
everything under it was underclocked and thus had a lower TDP
this was their first 14nm chipset and it fucking sucks ass on so many fronts, and now they're trying to gaslight us into no longer expecting faster speeds
inb4 everyone has 20-core Xeons at 1.8GHz and thinks it's great
The OP is pointless clickbait. Some guy with intel was speaking about tiny chips for embedded devices and "IOT" garbage. He said that they'd have to focus on transistor structures which favor lower power instead of performance.
He wasn't saying anything about desktop CPUs giving up clock speed or becoming slower. Media outlets are desperate for income since adblockers are (rightfully) robbing them blind. The clickbait is getting worse and worse as they scramble for ideas to generate web traffic.
That being said, 4GHz is no magical barrier. Neither is 5GHz for that matter. If a fab invested the money they could develop a process to deliver you a stock 5GHz chip with some OC headroom to spare. The caveat is that it would undoubtedly suck down a lot of voltage, and as such static leakage would be high, and the chip would run incredibly hot. Despite logic density shooting through the roof, we're still making solid gains in performance per watt every year, up to a point. Intel and the rest of the gang are focused primarily on lower power because that's where the market is headed. They want one process which can address mainstream computing, then scale down to mobile devices with as little variation as possible to reduce costs. Process nodes are an investment of several billion dollars each, so getting the most out of every dollar spent is crucial.
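The voltage/heat point is just the standard CMOS dynamic power relation, P ≈ C·V²·f: pushing frequency usually demands more voltage, and power grows with voltage squared. Illustrative numbers only:

```python
def dynamic_power(cap, volts, freq_ghz):
    """CMOS dynamic power ~ C * V^2 * f (units arbitrary here)."""
    return cap * volts ** 2 * freq_ghz

base = dynamic_power(1.0, 1.00, 4.0)  # hypothetical 4.0GHz baseline
hot = dynamic_power(1.0, 1.20, 5.0)   # +25% clock requiring +20% voltage
print(round(hot / base, 2))  # 1.8 -- ~80% more power for 25% more clock
```

that's why a stock-5GHz part runs hot even before you touch static leakage.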
Devil's Canyon is just a Haswell refresh. Its still 22nm Trigate.
Broadwell CoreM is intel's first line of 14nm parts.
Yeah, the irony with ARM is that more mobile devices mean that we need more servers. The margins on manufacturing ARM chips are very low meanwhile intel has massive margins on servers.
doesn't matter either way
they switched to 14nm and yet it still has a higher TDP than the Haswell refreshes and isn't even as fast in terms of clock speed
and on top of that they were freezing under load
and the actual raw stats on cpu-world show the non-overclocked 4790 is fractionally slower in single-threaded, and the K version is still the fastest, period
A CPU like that would go on the Opteron multi-socket line, not the consumer FX series, you retard.
In the consumer line consider yourself lucky if you get an 8-core chip, and very lucky if you get anything more than that.
>INTEL BANKRUPT & FINISHED. ARM WON
ARM has pretty much beaten Intel at its own game. During the 90s, Intel won the desktop/workstation war by producing cheap, good-enough CPUs compared to their higher-cost counterparts. Now that they're used to selling high-profit-margin processors, they don't want to go back to dumping R&D into making shitchips for shitprices.
AMD can go back to being screwed to hell and back by its shareholders.
This is about extrapolating perf/watt from process, totally independent from ISA.
TSMC, Samsung, and GloFo will all do the exact same thing. They'll develop ultra-low-voltage ET-SOI, TrenchFETs, III-Vs, or some other avenue for pursuing ultimate low power. GloFo's (IBM's) 22FDX can scale down to 0.4V operation leveraging body biasing; the primary customers for these chips are going to be companies designing IoT stuff, since not even mobile phones target such tiny power envelopes.
there's already ultimate low power, it's called the T-series chips
now if they're going to make low-power CPUs in the upper 3GHz range without OCing, then that's fine
but otherwise it sounds like they're just trying to get everyone used to using Xeons... and the pre-HT ones which just had 2 or 4 cores
Stop formatting your posts like a redditor.
T and S SKUs from Intel are not any different; they're just hand-picked bins. You're completely and willfully ignoring the subject at hand. Intel is talking about tiny, incredibly low-power chips for devices like Google Glass, embedded sensors, things of that nature. None of this pertains to desktop or enterprise chips in the least.
SOI stuff sounds great in theory, but does anybody honestly use it outside RF stuff?
It seems like the extra wafer processing steps still add more cost than what almost anybody is willing to pay for the benefits.
SOI has fewer mask layers than bulk, and they're significantly less complex than FinFET.
The increased wafer cost is negated.
Apparently GloFo has a few customers already for some small chips, but nothing noteworthy.
I've heard it claimed that the advantage of SOI is that it can get FinFET-equivalent leakage on 28nm without requiring the multiple patterning that 20nm and smaller do.
So decent power but nothing noteworthy on density.
But given that 20nm planar was, to all appearances, a platform nobody accepted due to its lackluster leakage, I'm curious why 20/28nm SOI didn't take off 2 or 3 years ago.