Why is CPU development seemingly so slow? I've had an i7-2600 in my machine for 5 years now, and every time I think about upgrading I'm told that there's still nothing out there that's worth swapping my motherboard and RAM for. Meanwhile, a 5 year difference in GPU releases presents a massive performance difference.
What's the deal? I do a lot of simulation, data grinding, and video rendering on my PC and I'm looking for a significant performance boost. Is my 2600 really not holding me back? Is there anything currently worth upgrading to / waiting for this year?
Intel has become complacent. AMD needs another Sledgehammer to push them forward.
We are at the tail end of x86's improvement curve because the laws of physics are catching up. They can't make transistors much smaller or faster because they're running out of ways to do it.
This guy knows what he's talking about
We need a drastically new architecture to get any real performance increase, because we've stretched x86 as far as it can go. Also, I could be wrong, but I've read that we can barely shrink transistors anymore because current leaks from channel to channel (quantum tunneling) when they're packed that close together.
You sound like a fucking marketing dev. Intel is sitting on oodles of patents and designs that will keep them afloat no matter what AMD throws at them.
Moore's Law is only "dead" if you read articles from moron editors. What you're seeing is resting on laurels, talent poaching, and sabotage on one side, and the other side failing to do the same quite as well.
That's literally the dumbest thing I have heard yet. Give me one good reason why they need to die. Besides, it wouldn't happen either way, because competition is always needed; otherwise Intel can just do jack shit and get away with it.
>Moore's Law no longer holds
>increases in processing speed cease
>programmers have to O P T I M I Z E again
>a new Golden Age
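To make the greentext concrete, here's a toy illustration (hypothetical example, not from any real codebase) of the kind of optimization that stops mattering when clocks keep doubling and starts mattering when they don't: the same computation done brute-force versus with a closed-form shortcut.

```python
# Sum of the first n squares, two ways.

def sum_squares_naive(n):
    # O(n): loop over every value -- fine when CPUs get faster for free
    total = 0
    for i in range(1, n + 1):
        total += i * i
    return total

def sum_squares_fast(n):
    # O(1): closed-form formula n(n+1)(2n+1)/6 -- actual optimization
    return n * (n + 1) * (2 * n + 1) // 6

# Both give the same answer; only one scales when clocks stop climbing.
assert sum_squares_naive(10_000) == sum_squares_fast(10_000)
```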
Reminder that shit like this can be done on an 8088 CGA
Yeah, because AMD's been slacking. If Zen doesn't at least put up a bit of a fight they're fucked, but with all the stuff I've been hearing it should be a good year for AMD. It's mostly rumors though, so take it with a grain of salt.
Look at this shit.
Almost half of the die is a useless shitty embedded GPU.
It's not even an option to buy the CPU without it.
It's basically like this:
AMD is pure and utter shit.
AMD is Intel's only competitor.
Intel utilizes the fact that AMD is pure and utter cow dung.
Intel slows down development and sells marginal performance increases for over 10 times their value, slowly.
Intel is helped by the fact that it faces no real pressure to do otherwise.
When people tell you to wait for something, dismiss all of it unless the rumor is related to graphene.
Graphene is the real CPU advancement we are all waiting for. Piling on more CPU cores is just beating around the bush.
>Intel slows down development and sells marginal performance increases for over 10 times their value, slowly.
Money is in mobile (i.e. laptops) and not really in desktops any more, so for the last 5 years Intel has been working on power efficiency.
The performance increases have definitely shrunk with each generation.
But they are significant enough to warrant an upgrade.
The thing that did change over the years is that more people became poor, so that would explain why others tell you not to upgrade.
But seriously though, the latest graphics cards want a recent CPU, because otherwise they get bottlenecked.
The difference between your CPU and the latest one would probably be many, many frames in vidya.
The device format is irrelevant to porting from silicon to graphene, anon. I know you want a jerkfest about device formats, but that other thread has pretty much crapped all over that argument.
I buy AMD so Intel doesn't get too complacent. I don't understand why people circlejerk so much over Intel vs. AMD. People need to be buying both (or more, if there were any other competitors) so they don't get lazy and/or start monopolizing prices...
I think you mean doubles.
Also, making the same thing smaller isn't really optimizing. Making the architecture more efficient so you don't need as many transistors to do the same thing is optimization.
D0 or C0? I got my hands on a D0 a while back and found a nice and easy 4.2 GHz stable (a 58% OC). At the time, people seemed to have trouble with their C0s below 4 GHz, but found greater success above. These days you'd be hard pressed to find an Ivy or Haswell chip that can reach a 50% OC safely, let alone within a reasonable temp range.
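Quick sanity check on that 58% figure, assuming the chip in question is a Nehalem i7-920 (the CPU with D0/C0 steppings) with its 2.66 GHz stock clock:

```python
# Overclock percentage = (overclocked / stock - 1) * 100
stock_ghz = 2.66  # assumed: i7-920 stock clock
oc_ghz = 4.2

oc_percent = (oc_ghz / stock_ghz - 1) * 100
print(f"{oc_percent:.0f}% OC")  # prints "58% OC"
```

So the 58% number checks out for an i7-920; a different Nehalem SKU with a higher stock clock would give a smaller percentage.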
Nehalem was truly a better time for CPUs.
Yes, that's what I meant, no idea where my brain was when I wrote that.
Depends on the type of optimization; finding a way to fit more things into a confined space can be considered optimization too.
because cpu manufacturers are lazy
top consumer CPU in 2010:
8 cores, 4.0 GHz
in 2016: 8 cores, 4.5 GHz
that means you got like maybe a 10% performance increase in the past 6 years of CPU development
Other than more power efficiency, newer instructions, more PCIe lanes, faster I/O, DDR4, fixed bugs and better NSA backdoors in the microcode, yeah, you have no reason to upgrade.
>more pci lanes
While the improvement is nice, the average user won't notice the difference between DDR4 and DDR2 at 800 MHz.