I didn't expect it to end so soon.
The "supercomputer in a smartphone" dream is over.
We hit the thermal limits of silicon. When scientists figure out how to mass produce graphene chips or some other miraculous material, the whole exponential curve of improvement will start up again.
At those scales you run into quantum shit that you can't work around. For instance, the uncertainty principle and quantum tunneling mean that some electrons will flow straight through that wall of atoms. Literally just appear on the other side. We're just about at the hard limit for how small it's physically possible to make an electric circuit. You might as well tell them to step it up and invent warp drive.
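For a feel of how brutally that scales, here's a back-of-the-envelope sketch using the textbook rectangular-barrier tunnelling model; the 1 eV barrier height and the widths are illustrative numbers I picked, not real process figures:

```python
import math

# Toy rectangular-barrier tunnelling estimate: T ~ exp(-2*kappa*d).
# All numbers below are illustrative, not real process parameters.
HBAR = 1.0546e-34   # reduced Planck constant, J*s
M_E = 9.109e-31     # electron mass, kg
EV = 1.602e-19      # one electronvolt in joules

def tunnel_probability(barrier_ev: float, width_nm: float) -> float:
    """Approximate transmission through a rectangular barrier."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2 * kappa * width_nm * 1e-9)

for d in (2.0, 1.0, 0.5):  # barrier widths in nm
    print(f"{d} nm barrier, 1 eV high: T ~ {tunnel_probability(1.0, d):.1e}")
# ~1e-9, ~4e-5, ~6e-3: halving the wall thickness costs orders of magnitude
```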
There's this thing called 'electrons have never given a flying fuck about you & will jump ship at the slightest provocation'
The smaller the pipe, the leakier. Eventually you're just etching in premade short circuits.
>Why not just make processors bigger?
Because electricity obeys the cosmic speed limit. Light travels just under one foot in a nanosecond, and electrical signals are no faster. So at four gigahertz, a signal can travel at most about three inches before the next processor cycle starts. You also need some slack time for the signals to arrive and for the chip to settle into a consistent state for that cycle. Make a chip bigger and you'll decrease the maximum clock, since you have to ensure that signals can travel from one part to the other in time (quick arithmetic in the sketch after this post). For present chip sizes, the absolute speed limit is about 8-10 GHz, and no crazy LN2 cooling or overvolting or anything will push you beyond it.
Also, remember Fermi? The GTX 480? With its 1.3% yields and 300-watt power dissipation? This is what happens when you make chips bigger and bigger. They already dissipate more heat per unit of area than the nozzle of a Saturn V rocket engine does. The greater leakage current at very small process sizes only makes this worse.
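The speed-of-light arithmetic is easy to check; a quick Python sketch, assuming the best case where signals move at the full speed of light (real on-chip wires are slower, so these are optimistic upper bounds):

```python
# How far a signal can travel in one clock period at light speed.
C = 299_792_458  # speed of light, m/s

def max_travel_mm(clock_hz: float) -> float:
    """Upper bound on distance (mm) a signal covers in one clock period."""
    return C / clock_hz * 1000  # metres -> millimetres

for ghz in (1, 4, 8, 10):
    print(f"{ghz} GHz: {max_travel_mm(ghz * 1e9):.0f} mm per cycle")
# 4 GHz -> ~75 mm (about 3 inches); 10 GHz -> ~30 mm, roughly one big die
```

And on the heat side, the chip half of that comparison works out roughly like this (my own ballpark: the GF100 die is about 529 mm², using the 300-watt figure above; the Saturn V comparison itself gets disputed further down the thread):

```python
# Rough power density of a GTX 480-class die. Numbers are illustrative.
die_area_mm2 = 529.0   # approximate GF100 die size
power_w = 300.0        # the power figure quoted in the post

print(f"{power_w / (die_area_mm2 / 100.0):.0f} W/cm^2")  # ~57 W/cm^2
```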
>Supercomputer in a smartphone
Compared to supercomputers from 10 years ago, we've already reached that. But what makes a supercomputer a supercomputer is how it performs next to a regular consumer computer of the same era. So no, ten years from now we won't have phones that are tenfold faster than the computers of ten years from now.
What if we just make ourselves bigger? If we increase our own size with bioengineering, then Moore's Law continues in the other exponential direction even if the tech can't get any smaller.
So why not make them more efficient at what they do instead of giving them bigger muscles? Like a work smarter, not harder kind of thing? Does that even make sense in this context?
Doesn't that mean we could double the size of the chip and still have a maximum clock of 4-5 GHz? Sure, that wouldn't be an achievable speed, but 3-3.6 GHz could be?
As for cooling and power use, sure, it's not practical, but it's possible.
That's what you've been getting for the past five or so years. Ever since Sandy Bridge, Intel hasn't been able to push performance up by more than 2-3% per generation, clock for clock, and that clock couldn't be pushed higher than it already was. This was despite moving to smaller processes more than once, and refining them. So yes, they most certainly are improving efficiency. But it isn't going to improve much faster than that anymore, and it's certainly not going to double performance every 1-2 years the way Moore's Law used to.
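To see how anaemic that is next to the old curve, a quick compounding check (my own arithmetic; 3% per generation is just the ballpark from above):

```python
# Compare a few generations of ~3% gains against doubling every two years.
def compound(gain_per_gen: float, generations: int) -> float:
    return (1.0 + gain_per_gen) ** generations

years = 5  # roughly one generation per year, Sandy Bridge onwards
print(f"{years} gens of 3%: {compound(0.03, years):.2f}x")                   # ~1.16x
print(f"Moore's-Law doubling over {years} years: {2 ** (years / 2):.1f}x")   # ~5.7x
```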
>less transistors per dollar in 2015
>we have literally gone backward
>that feel when you have just witnessed the peak of human technological advancement
>we are now heading downhill
I never thought the consumerization of technology could get this bad...
They already make chips of that size; every Titan has one in it. Yields still aren't good, though. The reason GeForce x70-class cards are "like the top model, but with a few things disabled" is that the majority of dies come out defective, so they design them to be able to laser off the non-working parts (rough yield arithmetic in the sketch below).
And they're thermally limited at clock speeds far lower than that. You could probably run one at 3-point-something GHz, but you'd have to find a way to keep its temps under control while it dissipates at least 500 watts.
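On the yield point, a toy Poisson defect model (standard textbook approximation; the defect density and die areas are numbers I picked for illustration):

```python
import math

# Chance a die comes out with zero killer defects under a Poisson model.
def poisson_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
    return math.exp(-defects_per_cm2 * die_area_cm2)

d0 = 0.5  # assumed defect density, defects per cm^2
for area in (1.0, 3.0, 6.0):  # small die, mid-size die, Titan-class die
    print(f"{area:.0f} cm^2 die: ~{poisson_yield(d0, area):.0%} fully working")
# ~61%, ~22%, ~5%: big dies almost always need something fused off
```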
Because parallelism is very hard and there are lots of real-world scenarios where it's impossible: where calculation Y depends on the results of calculation X and can't start until X is completed, no matter how many cores you have sitting idle (rough Amdahl's-law numbers in the sketch after this post). GPUs are massively parallel because the particular thing they do happens to be a field that is "embarrassingly parallel". You can also do it for servers, since you can split, for instance, the SSL load of many web connections over many cores fairly easily.
But for the stuff most people do on personal computers, definitely including the CPU side of gaming, there just isn't a good way to extract parallelism. This is why no gamers put 20-core Xeons in their rigs.
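To put numbers on the serial-dependency problem, a small Amdahl's-law sketch (the 50% serial fraction is just an assumption for illustration):

```python
# Theoretical speedup when part of the work can't be parallelised.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

for cores in (2, 4, 8, 20):
    print(f"{cores:2d} cores, 50% serial: {amdahl_speedup(0.5, cores):.2f}x")
# 1.33x, 1.60x, 1.78x, 1.90x: a 20-core Xeon barely doubles performance
# on code that is half serial, which is why nobody games on one.
```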
No, you cuck, the price of higher-performing CPUs got higher than that of previous high-performing CPUs. Notice how AMD hasn't done anything for a few years; that's definitely a big factor.
CPUs have been fast enough for everything normies do with them since the Core 2 Duo came out, way back on 65nm in 2006. Everything since then has been us riding the gravy train as it coasts to a halt.
No, the price per transistor got HIGHER. Meaning, yes, the fastest CPU now is faster, but for the price of the latest Intel Extreme you could have bought a two-CPU machine last year that beats it.
For every dollar you spend today, you could have gotten faster parts a year ago.
Fully depleted and insulated channels lock electrons in place. A FinFET with a fully depleted channel is a quantum well FET, and aside from leakage due to manufacturing variability there is no issue with electron tunneling.
>They already dissipate more heat per unit of area than the nozzle of a Saturn V rocket engine does.
You're taking this from a very old and hilariously inaccurate Intel slide. It isn't true. Chips today are dealing with less leakage current than ever; that is the whole point of moving away from planar gates. FinFET, double gate, TriGate, GAA, III-Vs, trench FETs, etc. all exist so that we can reduce leakage as gate length and feature area keep shrinking.
You should retract that gratitude.
The end of Moore's law might be here or it might not. It doesn't matter. If progress on transistors is over, then we'll increase computational power per dollar through another paradigm.
Also, we don't even need more computation. The current rank-1 supercomputer has nine tenths of the estimated computational capacity of the human brain. An AGI is the last invention we need to make. The hardware is here; it's up to the software now.
This faggot needs to work on his rhetoric. His speech pattern is annoying as fuck.
>The current rank-1 supercomputer has nine tenths of the estimated computational capacity of the human brain
Yes, because a fucking 24-megawatt computer is comparable to the brain. The hardware is totally there.
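Spelling out the sarcasm (the ~20 W for the brain is the commonly cited estimate; the 24 MW is the figure from the post):

```python
# Power cost of "roughly brain-scale" compute on current hardware.
brain_w = 20.0           # commonly cited estimate for the human brain
supercomputer_w = 24e6   # the 24 megawatts mentioned above

print(f"~{supercomputer_w / brain_w:,.0f}x more power")  # ~1,200,000x
```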
It's really crazy just thinking about it: knowing that at this point there's just no way for processors to get any better or faster, except for a few modifications or optimizations maybe. Unless there's a major breakthrough such as quantum computers, we'll be stuck with what we have today. And I can't stress this enough: WE ARE STUCK AND WON'T GET BETTER PROCESSORS ANYTIME SOON (and GPUs? I assume they're built on the same principles).
They'll keep making further improvements to SSDs, but I'd guess that'll come to a halt real soon too.