
Why do all the machine learning algorithms plateau out instead of improving indefinitely?


Thread replies: 20
Thread images: 2

File: superintelligence.jpg (111KB, 742x576px)
Why do all the machine learning algorithms plateau out instead of the program indefinitely continuing to improve? All the singularity / superintelligent AI scenarios seem predicated on the idea of programs programming themselves to be increasingly better at everything until they turn into magic god entities or whatever, but every single optimization strategy I'm aware of just hits a ceiling pretty quickly.
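
Not from the thread, just a minimal Python sketch of the plateau OP is describing, assuming nothing beyond numpy (the data and numbers are made up for illustration): gradient descent on a toy least-squares problem improves quickly at first, then the loss flattens near the noise floor.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.5 * rng.normal(size=200)           # noisy targets

w = np.zeros(5)                                       # model weights, start at zero
lr = 0.01                                             # learning rate
for step in range(2001):
    grad = 2 * X.T @ (X @ w - y) / len(y)             # gradient of mean squared error
    w -= lr * grad
    if step % 400 == 0:
        print(step, round(np.mean((X @ w - y) ** 2), 4))   # drops fast, then flattens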
>>
>>9168789

Aren't singularities and superintelligences going to be artificial general intelligences?

As far as I know, machine learning is artificial narrow intelligence.
>>
>>9168789
because they dont have enough power to alter themselves, I guess?
>>
>>9168789
Congrats, you've realised that singularity memes are just sci-fi wank.
>>
>>9168794
I personally don't think it even makes sense to call something "general intelligence." That sounds like a cop-out where it's still a bunch of specific things going on but you've decided to stop thinking about what those things are and just lump them all together.
>>
>>9168798
It's not computing power that makes my programs stop improving, though; they'd hit the same plateau whether they were running on a laptop or a supercomputer. You can increase the number of nodes / connections in your network, but that doesn't give you gains indefinitely. There's an optimal number beyond which performance actually gets worse because of over-training (overfitting), where the network basically becomes autistic and learns the specific training data so well that its answers are garbage for anything other than that specific training data.
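
For what it's worth, here's a rough sketch of that exact effect (my own toy example, assuming scikit-learn is available; the dataset and polynomial degrees are arbitrary): training error keeps falling as model capacity grows, but held-out error bottoms out and then climbs.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(x).ravel() + 0.3 * rng.normal(size=120)           # noisy sine wave
x_tr, x_te, y_tr, y_te = train_test_split(x, y, random_state=0)

for degree in (1, 3, 5, 10, 20):                              # growing "capacity"
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_tr, y_tr)
    train_err = mean_squared_error(y_tr, model.predict(x_tr))
    test_err = mean_squared_error(y_te, model.predict(x_te))
    print(degree, round(train_err, 3), round(test_err, 3))    # test error stops improving, then worsens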
>>
>>9168811
obviously i did not mean computing power you dumbass
>>
>>9168817
What other kind of power is there in this context?
>>
>>9168820
the ability or capacity to do something or act in a particular way
>>
>>9168831
Is that quantifiable? If it is, it sounds like what you'd be measuring is just how good it is at solving the class of problems you're trying to optimize it for, in which case the claim is a tautology (it can't make indefinite gains because it can't make indefinite gains).
>>
>>9168844
i feel like this is a case of "ask a stupid question..."
>>
>>9168789
If they knew how to reach the singularity, they would have done it by now.
All ML is problem specific at this moment. More general solutions might yield more general algorithms.
>>
>>9168811
Could you make an automatically recursive unit and train it on a database mapping global software project forks, so you end up with an AI capable of program synthesis?
>>
>>9168874
he didn't ask a stupid question lmao, you just gave a stupid answer
>>
>>9168789
To add further to this: even if we could create something which is intelligent on our level, why do we assume that it will know how to self-improve any better than we do? What if we find out that intelligences don't actually like self improvement above a certain threshold?
>>
>>9168801
It won't be a singularity, since the growth isn't exponential, but at some point we are going to multiply the rate of discovery by 22 million when we upgrade to better brains.
>>
>>9168789
Suppose that you had an algorithm that was maximally optimal, with known optimal hyperparameters and infinite data. There is no guarantee that the joint information shared between the predictor variables and the target variables is 100%. In practice, we can never truly know this value, because prediction error is wrapped up in methodological limitations surrounding the data used, its amount, the choice of algorithm, the algorithm's hyperparameters, etc.

That said, if an oracle gave us globally optimal values for all these things, there is no guarantee performance would become errorless. This is dependent on the joint information shared between the predictors and the target variable. It may simply be the case that knowing a person's shoe size, their mother's date of birth, and their eye color can't predict math ability. These variables share less than complete joint information.
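
A rough numerical sketch of that point (my own toy example, assuming scikit-learn; the coefficients are arbitrary): if the target also depends on a variable the model never sees, the residual error has a hard floor that no amount of data or tuning can remove.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 100_000
seen = rng.normal(size=(n, 3))                      # predictors the model is given
hidden = rng.normal(size=n)                         # information it never sees
y = seen @ np.array([1.0, -2.0, 0.5]) + 3.0 * hidden

model = LinearRegression().fit(seen, y)
residual_var = np.var(y - model.predict(seen))
print(residual_var)                                 # ~9.0, the variance of the hidden term: the irreducible floor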
>>
I work around the Silicon Valley area, in an academic position. Narrow AI and machine learning are the hippest fucking things you can be working on right now (the current buzzword is "convolutional neural network"). Literally everyone has been using neural nets in research for years, and it's recently gotten a lot better, partly thanks to access to improved computational power and large data sets to train on.

Fucking NOBODY is working on a general AI. It's not useful, we don't know what "general intelligence" even means, and nobody will get paid to do it.

The kind of AI that will both make you money and be effective in the short term is selective or narrow AI. It's good at doing a single task (maybe image recognition) and it can be easily integrated. There is no strong AI and basically nobody knows how you would create one. The billionaires that talk about these things aren't thinking realistically; they're basically fear-mongering about a non-existent problem.
>>
>>9168789
There are theoretical models of the kind you speak of. The problem is right now they require intractable computations, or oracles.
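
Presumably that means Solomonoff/AIXI-style idealizations (my reading, not stated in the post). As a hedged back-of-the-envelope illustration of the intractability, even just counting the candidate boolean functions of n inputs that an exhaustive search would have to consider blows up doubly exponentially:

for n in range(1, 6):
    print(n, 2 ** (2 ** n))   # 4, 16, 256, 65536, 4294967296 candidate functions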
>>
File: phenotype.jpg (5KB, 230x219px)
>>9169181
oh my sweet summer child....

