When will this AI meme die, for fuck's sake, /sci/? It really triggers me every time I see this nonsense, especially if it's by a reputable website.
AI is a reality. Even the software that checks your captcha is AI. Will it kill you? Maybe, maybe not. But the possibility of it killing you gets bigger if you put this software in a drone copter with a gun and tell it to discern you from a road sign.
AI is shit m8. You only wish it were as capable and smart as people are trying to make it out to be. Even the smartest AI doesn't operate without human assistance. They follow basic linear behaviour trees, which are completely predictable.
Even if AI turns on humans, so what? You just pull its plug. They can't control guns or weapons or planes or tanks. It might try hacking some government database, which you can avoid by pressing its power button.
tl;dr: AI is only a 'thing' in sci-fi. In reality, AI will never be half as smart as the dumbest human alive.
It will never die because computers and software are something that a lot of people choose to stay ignorant about.
Also, the reason a lot of CS people fall for the AI meme is that most CS majors and graduates are the MOST ignorant when it comes to software.
They see the surface and from that they assume the rest. 'lol, I bet the shitty chess AI I did for my homework will directly lead to general AI'.
If people actually got into the math of this shit (like CS PhDs and mathematicians in general do) they would understand why it is such a far-fetched dream; so much so that we now literally lump computers together to brute-force something that would resemble it.
I'm not even kidding, there is a British research project (I'm almost sure it is British) where they literally just strung together 4000 computers and went from there. Of course, they still haven't reached anything significant. Maybe 5000 computers will do, right?
Are you serious? You are using AI in the correct context, while these magazines use AI in the context of le super-intelligent self-aware machine that will take over mankind.
>show me a decent plane that can fly
>show me a decent machine that can print
>show me a decent pen that can write
>show me a decent machine that can send text across the globe
>show me a decent weapon that can shoot more than 20ft accurately
Stop being so obtuse, I was implying that technology will get a lot better in the future; you are the same as the people before the plane who said man would never fly.
That's the hallmark of an armchair pop-AI scientist. Anyone who says something like "strong AI must write its own code" should be instantly disregarded as not knowing a fucking thing about AI.
>computer power doubles every 18 months
But it's not likely to continue to do so. Moore's law is hitting walls, a plateau is on the horizon. FinFET might get us a little further but not much.
>Computer hardware is already below the predicted growth (Moore's law and whatnot)
>Our algorithms do not decrease in growth
>Our pursuit of faster alternative algorithms always leads us to sacrifice something in exchange for the lower growth
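That speed-for-something trade-off is easy to sketch in a few lines (a toy illustration, not tied to any specific algorithm mentioned here): memoized Fibonacci buys its speed by spending memory on every intermediate result.

```python
import functools

# Plain recursion: exponential time, constant extra memory.
def fib_slow(n):
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

# Cached recursion: linear time, but stores every intermediate result.
@functools.lru_cache(maxsize=None)
def fib_fast(n):
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

assert fib_slow(20) == fib_fast(20) == 6765
```

Same answer either way; the "faster" version just pays in a different currency.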
Every action requires the correct solving of an image captcha. Need to eat? Solve a captcha. Didn't do it correctly? No food for you. Need to go to the toilet? Solve an image captcha. Didn't do it correctly? No poopy for you. Need to fap? Etc.
>An AI is something which is
$ gcc --version
gcc (GCC) 5.3.0
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
>can make its own decisions
$ gcc prog.c -x none
>write its own code.
gcc is self-hosted
Am I doing it right?
Anyone else scared of a future where your brain can be scanned (perhaps without your permission or knowledge) and run in an emulator as a computer AI? Because it's a computer program, it doesn't have any basic human rights and can be mentally abused or tortured. Or what if we create a strong AI from scratch and it gets mentally abused?
Even if we establish a set of guidelines for dealing with AI, or even give them human rights, there's a point where an AI is too dumb to be considered worth giving any rights. Anyone could copy/scan your brain, upload it into a computer, and virtually lobotomize it till it's dumb enough to legally not have any rights or protection.
tl;dr Legal and human rights ramifications of AI scare me the most.
So much this. I've been meaning to get into the ethics of AI for some time now. Not sure if we can do it, well not yet anyway, but what if we do create a self-aware AI? Everyone keeps on about how we should shackle it like a dog so it does our every bidding and doesn't step out of line. But would it be ethical to do so?
Do you guys realize that you are pop science as fuck? It's really hypocritical that you guys always say that you hate pop science.
My ML threads always died with no replies but this shit is always bumping to no end and I'm mad. Let me spoonfeed you the reasons why these articles are dogshit:
1. The best AI right now is matrix multiplication. YES, MATRIX FUCKING MULTIPLICATION. Do you think that's something amazing and breathtaking? Nope.
2. Every AI is the same as your statistical model. They perform what they are programmed to do with a certain precision/belief. That's it. It's nothing more than that, it does not create what it does not know. It does not detect what it was not trained for.
tl;dr, machine learning models are as dumb as your stat models.
3. You think those Google captcha bots are strong and scary? You are stupid. Go read their paper on how it works. They literally just stacked 22 matrix layers for that shit. Period. And do you think matrices can kill you?
DO YOU THINK A BUNCH OF MATRICES WILL POSSESS A VILE WILL AGAINST HUMANS? DO YOU THINK MATRICES HAVE SELF-AWARENESS?
Answer that shit to me with a straight face and realize how stupid that is.
Don't think about what we are not capable of. You are overestimating the field by far.
All in all, what I'm trying to say is that AI is still really dumb. Go read some papers. Don't read pop science crap and get scared like an /x/ person.
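For what it's worth, the "it's just matrix multiplication" point above is easy to show concretely. A minimal sketch with invented weights: a trained feed-forward net's entire "intelligence" at inference time is a couple of matrix products and a fixed squashing function.

```python
import numpy as np

# Hypothetical toy model: all weights here are random stand-ins for
# whatever training would have produced. The point is the SHAPE of the
# computation, not the numbers.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # layer 1 weights (frozen after training)
W2 = rng.standard_normal((2, 4))   # layer 2 weights (frozen after training)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    h = sigmoid(W1 @ x)            # hidden layer: matrix multiply + squash
    return sigmoid(W2 @ h)         # output layer: matrix multiply + squash

x = np.array([0.5, -1.0, 2.0])     # some input features
scores = predict(x)                # two numbers in (0, 1); that's all it emits
```

No will, no awareness, no self-modification; just `@` applied a few times.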
Well I did say that I think it unlikely and all. It's just that these threads about AI keep popping up and they got me thinking about the ethics of it. The subject is ripe for a nice philosophical discussion.
On a side note, the matrices they trained for tasks such as finding images for captchas, translating languages, etc. are fixed throughout the application phase. That is, after training, the matrices or whatever models do not change anymore, because there is no way for a model to self-develop in the right way yet. Since they are fixed after training, there is nothing scary. You can imagine that the AIs right now are the result of a bunch of simple functions found during the training step.
When we have better meta systems to adjust unsupervised learning in the online phase so that they are as good as supervised learning, then that's when it's actually scary.
We are really really far from that.
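A sketch of the "fixed after training" point above, with made-up weights: once deployed, the model is just constant numbers, and serving it never changes them.

```python
import numpy as np

# Pretend these are the weights training produced (values invented).
W = np.array([[0.2, -0.5],
              [0.8,  0.1]])
W.setflags(write=False)        # deployment: nothing is allowed to update them

def serve(x):
    return np.tanh(W @ x)      # inference is a plain, deterministic function call

x = np.array([1.0, 2.0])
a = serve(x)
b = serve(x)
assert np.array_equal(a, b)    # same input, same answer, every time
```

The model cannot "decide" to answer differently tomorrow; only retraining, done by humans, produces new constants.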
Yeah well, when we reach the stage where models can actually learn without human control, it will be very scary.
I don't think we can even know the limits of what it can learn. We would be left behind for sure because machines have already surpassed people for heavy-calculation tasks.
It's actually bumping against the wall right now. Intel just pushed back its 10nm processor line another year, the first time they've broken the tick-tock cycle in decades. It's questionable whether we'll have 7nm processors by 2025 if this keeps happening.
Also, the economic costs are starting to outpace the improvements. Every new cycle requires a fab that costs billions of dollars more than the generation before it.
For real though.
I'm studying physics. The first time I heard there is only one "big" analytical solution for the Schrödinger equation: the hydrogen atom. Everything beyond that is numerical.
I fucking couldn't believe it.
But that's my point: it's a P/NP thing. Equations get so freaking complicated the bigger the problem gets that it seems unreal to even consider calculating them.
Now, maybe this shit is not solvable with "usual equations". Is there a concept beyond "le brute-force this shit with numerical principles"? Is there a way to do stuff without equations? I think this would make things less "complicated" and some stuff even computable.
Or, to get "le strong self-aware super AI", we have to study the brain first. Like, we need to know how thoughts are processed. Then we could use principles of bionics to get le AI.
That is the only way it could work.
We live in a world with physical laws. This shit is physical too.
We just don't know how this shit is processed, which seems reasonable considering that it's fucking efficient as shit. You need close to 0 energy to keep the brain running. That's evolution.
>b - b - but muh CS
CS is way too basic in comparison. The brain has a "plastic" structure, computers don't.
We have no close idea how this shit works
>You need close to 0 energy to keep the brain running.
That's not very true. It's one of the organs in our body that consumes the most energy.
>The brain has a "plastic" structure, computers don't.
That's where software comes into play.
>We have no close idea how this shit works
That is true.
I get your point: BUT WHAT IS THE DIFFERENCE?
We are machines too.
Who said we do not think in "programmed structures"?
DNA is a code, so why shouldn't our thought process be a code?
This implies that strong AI is possible once we have the resources and the knowledge of our brain. That's not pseudoscience.
>le brain magic
Where did I ever say that it was impossible for strong AI to exist? I was merely pointing out that Moore's Trend is on its last legs.
My apologies if I offended your religious sensibilities.
It consumes the most energy, that's true, but compare that to your computer. It's still less.
Also, our software is mostly shit. Way too many layers to get to the hardware, just so that humans can understand it. If the body has a code, for example, it's probably nothing more than some basic stuff directly driving the hardware.
We always have to think in terms of evolution. There are I-don't-know-how-many fucking years of trial and error behind getting this thing running. And it hasn't stopped. It gets even more efficient.
Everybody knows the study where they showed that our brains have been getting smaller for a while now. They think it's because the neural bridges get even closer, meaning it would be more efficient and would work even better.
This shit brings tears to my eyes
>It consumes the most energy, that's true, but compare that to your computer. It's still less
I don't deny that.
>Also, our software is mostly shit.
Agreed. Improving it will be a major goal in the 21st century.
I hope Strong AI isn't possible for only one reason: To see the Kurzweilites cry. Those annoying motherfuckers will never shut up about how Moore's Law is apparently woven into the universe and MUST continue.
A neural network (which is not the only kind of AI system that uses genetic algorithms, but is a good example) usually has its basic architecture decided ahead of time by designers. Pictured is a typical feed-forward NN architecture.
So, a human being decides how these nodes are arranged, but an algorithm must decide how they influence each other. We might want a signal on input #1 to cause hidden layer #1 to trigger, but suppress HL #2 and have no effect on HL #3. In this example we might plausibly set this information manually, but in the real world a NN will have thousands of nodes. It's just too much work to do this, but we have many algorithms to do that work automatically.
Among them are genetic algorithms, in which a solution that partially works can be combined with another that partially works in a different way. These partial solutions can be put through many generations (again, automatically), and all of this work can be done without further human input as long as we have training data that can tell the system whether or not it's coming up with an acceptable answer.
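The selection/crossover/mutation loop described above can be sketched roughly like this: a toy genetic algorithm evolving the weights of a hypothetical 2-2-1 network on XOR training data. All sizes, rates, and the seed are arbitrary choices for illustration, not anyone's actual system.

```python
import numpy as np

rng = np.random.default_rng(42)

# Training data that tells the system whether an answer is acceptable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])            # XOR targets

def forward(w, x):
    # Unpack a flat 9-gene genome: 2x2 hidden weights + 2 biases,
    # then 1x2 output weights + 1 bias.
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    return np.tanh(W2 @ np.tanh(W1 @ x + b1) + b2)

def error(w):
    return np.mean([(forward(w, x) - t) ** 2 for x, t in zip(X, y)])

pop = rng.standard_normal((40, 9))            # 40 random genomes
initial_best = min(error(w) for w in pop)

for gen in range(150):
    scores = np.array([error(w) for w in pop])
    elite = pop[np.argsort(scores)[:10]]      # keep the 10 best unchanged
    children = []
    for _ in range(30):
        a, b = elite[rng.integers(10)], elite[rng.integers(10)]
        mask = rng.random(9) < 0.5            # uniform crossover of two parents
        child = np.where(mask, a, b) + 0.1 * rng.standard_normal(9)  # mutation
        children.append(child)
    pop = np.vstack([elite, np.array(children)])

final_best = min(error(w) for w in pop)
assert final_best <= initial_best             # elitism: the best never regresses
```

No human touches the weights after setup; the training data alone steers the search, which is exactly the point made above.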
Machines already took a bunch of jobs centuries ago
>something that was thought impossible became possible
>therefore whatever I want that is claimed to be impossible is surely possible
I'm sure there's a name for that logical fallacy
It's plateauing (maybe we should stop fitting unlimited exponential growths onto everything) and it's not very relevant, having quadrillions of petaflops won't make your hellworld.c any smarter.
People like you are the same assholes who read a story like "we found amino acids somewhere in the galaxy" and end up spouting nonsense like "guys, what if these ayyliens exterminate us because we're so inferior?". So basically congratulations, you're about on the same level as Stephen Hawking the meme scientist.
Neural networks have proven themselves very valuable for identifying sounds and images.
But they require too many resources, in my opinion. For instance, we need to feed those neural nets thousands of images of cars before they can identify a car in an image.
This is clearly not what happens in the human brain. Our brain gathers small details from the car (its form, its color, its wheels...) and the environment around it (situated in a parking lot) before connecting those to the concept of a car.
We're not going to be able to make a strong AI out of neural networks alone. We have to be more subtle than just relying on sheer computational power, otherwise we'll reach the limits of what hardware allows us to do very quickly, and strong AI will never be a thing.
It's nice that you have an opinion, but keep in mind that running a neural net is quite efficient compared to training it, and that no AI researcher claims that neural nets as they are used in AI are equivalent to what goes on in the human brain.
>but keep in mind that running a neural net is quite efficient compared to training it
This efficiency is necessarily bounded. Neural networks aren't reflexive for instance.
We'll need more elaborate data structures if we want to do more than just identify things.
I'm studying physics but this shit is one of the most interesting things on the planet for me.
What could be the reason that the sample size humans need is so freaking low compared to computers?
Why are humans so good at identifying patterns and joining the dots?
There has to be something going on we aren't aware of at this time
>Why are you arguing against neural nets?
The main reason I don't really like neural networks is that many pop-sci illiterates around me bring them up as the only way to achieve a strong AI.
What amazes me is that all the information telling the brain how to develop in order to be so clever can be contained inside a single DNA molecule.
Actually, knowing exactly what DNA does could provide us with the skeleton of a primitive brain. The only thing that would be left to do then would be training it, ie, feeding it with data much like we do with children.
>we need to feed those neural nets thousands of images of cars before they can identify a car in an image.
You know unsupervised learning algorithms for neural networks exist, right? Like the thing that Google made a while back that learned to identify faces/bodies/cats just from watching YouTube videos?
Because it's horribly inefficient and there are much better ways of doing it. Please read a book before you start spewing whatever kind of nonsense you think "sounds logical" and passing it off as fact.
>the model has 1 billion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet
>10 million 200x200 pixel images
I rest my case.
>We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores)
This makes me wanna puke. They should really focus on conceiving new algorithms instead of beating that dead horse that is neural networks.
Yeah, I thought that too.
That's some sci fi shit. Plugging brains into computers and feeding them information.
You could do anything: you could build scientists, soldiers, endless possibilities.
But the question then is: what's the difference from us? There is none, I think.
I'm just saying the dream or end goal of every field of science is a beautiful one that benefits all mankind. Strong AI seems like it would just be a nightmare IF it ever became real. Kinda makes you wonder why we do it at all.
My claims aren't ridiculous.
In fact, you're the one who stopped arguing with the "armchair" insult.
>Real science isn't about
No real science ever came from /sci/.
And on a subject such as artificial intelligence, I think all we can do is merely expose our views and share experience.
>Strong AI seems like it would just be a nightmare
In your contrived dystopic scenarios, maybe. You can come up with just as many scenarios where strong AI would be a godsend if it ever became real.
>YES, MATRIX FUCKING MULTIPLICATION
Artificial Neural Networks are matrix multiplication too, and they are a perfectly correct model for the human brain.
>They perform what they are programmed to do with a certain precision/belief.
Which you can say about humans as well.
>DO YOU THINK A BUNCH OF MATRICES WILL POSSESS A VILE WILL AGAINST HUMANS?
Do you think a bunch of neurons will possess a vile will against humans?
What is there to argue with someone who isn't even aware of some of the most well-known research in the field, but still feels like they know better than the people who work with this stuff what the best way forward is? There are just too many posters like you on /sci/ to bother enumerating to each one the reasons why they are wrong. You don't pass the minimum barrier of entry for a meaningful discussion.
>they are a perfectly correct model for the human brain.
I don't think anyone says this.
There's also the fact that that guy is dead wrong about it being "just" matrix multiplication. It's successive matrix multiplications followed by certain non-linearities like tanh or sigmoid. The whole fucking point of neural networks is to estimate non-linear functions, it's what contrasts them with things like SVMs and perceptrons.
The amount of ignorance in these threads is just astonishing.
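The "non-linearity is the whole point" claim above is easy to demonstrate: without an activation function, stacked matrix multiplications collapse into a single matrix, so depth buys nothing; inserting tanh breaks the collapse. The weights are random, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((3, 3))
W2 = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

linear_stack = W2 @ (W1 @ x)            # two "layers", no activation...
collapsed = (W2 @ W1) @ x               # ...is exactly one layer in disguise
assert np.allclose(linear_stack, collapsed)

nonlinear_stack = W2 @ np.tanh(W1 @ x)  # add the non-linearity between layers
assert not np.allclose(nonlinear_stack, collapsed)  # no single matrix equals this
```

That is why "just matrix multiplication" undersells it: the interleaved non-linearities are what let the stack estimate functions a lone matrix never could.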
You're butthurt because you can't refute anything I said, so you just call me an idiot even though you seem to know even less than me on the matter.
Whatever. I shouldn't have expected less from 4chan.
It won't die. Traditional religions are dying out in the developed world and the stupids need something to replace it. Thus, God 2.0 (AGI) Heaven 2.0 (transhumanism / mind uploading) and Eschaton 2.0 (Singularity / exponents / STEM will solve all the world's problems.)
Once the political edgelord phase of modern millennial social rejects fades, I guarantee you we're going to see even more of this bullshit.
It has largely stagnated.
Hawkins' insight about temporal pooling will prove to be, in my opinion, one of the most important contributions to the field of AI. But they basically just can't figure out how to actually implement temporal pooling in the HTM model.
I love the idea behind HTM. A general purpose temporal-spatial dynamics modeler. I just think their model is far too simplistic. Cortex does not act alone. If someone can implement the entire Thalamo-cortical loop, they will probably be the closest to true AI.
> For instance, we need to feed those neural nets thousands of images of cars before they can identify a car in an image. This is clearly not what happens in the human brain.
What's so clear about that? Humans take ~12 months of watching and listening to even start talking.
Source: Yoshua Bengio
get rekt
>Children learn to see with essentially no labeled data, in their first two years of life, even before language really kicks in. Children see much less natural language in their childhood than the amount of text that we currently need to train the best speech recognizers and machine translation systems. By orders of magnitude. Why? Humans seem to be able to better exploit the little data they get, and I believe that it is because they build an internal model of the world around us that captures its causal factors. This allows us to predict what would happen under hypothetical conditions, even though these conditions are completely different from anything we have experienced. I have never actually lived a fatal car accident (by definition), but I can sufficiently well simulate it in my mind (and foresee the consequences) so that I automatically plan to avoid it. So we have many more things to discover on the road ahead of us!
Nothing in there supports your assertion, thanks.
>get rekt
Grow up, manchild.
>to learn how to form basic sentences
Am I being baited?
so much intellectual buttmad right now
it's ok to be wrong on the internet, you know