A.I. General Thread
Why are we still so far away from sentient machines?
Are there any neural network types that accurately simulate neurons as they would function in a brain, rather than in specific organized patterns?
Not possible within the limits of silicon, and thanks to modern laws, patent wars, and market monopolies our advancement will only slow down. I'd say 50 years before we simulate the first low-complexity nervous system, something like an ant's.
If Apple or Google updated their devices to collectively process this A.I., would that be efficient enough to run it at useful speed?
Could something like that be done and would it even have any practical use?
Sorry for all the questions. This stuff is interesting to me.
Not enough processing power. Human brains contain ~86 billion neurons and on the order of a hundred trillion synapses. This shit is a fucking nightmare to simulate in real time.
"One neuron may make as many as tens of thousands of synaptic contacts with other neurons," said Stephen Smith, PhD, professor of molecular and cellular physiology and senior author of a paper describing the study, published Nov. 18 in Neuron.
Even simulating a single neuron is extremely painful if you wanted to simulate it down to the atomic level.
Actually I think the needed processing power probably exists, but it isn't practical. We do have a good grasp of how neurons interact with each other via chemical and electrical reactions.
Just to simulate a low-complexity model of a human brain you'd need something like 50 billion Cherry Trail Atom processors, which would cost around $1 trillion alone. I'm not sure if GPUs can even be used here because of how accurately you have to simulate the electrical and chemical reactions. This is just a guess though; you might need even more processing power than this for a low-complexity model.
All that expense to build a self-aware AI that could end up shitposting on 4chan all day long.
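A guess like the one above can at least be sanity-checked with back-of-envelope arithmetic. Every constant below is an assumption picked for illustration (ops per synapse update, timestep, per-chip throughput), so the result is only an order-of-magnitude sketch, not a real hardware estimate:

```python
# Back-of-envelope estimate of compute for a coarse brain-scale simulation.
# All constants are illustrative assumptions, not measured figures.
NEURONS = 86e9               # commonly cited human neuron count
SYNAPSES_PER_NEURON = 1e4    # order-of-magnitude average fan-out
TIMESTEPS_PER_SEC = 1e3      # 1 ms simulation resolution
OPS_PER_SYNAPSE_EVENT = 10   # assumed cost of one synapse update

ops_per_second = (NEURONS * SYNAPSES_PER_NEURON
                  * TIMESTEPS_PER_SEC * OPS_PER_SYNAPSE_EVENT)

CHIP_OPS = 1e10              # assumed ~10 GFLOPS for a low-power Atom-class chip
chips_needed = ops_per_second / CHIP_OPS

print(f"{ops_per_second:.1e} ops/s -> ~{chips_needed:.1e} chips")
```

Under these assumptions you land around 10^18–10^19 ops/s (exascale territory) and hundreds of millions of low-power chips; change any constant by an order of magnitude and the chip count moves with it, which is why such guesses vary so wildly.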
you are assuming we have to model the human brain down to molecules and neurons. this doesn't need to be the case and is a huge assumption on your side.
the nice counterargument against this is the maybe ~50 year old argument that says to make a machine fly you don't model the feathers and behavior of a bird. you rather use it as inspiration (wings) and figure out aerodynamics. then you create planes, which fly even better than birds, are more scalable, and are more optimized to fit the physical laws.
evolution (that made birds) is just one way for optimizing such things and evolution often makes more bloat than you need.
>All that expense to build a self-aware AI that could end up shitposting on 4chan all day long.
>tfw this is a likely scenario
We don't have a choice. To build an AI worth a shit, you have to go balls out and simulate as many of the real thing's complex functions as possible. Maybe we won't need to simulate neurons down to the atomic scale, but we will have to simulate things like glucose intake, neuron degeneration, and the effects of drugs on neurons/synapses.
Yeah simulating even a single neuron is excruciatingly painful.
I'd imagine that to its creator, it'd be just as disappointing as your child growing up to do the same.
but you have nothing to back up this claim?
neurons in the brain, as far as neuroscience knows today, are basically used for learning and generalization (mathematically, minimizing an error function). this is what deep neural networks do today. but the danger is getting stuck on this idea, since there are many ways (algorithms) to do this.
so many versions of flying machines with flapping wings never worked. it could easily be a dead road, since what you want is not a "bird"/"brain"; you want intelligence, a model of a "thought processor".
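The "learning = minimizing an error function" view above can be sketched in a few lines: a one-parameter model fit by plain gradient descent. The data, learning rate, and target function here are all made up for illustration:

```python
# Toy example of learning as error minimization: fit y_hat = w * x
# to samples of y = 3x by gradient descent on the mean squared error.
data = [(x, 3.0 * x) for x in range(1, 6)]  # samples of the target y = 3x

def error(w):
    """Mean squared error of the model y_hat = w * x over the data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def gradient(w):
    """Derivative of the error with respect to w."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

w = 0.0
for _ in range(200):
    w -= 0.01 * gradient(w)  # step downhill on the error surface

# w converges toward the true slope 3.0
```

Deep networks do the same thing with millions of parameters and a much uglier error surface, but the core loop, compute the error's gradient and step against it, is identical.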
The processing power required to simulate hundreds of billions of neurons is batfuck insane. Maybe we do have that kind of processing power but putting it to use would be very impractical.
Maybe it will be somewhat practical when we reach 7nm x86 processors.
>simulate the effects of drugs on the brain
>spend an extra hundred billion just so we can get the AI high out of its mind
He's talking about simulating neurons and their supporting systems faithfully. Of course efficiencies and differences would have to come in, because they are not biological. No one even knows how to simulate a single neuron like this yet (or whether a single neuron is even possible). The complexity arises when you look at even a small bundle of neurons. You could probably recreate a lot of the pieces easily, such as ion channels, with some fizzbuzz-level programming, but it's very complex to figure out the exact connectedness of these systems. Bayesian statistics can't totally do this, just like a 2D geometry model of your kitchen table can't tell you the thickness of the table.
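For a sense of what the "easy" single-neuron piece looks like, here is a toy leaky integrate-and-fire model. It is the crudest standard abstraction of a spiking neuron; real ion-channel models (e.g. Hodgkin-Huxley) are far more involved, and every constant below is illustrative rather than fitted to biology:

```python
# Toy leaky integrate-and-fire neuron. Voltage leaks toward rest,
# integrates input current, and emits a spike when it crosses threshold.
# All constants are illustrative, not fitted to real biology.
def simulate_lif(input_current, dt=0.001, tau=0.02, v_rest=-70.0,
                 v_reset=-75.0, v_threshold=-55.0, r_m=10.0):
    """Return the membrane voltage trace and spike times for an input sequence."""
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by the input current.
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau)
        if v >= v_threshold:          # threshold crossed: emit a spike
            spikes.append(step * dt)
            v = v_reset               # reset the membrane after spiking
        trace.append(v)
    return trace, spikes

trace, spikes = simulate_lif([2.0] * 1000)  # 1 s of constant drive
```

Even this cartoon fires regular spikes under constant drive; the hard part the post describes, wiring billions of these together with realistic connectivity and chemistry, is where the complexity explodes.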
>but you have nothing to back up this claim?
Well, no, but what I'm saying is that just simulating neurons interacting with each other might not be enough.
To understand how consciousness works we have to make accurate simulations of a human brain under starvation, being flooded with different neurotransmitters and drugs, and all other sorts of scenarios.
Before we can make self aware AIs I think doing the above will become unavoidable. We don't even know how consciousness works right now.
It's complicated making A.I. It's even more complicated making A.I. right (and right as in: not-destroy-the-entire-planet right). Read this book - it'll knock your fuckin socks off - it sure did mine.
well, to answer this question: i'm pretty sure there is nobody in the world working on creating the brain from the molecule and chemistry level. so yeah, this will never happen.
the AI field is all about what i posted. so it was just to bring y'all down to earth.
>A.I. General Thread
>Why are we still so far away from sentient machines?
Because we are also far away from intelligent humans... Lol, no really... I mean, come on, we don't understand consciousness, and we cannot make computers that can "understand" their environment... We can only build machines that take one particular input and give a particular output, and that's light years away from the concept of consciousness... I think that for a machine to actually have consciousness, we first might have to rethink the entire way machines operate and are built
Because we don't know how "sentience" works and the only way to find out is to simulate a human brain with billions of neurons all connected by trillions of synapses.
lel good luck with simulating that senpai
No one explicitly said down to the molecular or chemical level.
This isn't really a mathematical problem(yet). It has more to do with our understanding of the brain. Bayesian statistics is great, and you can even do rough models of neurotransmitters with it. The problem is again, that the models can't account for everything, especially when you don't know what to do. Just as a 2d model of your table can't tell you anything that isn't in the model, neither can these neural models. Just like in programming, or predicate logic, you don't need all the information. It just has to be deduced into a good and usable form.
the article you are referring to is not using Bayesian statistics but neural networks. it's two different things.
some people in this thread did talk about going down to the molecule level
all these arguments about sentience are really philosophical questions, like the "Chinese room" etc. but you can also argue that our brain suffers from the same problems: you can't prove to me that you are sentient, for example.
there is a talk by Sam Harris (neuroscientist) about how we maybe don't have free will etc. so then our brain becomes just a function with input and output, or a state machine.
I think the key to intelligence is the ability to "learn" and "understand".
The issue with learning is that it's not enough to just assign 3.14 to a variable called pi, because the computer would know that pi is 3.14, but it wouldn't really understand what that means.
If a human learns a lot of stuff, he's able to make connections, but to a machine everything is an individual fact without any relation.
If you look into the semantic web, it is about creating those connections and relations, and the semantics, so maybe it will help in making AI. But I dunno
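The semantic-web idea above, storing relations between facts instead of isolated values, can be sketched as a tiny triple store. The class, names, and facts below are made up for illustration and not from any real ontology or RDF library:

```python
# Toy triple store in the spirit of the semantic web: knowledge is held
# as (subject, predicate, object) triples, so facts sit in a web of
# relations rather than as isolated variable assignments.
class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, subj, pred, obj):
        self.triples.add((subj, pred, obj))

    def query(self, subj=None, pred=None, obj=None):
        """Return all triples matching the pattern (None = wildcard)."""
        return [t for t in self.triples
                if (subj is None or t[0] == subj)
                and (pred is None or t[1] == pred)
                and (obj is None or t[2] == obj)]

kb = TripleStore()
kb.add("pi", "is_a", "mathematical constant")
kb.add("pi", "approx_value", "3.14159")
kb.add("pi", "relates", "circumference to diameter")

# Unlike a bare `pi = 3.14`, the value is now connected to other facts
# that a program can traverse.
facts_about_pi = kb.query(subj="pi")
```

This is the gap the post describes: a bare assignment stores a number, while a relational store at least lets the machine follow connections between facts, even if that still falls well short of "understanding."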
This isn't really on topic, but I think that free will cannot completely exist (at least in the average person). Too many things are implanted in our minds that we rarely have truly "original" thoughts.
Sure, you can tell a funny joke. But where did it come from? A template for "what's funny" must have been used.
Chill brah, free will is the ability to make a personal choice. That's all it is, that's all it's ever been and will be.
The notion that free will does not exist is absurd since you have just made a choice to question the legitimacy of "free will" in people. You have literally just made a personal choice thus proving you have free will.
Now whether synthetic beings can have free will is a whole other problem.
You're literally making a choice to take interest in something, so yeah. Nobody put a rifle to your head and told you to be curious at a specific time and place, you made that choice. This is all your fault.
That's free will, yes, making choices. You decide to get up in the morning, to take a shit, brush your teeth. You decide to go to work, buy your girlfriend expensive gifts and browse 4chan. You can choose to tell a funny joke.
But where did all the choices come from? How do you decide what to do?
"I'm hungry so I'm going to eat" is acting on impulse.
"I don't want my girlfriend to leave me so I'll buy her gifts" is acting on fear / emotion /impulse.
"I'll tell people this funny joke I just made up" is 1. Making up a joke (where did it come from?) and 2. Having the desire and excitement to tell it to other people, which is again an emotion
I get it now. This is why the U.S. was created: because it was meant to be culturally enriched by the Muslims and other immigrants whose country was made bad on purpose by the will of a greater force. It was made bad, and it's not their fault at all, just like it isn't the founding fathers' fault for making a free country to advocate civil liberties and such. It's all for the greater good goys, and of course, questioning is merely another not-free-will action, so do what the greater force tells you.
Aka, if you feel like raping or murdering somebody, just do it because the greater force wants you to do it. But of course, it's not your fault, just remember goys, just accept everything and don't question because it's pointless goys, ok.
Simulating anything bigger than just a few atoms with current computing is for all intents and purposes impossible.
People in my lab rent supercomputer time for weeks just to simulate how a peptide 20 atoms long interacts with <100 H2O molecules for several nanoseconds of total time.
To 'simulate' a human brain on the atomic scale would be so many orders of magnitude more computationally expensive that it isn't even worth thinking about. You'd probably have to turn the whole earth into one massive computer. That probably wouldn't even be enough.
Cause and effect is not what you are describing.
If I punch you in the face, you're not going to say "oh well it was meant to happen". No... You punch me back, because my punch caused you to do that. And if you ask "why did you punch me", it's because you're stupid.
I have tried to explain the same concept to people; some get it, some do not. Some people have stronger mind-firewalls than others, and their mind might only play in their current state of "know for sure" reality. It is harder to implant new thoughts in those, as they don't seem to live life with a mindset of parallel coexisting ways to explain reality. Puzzle pieces that will not fit their puzzle right away are just discarded instead of put to the side so that they might complete a different puzzle later.
This thread is wonderful.
>Why are we still so far away from sentient machines?
Because our hardware isn't fast enough to simulate a brain and gaining insight into how intelligence works is very difficult.
>Are there any neural network types that accurately simulate neurons as they would function in a brain, rather than in specific organized patterns?
Recurrent neural networks are topologically more similar to neural networks in brains, and they are state of the art in certain problems like speech recognition.
Hierarchical temporal memory and spiking neural networks also attempt to be 'more like the brain' but are pretty useless for actually doing things.
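The topological point about recurrent networks can be shown with a single-unit recurrent cell in plain Python: the hidden state feeds back into itself, so the output at each step depends on the whole input history, unlike a feedforward net. The weights here are fixed toy values, not trained:

```python
import math

def rnn_step(h, x, w_hh=0.5, w_xh=1.0):
    """One step of a 1-unit recurrent cell: new state from old state + input."""
    return math.tanh(w_hh * h + w_xh * x)

def run_rnn(inputs):
    h = 0.0  # initial hidden state
    states = []
    for x in inputs:
        h = rnn_step(h, x)  # state at time t depends on all earlier inputs
        states.append(h)
    return states

states = run_rnn([1.0, 0.0, 0.0])
# even after the input goes to zero, the state persists and decays:
# a simple form of memory that feedforward nets lack
```

Real RNNs (LSTMs, GRUs) add gating so that this memory can persist over long spans instead of decaying, which is what makes them competitive on sequence problems like speech recognition.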
Trying to simulate/approximate the brain is probably the wrong approach to AI: too computationally expensive. (Most actual AI researchers treat biological inspiration with extreme scepticism.)
A better approach to AI is to look for INSIGHT into how thinking works and apply it, or find ways to combine modern connectionist/neural/ML techniques with old school logical/search AI.
This is what the people leading the field (Geoff Hinton, Jürgen Schmidhuber, Yann LeCun, the AlphaGo team) are doing, and it is working.
Nick Bostrom is interesting for sure but doesn't really understand what he's talking about. Your time would be better spent reading an actual AI textbook like pic related.