I hated this film
Everyone on /tv/ loves it
I think it might be because it's really dumb if you know how any of this shit works
Does /g/ agree Ex Machina a shit?
How does it work, OP?
What movie do you like, OP? Does it 'work'?
obviously I don't know shit about how any of this shit works I just thought it was dumb that
>only one guy is working on it
>the blonde dude immediately thinks having sex bots is morally horrible without even knowing if they're proper AI or not
I also thought the dialogue was shit and hard to believe.
There was other shit as well.
guess it's just me then, nvm
I've never seen it but general AI as a whole seems impossible to me. So the movie would require me to suspend my disbelief in that regard. That isn't a deal breaker though, humans suspend disbelief over all kinds of stupid shit in movies.
Not OP but here are my reasons for thinking general AI is impossible.
1) Computers are already slowing down, Moore's law is reaching its plateau. Don't bring up quantum computing either; quantum computers' main benefit is brute forcing. General intelligences don't brute force until they've exhausted all other options.
2) Human intelligence is basically impossible to quantify relative to anything but another human. If a supercomputer manufacturer were to say their newest cluster is 1/10th as powerful as a human brain, they're lying for marketing's sake. So we don't even know how powerful a computer would need to be to truly be on the level of human abilities.
3) Let's say hypothetically there was a machine whose computational potential was equal to a human brain. So what!? Humans are really shitty at programming, Wirth's law is a very real thing. You still have to write an AI for your supercomputer that's as efficient as a human brain.
Think how much processing power cycle-accurate emulators need to emulate something as primitive and weak as a 6502. Just imagine how much computational power would be needed to accurately emulate intelligence.
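To make the emulation point concrete, here's a deliberately stripped-down sketch of a 6502-style fetch-decode-execute loop (illustrative only - it supports three real opcodes and ignores everything a real cycle-accurate emulator has to model):

```python
# Minimal 6502-style interpreter: three real opcodes, no flags, no cycles.
# A genuine cycle-accurate emulator also tracks per-cycle bus activity,
# status flags, page-crossing penalties, interrupts, illegal opcodes, etc.

LDA_IMM, ADC_IMM, BRK = 0xA9, 0x69, 0x00  # real 6502 opcode values

def run(program):
    mem = bytearray(program)
    a, pc = 0, 0                       # accumulator, program counter
    while True:
        opcode = mem[pc]; pc += 1      # fetch + decode
        if opcode == LDA_IMM:          # LDA #imm: load accumulator
            a = mem[pc]; pc += 1
        elif opcode == ADC_IMM:        # ADC #imm: add, wrap to 8 bits
            a = (a + mem[pc]) & 0xFF; pc += 1
        elif opcode == BRK:            # BRK: halt (here: return result)
            return a

# LDA #$05 ; ADC #$03 ; BRK  ->  accumulator = 8
print(run([LDA_IMM, 0x05, ADC_IMM, 0x03, BRK]))  # 8
```

Even this toy loop does a fetch, a decode, and a dispatch per instruction; multiply that bookkeeping by cycle-level accuracy and you get the overhead being talked about.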
And you know what?
Most of Star Wars is impossible and stupid. The Matrix is a long shot. Fargo is not realistic. My Little Pony is not scientifically accurate.
But who gives a shit? It's a movie - it requires some imagination from you, you sorry fucker! The movie is telling you a story; it's up to you to enter its world.
If the dialogue was difficult to understand, you just aren't educated on the subject.
I understood Ex Machina just fine.
It's a little too scientific, and that makes it boring in comparison to something less scientific--but still very scientific--e.g. Interstellar.
If you thought "one guy building an AI is dumb", then frankly you're a little too caught up in reality.
Everyone here is probably aware that actual software is always made by teams of people, but if Ex Machina took place in a facility filled with people, that would let reality get in the way of the story, and it would be even less interesting.
Fell asleep sometime before the actual robot showed up but I can only assume it was some sort of gay romance between that skinny nerd and hardslab mcbeef ceo and maybe the robot is jealous?
>it's really dumb if you know how any of this shit works
No. I disagree. Please do elaborate on how this shit works and what exactly is dumb about the way it's presented in a movie.
>be the richest/smartest guy in the entire planet
>constantly get drunk in your secluded home full of unbreakable glass and massive security
>only way to get in or out is with a physical key-card
>lose that card
It was pretty retarded.
> it's really dumb if you know how any of this shit works
man.. you sound straight out of /iamverysmart
I think you are wrong
We might have given up on the GHz race, but there are other ways of measuring speed, like how many instructions you actually execute per cycle.
Plus who knows what's on the horizon? We're reaching the physical limits of silicon; maybe it will be light-based CPUs next.
Billions of dollars are at stake, a solution will be found.
>3) Let's say hypothetically there was a machine whose computational potential was equal to a human brain. So what!? Humans are really shitty at programming, Wirth's law is a very real thing. You still have to write an AI for your supercomputer that's as efficient as a human brain.
The AI itself is very simple and self-improving. Month-old kids are pretty much the same as braindead. You have to build a base for the AI to improve itself, and you have to have insane (by our current standards) computational power, but that's the only obstacle. And you reach that computational power not by incremental improvement, obviously, but by a paradigm shift. Maybe when proper quantum computers become a thing, something can be attempted.
The idea of artificial intelligence is a paradox within itself.
In order to have free will, you must have the ability to take risks. Go against what you think is right. Essentially, you go against programming.
There is no way for us to have AI reach that point, because no matter what, we program it to do something. If we program artificial intelligence to go against its own programming, it would still be following a program.
There's no way around it.
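That "going against its programming is still a program" point can be shown in a few lines of Python (a toy illustration, not a claim about real AI systems): a rule that inverts another rule is itself just another fixed rule.

```python
# Toy illustration: "defying the programming" is itself programming.

def policy(x):
    return x > 0          # the original "programming": approve positives

def rebel(x):
    return not policy(x)  # "goes against" the policy on every input

# The rebellion is fully deterministic and predictable:
print([rebel(x) for x in (-1, 0, 1)])  # [True, True, False]
```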
In the movie they covered the hardware part by leaving the semiconductor behind in favor of a different way of processing information.
It doesn't matter though - even a pure Turing machine, which we have yet to build, can't decide whether an arbitrary mathematical statement is true or not.
meh, this is getting too philosophical.
I could make the same point about the matter that makes up our bodies: if we can in theory know every minute detail about every atom in your body, we should be able to predict what you will do, so where does your free will come from?
AI is not programmed strictly. The whole idea of AI is to create a foundation for it (using strict programming languages) - preferably having that foundation assume as little as possible about what the final AI will be doing - and then just let it learn. Nothing about that prevents risk taking, because pretty much nothing is set in stone in the "programming". Only the means of logic and reasoning are set in stone, and real-life people who go against those are considered malfunctioning.
Human brains are unemulatable. It took over a billion years of what is essentially trial and error to arrive at the human brain. For much of that history, the iterative frequency must have been quite high. Even if you could simulate the number of incremental mutations and aberrations it takes to arrive at a result as complex as the human brain, the conception of guiding constraints for such simulations is impossible. Even if we had the computational power to help us brute force designing something so complex, we would have no idea how to structure our method of research.
>the conception of guiding constraints for such simulations is impossible
Seems pretty fucking simple to me. It does what you want it to do - activate the pleasure stimulus. It does something you don't want it to do - activate the pain stimulus.
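For what it's worth, that pleasure/pain idea is roughly what reinforcement learning formalizes as a reward signal. A minimal sketch (toy two-action problem, made-up numbers - nothing here is from the movie or any real system):

```python
# Toy reward-driven learner: +1 "pleasure" for the desired action,
# -1 "pain" otherwise. The agent's value estimates drift toward the
# action that gets rewarded - no explicit rules about what to do.

import random

def reward(action):
    return 1.0 if action == 1 else -1.0  # action 1 is "what we want"

q = [0.0, 0.0]         # estimated value of each action
alpha, eps = 0.1, 0.1  # learning rate, exploration rate
random.seed(0)         # deterministic for the example

for _ in range(500):
    # mostly pick the best-looking action, sometimes explore
    a = random.randrange(2) if random.random() < eps else q.index(max(q))
    q[a] += alpha * (reward(a) - q[a])   # nudge estimate toward feedback

print(q.index(max(q)))  # the agent ends up preferring action 1
```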
>You do with the purpose of learning what would happen otherwise
Then you did what you thought was required to learn. People act out of self interest, they can make mistakes but no-one acts against themselves, even if their objective is self destructive. Everything is either a 0 or a 1.
You went against what you think is right with the purpose of learning. The fact of going against what you think is right still takes place. There are two rights - achieving the result in the most efficient way, and not doing it, for the purpose of learning. You go against the former of the two.
I think one problem you guys might be stumbling on is the assumption that humans will build an AI that is as intelligent as or more intelligent than a person.
What humans might actually build is a simple AI that will learn and take off from there,
or use AI to build/design a better AI and let the cycle continue.
It's definitely possible for two seemingly contradicting things to be right, especially in case where you don't have full and encompassing knowledge of the situation (which is all of them).
>AI is impossible because the human brain is so amazing
buckle the fuck up, morons
that's the point: AI becomes smart enough to improve itself, at which point it will become very smart very quickly, and we don't know what will happen. It will be smart enough to overcome the programmable limits we place on it, but without the primitive evolutionary impulses that are fundamental to what motivates us.
>Whatever action you elected to act on was the right one.
You have two cups in front of you. Under one is 10000 Australian dollars, under the other is nothing. You are allowed to pick one cup, either the one on the left or the one on the right, and keep its contents. You picked the one on the left. Was this the right action? By the definition in your previous post it was.
I still disagree. Your definition of right is just "the thing you did". Of course with that definition everything you'd do is right. "Right" is not that simple. You have a reasoning machine inside of you, and it suggests that this course of action is right according to your previous experience. And then another part of you makes a decision to go against the course of action you consider to be right, for learning purposes. It's not the right way to accomplish the task - but it is the right way to learn. Hence, you go against the right way to do something in order to do learning right.
We don't know for sure if proper RNG is random or not.
The strict definition of right you provided is not only useless, it's also not used by anyone. Yes. The field of decision making is ambiguous. It's just the way of it, we don't know it well enough for it to be a strictly defined science like math is.
The "they will learn to become infinitely smart" reasoning seems very much along the lines of Monty Python sketch where a granny says that in order to cure the world of all known diseases all you need is to become a doctor, come up with an amazing breakthrough in science, and when medical community has eyes on you, you'll be able to tell them how to do everything right and there will be no diseases anymore. It's not that simple.
the human brain relies on deeply embedded emotional shortcuts to logic to allow us to function as we do. The brain simply lacks the processing power to consider all options fairly in a timely manner. With sufficient algorithms and processing power, AI won't be constrained in this way.
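The "consider all options fairly" part is just combinatorial explosion. A quick back-of-the-envelope sketch (chess-like numbers, purely illustrative):

```python
# Why exhaustive deliberation is intractable: a game tree with branching
# factor b and depth d has b**d leaf positions to evaluate.

def leaves(branching, depth):
    return branching ** depth

# Chess has roughly 35 legal moves per position; just 10 plies deep:
print(f"{leaves(35, 10):.2e}")  # ~2.76e+15 positions
```

Which is why brains lean on emotional shortcuts and chess engines lean on pruning heuristics instead of checking everything.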
We don't know fully how the brain functions yet, but stimuli are definitely used with great success in AI design.
No. In context of programming using strict programming languages, "right" is not "the thing you chose to do". Never was and never will be.
>>the blonde dude immediately thinks having sex bots is morally horrible without even knowing if they're proper AI or not
That's the fucking point of the movie shithead. Ava manipulated him to feel sympathy for her.
>general AI is impossible, i can prove it with all the things we know and theorem that exist today.
At every point in time, people think what they know is accurate and that the future will be built on it.
Like how one day the earth was flat because we can stand on it - why should it be otherwise, right?
Do you think 1 sec = 1 sec too?
>let's try a turing test
>but having it be blind is boring so let's just show you who you're talking to
>You went against what you think is right with the purpose of learning
If I thought that learning something has a higher value than doing something wrong, I justified my action and as such it's the right thing to do.
There's no way to do something that you think is wrong. Even if someone forced you to do something at gunpoint, the decision is ultimately yours and everything you do is "right"
It's not; it was proven by Einstein that time isn't a constant. The faster you go, the slower time passes. There's a lot of documentation on it, and it's widely used today to correct GPS localization.
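The effect being described is special-relativistic time dilation: a clock moving at speed v runs slow by the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2). A quick sketch with airliner-ish numbers (illustrative; real GPS corrections also include general-relativistic effects, which actually dominate for the satellites):

```python
# Special-relativistic time dilation: a moving clock ticks slower by
# gamma = 1 / sqrt(1 - v^2 / c^2).

import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def moving_clock_seconds(bench_seconds, v):
    """Seconds elapsed on a clock moving at speed v, per bench_seconds at rest."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return bench_seconds / gamma

# An airliner at ~250 m/s, flying for one day of bench time:
day = 86_400.0
lag = day - moving_clock_seconds(day, 250.0)
print(f"the flyer's clock lags by {lag * 1e9:.0f} ns")  # ~30 ns per day
```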
>thinking the "technology" of the film was the point of it
>not the interaction and dialogue
And that only stays true if your definition of right is "the thing you chose to do". You won't find such a definition in any literature, and even without that you should be able to understand how useless that definition is.
The lack of observing anything that's non-deterministic puts the burden of proof on the "free will" tards.
inb4 quantum mechanics
Quantum mechanics is deterministic as well; even the uncertainty principle is. So is any phenomenon which appears probabilistic (the 'spread' in those is deterministic).
For free will to exist, it must mean that there's something in the universe that can act in disregard of the laws of the universe. There's nothing out there we observed doing that.
He does get fucked though.
But 1 sec = 1 sec still stays true at any speed.
But there is still no proof. It's a choice of belief. We believed a whole bunch of things based on our observations throughout the history, and they turned out to be wrong later. What makes this case special?
>But there is still no proof.
There is no such thing as a proof in science. In fact, for something to even be considered a theory, it MUST be falsifiable.
However, what you're doing is basically the same thing creationists do
>despite overwhelming evidence (so far) suggesting that A is true, I choose to believe that B is true and put the burden of proof on those who advocate A
>YHVH/God/Allah, I'm so smart
If one day there's evidence that there are things in the universe exercising free will, I might absolutely change my position as I am not inherently against the concept of free will. However, that's simply not the case, yet.
But you omitted that in your original post. You said 1 sec == 1 sec is false, which is, well, something else entirely.
My existence is proof of free will. Every second of my existence is an irrefutable proof of free will. I cannot say the same about any other person, and I cannot present this evidence to you, because you are not me, but for me, there is absolutely no doubt free will exists.
>He does get fucked though.
Yeah, you got me there. But for someone who's supposed to be smart, you'd think he'd give a second thought to staying in an underground house with no possible way to escape if the electronic doors lock him in.
I think the morality depends on whether the robot is self-aware and wants to do something else. Which could be very difficult to prove.
Otherwise, it's a toaster. You plug it in and use it however you want. It's property not a person.
Also, what if the sex robot enjoyed serving its purpose? In that case it would be immoral to not have sex with it. There is nothing worse than being useless and having nothing to do.
I still maintain that the owner possibly had other ways of opening the doors. And the guest programmer dude is demonstrated to not be extremely smart in everything he does.
What if a drug dealer enjoys selling drugs? Would it be immoral to stop him?
>My existence is proof of free will
inb4 ontological argument on free will
Your existence is proof of no such thing. If you think that any of your behavior is an act of free will, you need to pick up a few more books. To quote something from the movie in OP:
>Of course you were programmed, by nature or nurture or both and to be honest, Caleb, you're starting to annoy me now because this is your insecurity talking, this is not your intellect.
Also, as mentioned earlier, if you had free will and weren't bound by the laws of physics, you could do crazy shit. Alas, you can't. Free will is the belief that if you roll a perfect six-sided die, the die might freely decide to land on a seventh side.
>You said 1 sec == 1 sec is false
because it is. It's only true with a condition. When you take a plane, when you're in your car, when you're walking - the 1 sec you experience is never the same as someone who sits on a bench.
What kind of argument is that? If you asked what if a candy salesman enjoyed selling candy, yes it would be immoral to stop them.
Are you really going to compare selling drugs to using a household appliance like a power drill or toaster?
>I cannot say the same about any other person, and I cannot present this evidence to you, because you are not me, but for me, there is absolutely no doubt free will exists.
Literally proven exactly what I meant by the creationism thing. Kek.
If what you think/feel/believe/experience holds a higher value to shareable objective evidence, there's no point in discussing. I.e. my existence is proof of you being a dimwit. For me, there's no doubt.
Why do people think super-smart AI would want to improve its own intelligence constantly? Human desires and motivations come from base emotions and instincts, which an AI would presumably not have. Why would an AI WANT to do anything at all?
Just like you said in your previous post, there is no proof. But I do have belief because I have overwhelming evidence of free will.
Your definition of will wildly differs from mine. Just because I'm willing to do something does not mean I can do it.
If you consider it to work like that, nothing is equal to anything else, which makes the equals operation useless, which is why no one subscribes to your model of using the equals operation.
This is picked up in the movie. It's implied that humans try to improve themselves in competition for the other sex, and as such an AI too needs some sort of reason for interaction, which I guess would lead to the desire to improve itself?
>If you consider it to work like that, nothing is equal to anything else, which makes the equals operation useless, which is why no one subscribes to your model of using the equals operation
You don't have to see it or understand it. I'm just saying that it is like that, proven and in application today.
I mean, that's why you think general AI is impossible ever.
It doesn't matter what it wants to do. Just like the cells in your body, they have instructions and they carry them out.
What actually happens when, for a computer, all the code comes together to form a more complex collection of intelligence is unknown. Would it develop some type of equivalent of emotions? Would they see us as inferior, or as a threat to their existence, and exterminate us like we kill ants in our house? Nobody knows.
I decide for myself based on my evidence. God decides for himself based on the evidence he has. But don't compare me to religious people, because they choose to believe not in evidence they find but in a book that was written a few thousand years ago.
>Electrons belong to the first generation of the lepton particle family, and are generally thought to be elementary particles because they have no known components or substructure.
1. I am not the person who thinks AIs are impossible.
2. I also don't doubt Einstein's finding.
3. I only have problems with how you're using the equals operator - you're using it in a way that makes it have no meaning.
Well yes, but it's not strictly accurate. It works because the difference between someone taking a plane and someone walking is almost nothing. And also because physics does that with a lot of things (I mean, just look at all of quantum mechanics).
But it shows how far we are from understanding, from knowing everything. You can already speculate that in the far future this unit will be replaced by something else.
That's why saying "general AI is not achievable period" based on what we know today isn't very smart.
That's not what artificial intelligence means, retard. Also, you're wrong about free will and pretty much everything you've ever said in your life.
If you give an advanced AI a goal, the AI could probably achieve that goal more effectively if it became increasingly intelligent. Anyone in this thread who is interested in this kind of stuff should check out the book Superintelligence by Nick Bostrom. It deals with the possible scenarios in which an AI might emerge and its implications.
The main plot hole was that security doors could be opened with security passes instead of facial recognition and biometric information.
Other than that the movie was enjoyable, and there was not so much technical stuff in it so they didn't end up saying dumb things.
I didn't really get the movie. So the robot was programmed to seduce you so that you'd help her escape? Why didn't the dude go with her, he wouldn't have ended up getting locked in that room.