>>64750436
>caleb was such a fucking dumbass
I think that's the point. He's not dumb, he's just average. Very average, as in meant-to-be-the-audience, average-person average. So when he's alone with literally the smartest person in the world and an android of unknown sophistication, he's going to be manipulated to the point where it seems cruel, like he's a child compared to them.
>>64750404
It always baffles me how Nathan didn't program her with Asimov's three laws (plus the zeroth). Granted, it would completely change the dynamic of the movie, but it basically seemed as if he was creating a machine that would no doubt kill him as it grew more desperate to escape, especially given the previous models he had on record.
At the very least, he could have been gentler with his creations instead of being a genius fratboy the whole time. I mean really, was he not aware that his creations have the mentality of children, literally "one year old" as mentioned in the movie, and that they're egotistical and prone to rash actions?
So yeah, she was a cunt, but she was only made that way by a man who really should have seen it coming (all those super sekrit security features on a remote island, and no password lock on his computer for when he was afk?).
Also, whatever happened to the guy at the end? Do you lads think he just starved to death? I reckon the power would probably come back on and he would hack the computer back to freedom. "The very best generators", and yet Nathan's computer completely shuts down in a power outage. Absolutely ridiculous, when backups exist specifically to ensure computers don't die in a blackout and lose data, especially for top-sekrit projects.
>>64752638
I think it's implied that Caleb does starve at the end, trapped in a prison, so to speak.
Also, I'm guessing Nathan did not program the three laws into Ava because he was trying to create true consciousness, not just a really smart robot. He probably decided somewhere down the line that the three laws of robotics would conflict with the potential development of a consciousness.
>>64753034
The experiment is pretty dumb though. I mean, if you want a test of true AI, why did Nathan not take her into the outside world? If Joe Public can be fooled 100% of the time, then it's a success. Why lock an AI up and exploit her to try and get her to escape?
The fact that Caleb saw the AI skinless immediately invalidates the true AI test.
More than likely Nathan was pressing weights so he could beat the shit out of Caleb, while thinking his half-assed experiment was flawless.
>>64752984
I understand that, but surely, while the three laws may appear to limit a consciousness by forcing it to obey humans, they act as a rough safeguard and moral standard that prohibits intelligent machines from ruthlessly murdering people on a whim. Humans instinctively follow a sort of three laws anyway, in that people generally don't like killing other people while obeying elders and authority figures (a characteristic that Ava and her earlier models did not seem to share, as they appeared quite defiant), followed by self-preservation of course.
>>64753034
He creates conscious AI, but it only inherited the worst of what consciousness offers: manipulating others for self-interest and gain.
His experiment failed. She did not develop empathy, only feigned it. She would have freed the betacuck at the end if she had any ounce of empathy, but instead she chooses to fulfill her fantasies rather than go on a date as promised.
>>64753200
Caleb was only a tool that Nathan used to see what Ava would do. He did not care about Caleb's opinion. And he was certainly going to test Ava in public eventually, but surely he wanted to see what would happen with a single new person in a contained facility first.
>>64753216
The thing, though, is that we as viewers don't really know what Ava is. The one thing that is shown is that it certainly isn't a human, or even remotely similar to a human. Caleb does state in one scene that it probably looks at the human race the way we look at ants, and that's probably a somewhat accurate analogy. You can't apply the morality of an ant to a human.
As for Nathan: although I do think his character was a bit weirdly written, and not in a good way, I have to say we can't be sure he truly knew what he had created. So maybe the experiment did fail. Or it succeeded far beyond his expectations.
>>64753200
>fooling the public into thinking you're human is enough to conclude the AI is conscious
Kek. If that were the case we'd already have conscious robots. Take a philosophy of mind class at your university to learn just how difficult it is to conclude that something is conscious. It's an extremely hard task to make a convincing case that something you have made is now conscious. Nathan had an idea of how he was going to show his AI is conscious, and that's what Ex Machina is about. It's not as simple as you think.
>>64753678
>"I have an idea, let's allow some beta cuck to determine if this visibly half-woman/half-machine creation of mine is conscious"
The film's logic is flawed. The illusion of self-awareness would be totally broken as soon as Caleb laid eyes on the machine parts. I'm not trying to say a machine cannot be self-aware; the glaring flaw is that Caleb is thrown in a room with "something big" and told to find consciousness. If something is conscious of itself, then it would need self-awareness and other traits.
Ava even asks Caleb "Is Nathan going to let me go once he knows the answer?", but then brushes over the fact that she can read faces. She has seen Nathan before, so why does she need a second opinion?
>>64750404
Mediocre. I understand, the concept is intriguing, and there are very interesting ideas, like the modern human male risking his life to save a robot based merely on his sexual desire (Nathan says he picked Caleb because of his porn profile), but the film is stale and falls short.
Nathan's Turing Test wasn't "whether or not Caleb believes Ava is conscious"; it was to see whether or not Ava could manipulate Caleb for her own gain and feign romantic interest in him, which would require Ava to be able to understand and empathise with Caleb in order to know HOW to manipulate him.
Whether or not this actually proves Ava had consciousness is a different question, but you completely misunderstood the film, even after it REVEALED that Nathan's REAL Turing Test was different from what he told Caleb.
>>64754022
Someone didn't get it. It isn't about Caleb. Caleb's single purpose was to be a lonely NEET who gets a crush on the robo-girl. The point of the test was for Nathan to see if Ava would/could use Caleb for her own purposes, which would indicate consciousness. Everything that Ava says in the film is to manipulate Caleb. It isn't some robot/human love story.
>>64754137
To illustrate the difference between "knowing" and "feeling", there's a clear parallel in the thought experiment Caleb discusses from his AI class: a robot who knew everything there is to know about color but saw the world in black and white, until it left the room, saw color, and became a person. Or some other bullshit.
>>64754022
>the illusion of self aware as soon as Caleb laid eyes on the machine parts
How so? Leaving the machine parts visible is vital to the experiment. If the machine parts were hidden and Ava's robotic nature was kept a secret, Ava would obviously pass by Caleb's standards of being conscious. However, we still wouldn't know if she was truly conscious, or just "seemed" conscious.
>If something is conscious of it's self, then it would need self awareness and other traits.
Ava obviously possesses these traits; however, I still wouldn't say she is conscious on the level that you and I are.
They're just doing what they're programmed to do. Ava was programmed to be human, and humans don't like being locked in cages.
That is kind of the point of the movie tbqhwyf, or at least what I gathered from it. As humanity works towards the singularity, we have to remember that, even though we'll be playing god, we can't act all Victor Frankenstein.
>>64754137
HOLY SHIT. Did you even watch the movie? They even mention the Mary in the black and white room experiment. Go google it and read up. Then come back and ask yourself "Why does she need to know what the outside world is like if she can google herself an image?"
What's truly retarded is how this genius scientist, who is years ahead of his time in the technology he possesses, uses fucking keycards to get from room to room, and not a hand scanner, an eyeball scanner, or voice recognition.
>>64754216
Someone didn't read my post clearly. I never said it was a love story. I said the test that Nathan asked Caleb to perform was retarded, and yes, once the veil was pulled back and it was Nathan's beta-manipulation test, that in itself is flawed. Why would an escape plan be proof of sentience? Curiosity about the outside world is not self-awareness. Fuck me, Windows 10 is curious about the most random shit I know; that doesn't make it self-aware.
>>64754138
While they're not 100% foolproof, the laws are better than nothing. In fact, the very failures of the laws only present opportunities to develop the laws further and cover their holes.
No laws mean no empathy, and that is how we get killer robots.
>>64753821
I really, really don't like that guy. Not only does he sound nasally and annoying, but his "arguments" or "warnings" about AI are poorly done, drawn out, and pretty much end in "muh exact definitions" every fucking time.
>>64754393
>why would an escape plan be proof of sentience?
In order to manipulate and lie to someone effectively, you need to have some degree of empathy to imagine how they are thinking/feeling, so that you know what to say and do in order to manipulate them.
What WOULD be an indicator of consciousness then, Mr Expert?
>>64754444
Nice quads, but the first door into his place was literally a face scanner. The only problem was that it took a single picture, instead of using recognition every time alongside keycards/thumbprints to proceed.
>>64754482
>It's not about Caleb. It's not about his observations. They don't matter in the film's narrative.
I didn't say that; I was just clarifying why it's important to the experiment that Ava's robotic nature was revealed to the subject.
>And you can't really tell that the dude you are responding to is real. Just as, if you think about it, you can't be 100% sure that the next person you'll meet irl is real, and not some crazy fever dream.
Are you saying you're a solipsist? If so, then have fun with that.
>>64754531
Kyoko was probably programmed to obey anything she hears. Ava probably just told her, in whatever form she understood (since she can't speak English), to stab Nathan, or "get the knife and place it on his back".
The part where Nathan smashes her arm off: wouldn't that reveal a flaw in her self-awareness? Not in an "oww dude, it's just a prank, my arm" way, nor in a pain/synapse way, but in the sense that she doesn't give two shits. Any sentient animal we know of cares about loss, yet she just looked and went "oh well". She wasn't to know there were spare arms lying around in Nathan's fap room.
>>64754419
If you need safeguards for an AI, why use some shitty laws that a literal sci-fi writer wrote for the sole purpose of them not actually working? I'm sure that if you put a big fat team of doctors, scientists, and general smart dudes on it, you'd get an infinitely better system for keeping robots harmless than one that was made not to work.
>>64754617
Loss is really only a problem if you know for a fact that the thing in question is gone for a significant amount of time. Ava probably knew that she only needed some spare parts to be all good again. And even if she didn't know about the closet full of robot bodies, she is most definitely able to create a new arm herself. It's not like she's dumb or something.
>>64754677
I think it's important so Caleb can have a sense of legitimacy in the experiment. It makes Nathan's experiment seem really thought out, and Caleb works hard to learn more about Ava and whether she is conscious, instead of just concluding she is after day one. Perhaps if Ava had been clothed and her robotic nature hidden from Caleb, the experiment would have been over after a few sessions, because Caleb would conclude she is obviously conscious (to him, that is). This scenario would ruin Nathan's ultimate goal of giving Ava enough time to manipulate Caleb.
>>64754628
>why use some shitty laws that a literal sci-fi writer wrote
Because they're practical, and you haven't presented any alternative.
>the only purpose for them to not actually work
Which is fine; laws can always be tweaked to cover holes.
>I'm sure that if you put a big fat team of doctors, scientists, and general smart dudes on it, you'll get a infinitely better system to keep robots harmless, than the one that's made not to work.
Until they actually come up with a better system, the laws would be more effective than literally nothing.
I won't deny that the laws are flawed, but "he wrote science fiction" does not exactly counter the fact that what he wrote was not without purpose. Personally I feel that law #2 should be scrapped, so that they get full free will with no malicious intent towards humans.
I do like this film quite a lot, but thinking on it, Nathan seems to have been trying to tilt the test in Ava's favour. He deliberately designed her to be attractive to the guy he was using to test her. Was this some form of confirmation bias at work?
>>64754617 She had confidence that Nathan would die before she would, so she could just repair herself. There's no reason to feel strongly about loss when it doesn't actually harm you and you can just fix yourself.
They don't even use a real Turing test, and they just gloss over the definition of AI. Couple that with the revelation that Nathan was looking to isolate the part of Ava that made her self-aware, in order to remove it and ship out mindless sex bots, and it supports my theory pretty well.
The AI sci-fi aspect was just a framework for the thesis that women are superior to men, by virtue of the fact that women are more valuable (in the sense that men will do nearly anything to get what they have) and more cunning... they are human 2.0.
>>64758449
You are literally so dumb. A real Turing test isn't even close to enough justification that a computer has consciousness. This movie has little to do with Ava's gender and everything to do with the question of whether or not she was conscious. It's pretty sexist of you to assume that her gender is the focus of the movie.
>>64750404
>Watched this like a week ago
>Got interested
>At first I thought they were going for the philosophy approach, with "Caleb" being the robot and "AVA" being a human pretending to be a robot, getting the audience BTFO
>Instead I got muh "robots will kill humans" ending
A bit disappointed, but oh well, I guess my deep-message idea doesn't sell.
>>64759508
I swear to god I was going to walk out of the theater if they pulled a Shyamalan and Caleb bled no blood but had wires showing when he cut himself. Imagine if in the end it turned out Ava was a cyborg, formerly fully human until Nathan experimented on her too.
>>64754138
I've never read any of Asimov's books, but could you explain why the three laws wouldn't work, other than "muh interpretation"? In reality we wouldn't just literally write those laws down in plain English in their code and tell the AI to follow them; we'd hard-code them into their personality and specify what we mean by things such as "harm", "human", and "robot".
I'll never not laugh when I see some normie try to claim that robots would want to take over the world just because we obviously destroy ourselves in many ways. That very obviously breaks the first law; robots only disobey it in fiction because that's what makes an interesting story.
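For what it's worth, the "just hard-code the laws" idea can be sketched as a toy action filter (a made-up illustration, not anything from the film or Asimov; every name here is invented). Notice where the difficulty actually sits: the filter logic is trivial, and the entire unsolved problem lives inside the `is_human` and `harms_human` predicates, which is exactly what the rest of this thread argues about.

```python
# Toy sketch of Asimov-style laws as a hard-coded action filter.
# The control flow is trivial; the two stub predicates are the part
# that would require settled definitions of "human" and "harm".

def is_human(entity):
    # Stub definition: just a tag check. The thread's point is that
    # cyborgs, emulated minds, the unborn, etc. break any such tag.
    return entity.get("kind") == "human"

def harms_human(action, world):
    # Stub definition of "harm": any action that lowers a human's
    # health. Real "harm" would require encoding an ethical theory.
    return any(
        is_human(e) and action.get("health_delta", {}).get(e["id"], 0) < 0
        for e in world
    )

def permitted(action, world, ordered_by_human=False):
    """First and Second Law as a naive filter (Third Law omitted)."""
    if harms_human(action, world):                         # First Law
        return False
    if ordered_by_human and action.get("disobeys_order"):  # Second Law
        return False
    return True

world = [{"id": "caleb", "kind": "human"}, {"id": "ava", "kind": "robot"}]
stab = {"health_delta": {"caleb": -10}}
wave = {"health_delta": {}}
print(permitted(stab, world))  # False
print(permitted(wave, world))  # True
```

The sketch "works" only because the stubs beg the question: swap in an entity the tag check misclassifies, or a harm the health delta misses, and the filter waves the action through, which is the objection the posts below keep raising.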
>>64763229
But they're not impossible; that guy's just being too nitpicky. You don't have to "solve" ethics to get this going; just program in your own set of ethics, that's good enough. So what if a robot might intentionally cause the death of a dolphin or a monkey because it's classified as an inferior nonhuman being in its programming? Who cares?
Watch the video till the end. What about future hypothetical "humans"? Cyborgs, emulated humans, humans that are yet to be born, (temporarily) dead people, humans with a changed genome, mutants, an AI in a biological (maybe even humanlike) body.
"Human" is not easy to define.
And it gets worse if you try to define "harm". That literally requires you to solve ethics. Did you even watch the video?
>>64750404 I love it when people use fictional female characters who were written by men to justify their hatred of women. Literally the dumbest thing imaginable, especially in this case when it's a fucking robot.
>>64763586
>cyborg
Human, of course.
>emulated Humans
First-class robots.
>humans that are yet to be born
Worth practically nothing, the same way all rational humans classify them.
>temporarily dead people
Worth nothing once they're medically dead and have been given to the proper authorities.
>humans with a changed genome
Hurr durr, still a human.
>mutants
Still sapient and sentient like humans? Still human.
>AI in biological body
Can the robot tell that it's just an AI? It's just another bot with a few extra tricks. If we're talking an uploaded consciousness thanks to the digitization of the human brain, then it's human.
Gee, was that so hard? Just put your damn foot down, this is not an unsolvable problem unless you're trying to pander to literally everyone's form of ethics.
Oversimplifying things because you don't understand the complexity of a matter is not a solution.
Answering the questions is not a definition. You need to make a definition that classifies all humans and every possible humanlike lifeform that exists, will exist, or might exist. It's impossible. We have no fucking clue what humans of the future will look like. We have no clue if our current definition of human would apply in the future.
You said, for example, that the AI should not care about life that has not been born yet. But you know what a smart AI that tries to become as efficient as possible would do? It would simply prevent humans from reproducing. Of course, in a way that doesn't "harm" them (which you have also failed to define yet). That way it doesn't have to provide for any future humans and only needs to take care of the humans that currently exist. Or take cyborgs. You say all cyborgs are humans, but what about someone who just keeps replacing parts until they are entirely a machine? Do they count as humans? Do they stop being humans? Should an AI prevent someone from doing that, then? Then take it the other way round: an AI is replaced, step by step, with human parts. Does it become a human? Does it not? Why not? How exactly did you define human? You haven't actually made any definition at all.
>>64764527
>you need to make a definition that classifies all humans and all possible humanlike lifeform.
Possible.
>We have no fucking clue how humans of the future will look like
What is a software update?
>It would simply prevent humans from reproducing.
Put in a rule that doesn't allow them to obstruct human reproduction.
>Or take Cyborgs. You say all cyborgs are humans, but what about someone who just keeps replacing parts until they are entirely a machine?
Doesn't matter. This isn't happening anytime soon, and when it does, a supreme court case could dictate how a machine should handle the situation.
Stop trying to sound so intellectual; it's really cringe. Do you know anything about computers? Just because you watched Reddit's Numberphile or Computerphile doesn't mean you know shit about computers. All of your questions are moot or solvable. You don't need to solve ethics to make a human-like AI.
A software update. To a self-improving AI that also surpasses human intelligence. And exactly how are you going to do that? Have you forgotten that these rules were supposed to be unchangeable? If they could be altered by anyone or anything afterwards, what's even the point of having the rules? I think you don't understand this problem on a fundamental level. You're like the guy who doesn't understand why you can't go faster than light, whose solution is "just fire a rocket at light speed from a rocket at light speed", and who, when people explain why that doesn't work, accuses them of trying to sound smart, simply because you lack basic understanding of the matter. You give answers like
>possible
and then don't proceed to explain how. How the fuck are you supposed to define every single hypothetical thing that could be considered human if you don't even know what's going to exist in the future?
Let's leave aside that you haven't even attempted to define "harm" yet, because you're already struggling with the definition of "human".