This scenario will not actually occur. The car will detect potential danger far enough in advance that it never has to make the decision. Self-driving cars are orders of magnitude safer than human-driven cars.
But let's answer the question anyway. It depends on a number of factors. Are we to say that each human life is equal, or are some lives more valuable than others? Should a limousine carrying the president crash in order to save the lives of a handful of random passers-by? Should the person using the car get a say in whether or not the car does this? Should they be able to pay more for self-preservation functionality?
Because people are fucking dumb as fuck there'll be a massive backlash against self-driving cars and they will be extremely heavily regulated, meaning we won't even get a chance to answer many of these important questions.
>>52576131 >But then the drivers are sentenced and judged accordingly on how they acted and what was their reasoning. lolno, all they have to do is claim that they had a "medical condition" and they get cleared, even if they kill a whole family.
>>52575986 >>52576019 This covers why the AI will preserve the car and its passengers first, unless there's an untapped market for selling products to people who are suicidal, but too lazy/chicken to an hero themselves.
You can sentence and judge the people who programmed the car, and there's plenty of precedent for this too. In virtually every industry where processes are automated, this shit already fucking happens. It is literally no different whatsoever.
IMHO the car should ask the passenger for attention and possibly to take control, even if it can brake. If it can't, just stay on the road and let da world burn. If people don't keep crossing the road carelessly, this won't happen in the first place.
Such a car shouldn't be going so fast that it can't stop properly anyway; it should anticipate things long in advance. If it can't brake in time, it's surely going too fast to turn, so better to continue straight ahead. Also, don't assume a collision with a wall is deadlier than one with a person.
Regardless you're an idiot if you want your car to drive itself
>>52576365 the problem is that OP is always a troll in these topics and these "hypothetical" scenarios are bullshit because read the fucking thread you cunt >muh scenario doesn't happen >b-but muh scenario doesn't happen >b-but p-please imagine muh scenario doesn't happen >b-but... go fuck yourself
>>52575954 An autonomous vehicle should never make a decision resulting in its passenger's death or injury.
This scenario will never happen: in an area with so many pedestrians, the vehicle would have to adjust its speed to maintain a minimal safe braking distance.
But if MUH SCENARIO happens, then the vehicle should calculate the optimal way to minimize injuries to the pedestrians, i.e. slam the fucking brakes, aim the front of the car where there are the fewest of them, and pray for the best.
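The "adjust speed to braking distance" idea above is simple physics; a rough sketch, where the friction coefficient and speeds are illustrative assumptions, not real vehicle parameters:

```python
# Constant-deceleration stopping distance: d = v^2 / (2 * mu * g).
# mu = 0.7 (tyre friction) and the speeds below are illustrative assumptions.
def stopping_distance(speed_ms: float, mu: float = 0.7, g: float = 9.81) -> float:
    """Metres needed to brake from speed_ms to a full stop."""
    return speed_ms ** 2 / (2 * mu * g)

def safe_speed(clear_distance_m: float, mu: float = 0.7, g: float = 9.81) -> float:
    """Highest speed (m/s) from which the car can still stop within clear_distance_m."""
    return (2 * mu * g * clear_distance_m) ** 0.5

# From 50 km/h the car needs ~14 m to stop; with only 10 m of clear road
# ahead it should already have slowed to ~42 km/h.
print(round(stopping_distance(50 / 3.6), 1))  # metres
print(round(safe_speed(10.0) * 3.6, 1))       # km/h
```

So "slam the brakes" only works if the car was already driving slowly enough for the space it could see.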
>>52576131 What are you talking about? Roads are governed by a code of conduct, and self-driving cars will respect that code. If a pedestrian doesn't follow the code and gets killed, it's entirely his fault; again, it has nothing to do with self-driving cars.
>>52576423 The thing with all these scenarios is that they're one-in-a-million occurrences. It's a way of worrying about the small stuff when thousands of people die every month to human-driven cars. A 9/11 happens every month and people are fretting about the one-in-a-million accidents.
>>52576449 literally this problem doesn't arise, since the rules of driving are pretty fucking formally defined, code car to follow said rules. Why do we even care if some nigger jumps infront of the car and gets killed because he's a nog ?
the scenario in the OP is shit and is only a meme; it will never happen
Why do people think this would ever be a problem? If it can stop before hitting a parked car, it can stop if there is a group of people in the way. If a group of people all decide to jump in front of a car, swerving to the side would be a human response, not something you would program into a car. Stopping as fast as it can limits the damage to both the car and the people in front; swerving to the side could potentially hit a lot more people.
>>52576507 >shitloads Not even a small percentage of the actual accidents that happen daily; not worth mentioning. Also, you know who else is involved in traffic accidents and not at fault? Almost 50% of all people involved in traffic accidents are not at fault.
I think the bigger question is going to be who is liable when someone is inevitably killed by a self-driving car. It might be in every manufacturer's contract that you are solely responsible for what happens.
Also, what about laws regarding driver distraction? Are you free to use your phone or eat a sandwich while your car drives you around? That's going to be hell for police forces to deal with if they don't know the car is self-driving when they see you doing something dumb.
>>52576021 I'm sorry, but what? The president is just another faggot, like the rest of us. He's no more special than me or you. His job is literally to make sure we, the people are taken care of. If anything, he should be the one to do the sacrificial limo driving, not random people. Here's my question to you: are you okay with giving up your life for the life of the president in such a scenario?
No disrespect to the president, but I'm certainly not. I'm still young and have shit I want to do (75% of which is masturbating, but still).
I'm talking about a malfunction occurring: the car doesn't stop and you hit another car. Is the manufacturer solely responsible? We are surely going to see some kind of injury involving a malfunctioning self-driving car.
>>52576056 Those sensors are rendered useless in the slightest bit of rain/snow/ice/fog. They're not some sort of magical device you put into a car and go "well, I'm totally safe now". You're also forgetting that code can have errors and bugs in it. Think about how complex something like Windows is. Now look at how many bugs it has; you can basically quadruple that (if I'm being generous) to get to where this shit will be. Also, electronics malfunction all the time, so what happens when the chip that is used to relay the info of a malfunctioning component also malfunctions while going 100 km/h on the highway?
>>52576594 This. Planes and trains require someone to be near the wheel at all times and the pilots are responsible for being ready to manually operate at a moment's notice. That's probably what they'll do legally until the technology reaches some next level shit years from now.
>>52576458 >>52576539 That's a really fucking stupid way of resolving the problem. One of the main selling points of computer controlled vehicles is that they can process information and react quicker than an average human; why defer the decision to someone who will react slower, even slower than usual since they're likely not as aware of the current conditions as the driver of a regular car?
The car should probably engage boosters and fly over the pedestrians at this point, but the geniuses at Google X don't want to work on that problem. Driving, being something that humans are capable of, and enjoy doing, is CLEARLY the problem we need to be solving here.
In all seriousness, however, the car should never crash into a wall, because it has no way of knowing what's beyond the wall. You could very well end up killing more people that way.
>>52576495 With human-driven cars there have been situations where a human has had to decide between two outcomes in which different people would die. What makes you think this will be any different with self-driving cars?
Assuming the car perfectly follows the law and does everything perfectly, this kind of scenario would be 100% caused by the mistake the pedestrians made when they decided to jaywalk or jump in front of me.
Why the fuck should I die if I did nothing wrong, just because some retard put himself in a dangerous situation?
An example of this would be a train... As you already know, it's very hard for a train to brake in a short amount of time without sliding many meters further. Since trains only run on their tracks, in case someone is about to get run over, it would be absurd to implement some technology to derail the train (putting the lives of the passengers and other bystanders at risk) just because some retard decided to put himself in that situation. Even if the train only carries the conductor, and he's a scumbag with a terminal illness, and the pedestrians are the top 100 smartest and most gifted kids in the world, and the conductor's life is clearly less valuable than that of the pedestrians, it would be an injustice to kill him because of the decision of the kids (or whoever put them there).
It's not a matter of "whose life is more valuable", but more of a "who deserves to die in that situation".
Also, not only is this an extremely rare occurrence, but nobody would buy a car programmed to kill its passengers.
>>52577311 Also the trolley dilemma is a completely different situation, since the people tied on the tracks are all "equally" deserving to die, so it's better to only kill one instead of many, because (unless specified) they're all the same.
The other dilemma about throwing a fat guy from a bridge to save the workers is also a moot point. Unless the fat guy is responsible for the situation, there is no reason to sacrifice him for the sake of those people, who aren't there because of him. It's the same as the self-driving car problem... Why on earth would you kill someone innocent to save the lives of people who put themselves in that dangerous situation (or were put there by someone else who's not the fat guy)? It's ridiculous.
>>52578097 An automated car will handle a road ice emergency better than 99.9999% of all drivers. The only people that do better are the two dozen drivers who are on the level of professional rally drivers.
>>52576472 Not trump guy, but why the fuck are you tripping right now? The only reasons to trip are 1. You are OP, so inherently you are a faggot, and 2. You have a meaningful comment/question that requires others in this thread to verify who you are. And in case 2, you only trip for the comment/question/follow-up that needs a trip; otherwise you post as anon.
>>52575954 It's funny to think about when you consider all the people who buy SUVs because they "feel safer". IE, if they are involved in a collision with another car, the people in the other car will die, not you.
>>52576356 >You still have something called "engine braking" but you of course know shit about youre and ameriblubber driving autotranny kek As a euro poor you would be unaware that modern automatics can downshift applying an engine brake.
>>52578899 It should be the opposite dumbass. Humans on the motorway where you don't have to pay much attention, and AI in the street where the lightning-fast reflexes of the car keep your slow ass brain from making accidents.
In this particular case AI is literally superior in every way to the human brain, and the more self-driving vehicles are on the road, the safer the roads will be.
>>52578931 the difference between your old as fuck post you linked and the situation being described is that the self-driving car COULD automate an engine brake, considering it has control over all parts of the car.
A normal human doesn't have direct control over the automatic transmission and thus can't perform a downshift unless they are already braking.
>>52579114 >Driving through the city is stimulating enough to stay focused. So is motor way driving.
But I'd rather not have to do it. I could be working, eating, playing games or shitposting on 4chan instead of looking at highway for 3 hours straight. And I could be doing the exact same while driving through city traffic.
Oh, and if I don't want to pay for parking I can just tell the car to fuck off home and pick me up again when I'm ready.
>And good luck waiting for self-driving cyclists, pedestrians, playing kids, dogs, etc. They already "drive" themselves you fucking idiot.
>>52579120 >Except when the AI makes a mistake it can't be held legally responsible Of course it can. The company that made it is held responsible, just as with any commercial machine that causes a death.
>It can't make such decisions according to basic morality, ethics. Of course it can.
> Trains and aeroplanes both default to driver when shit hits the fan for same reason No, they are just cheaper and safer than being automated for the time being.
>>52579114 It doesn't matter. Obviously AI should be used in both situations, since it's far superior to the human option.
>>52579120 That's a non-issue. There are a lot of dangerous automated things that kill people already. I don't see how that's different. Also, these cars perfectly follow the law, and that's a one-in-a-billion case.
Plus, morals and ethics aren't an issue if the car makes no mistakes. See: >>52577311
>The company that made it is held responsible Companies are different legal entities before a court of law than private individuals; the same laws don't apply, and a crash is a civil issue (unless there's a malfunction, but that's not the same thing; even if the AI made the same decision as a driver would, it still couldn't be tried, because it's a programmed blob, a non-entity)
>>52579202 >There are a lot of dangerous automatized things that kill people already. I don't see how that's different. Because civil people != company. Because a car crash is a civil issue. Because even if it DID NOT malfunction, the legal system and ethics would still have to be applied, and THEY COULD NOT BE, since it's fucking A.I.
>>52579296 >Yes there is when you crash and you need to follow the insurance procedure and then maybe go to court if there's a claim for the other party. And what's the problem with this? If the AI was at fault, then the company that made it has to pay out.
>Only if the device was malfunctioning, which isn't the case here since crashes happen. Nonsense. If the AI breaks the law, then the company is liable. Just like if I buy a toaster and it blows up in my face. Maybe the designers didn't really care if the toaster blew up sometimes; maybe it's not a "malfunction" to them. Doesn't matter, they are liable by law. If a crash happens because the AI broke the law or acted in an irresponsible way, then the company is liable for the damages. If a crash happens and the car didn't break the law and acted as well as any human could be expected to act, then it is not at fault, and neither would a human driver be.
>and A.I can't be held responsible The company can. That's like saying companies can't sell automatic toasters, because if a toaster explodes and kills someone, it cannot be held liable because it is not a person. The company is held liable.
>>52579376 >If the AI was at fault, then the company that made it has to pay out. Because of your fucking ignorant assumptions that 1. crashes never happen, and 2. if the AI crashes, it's a malfunction so you can sue the company instantly. The A.I. would need to be able to handle crashes (they happen), but again it *can't*.
>Nonsense. If the AI breaks the law, then the company is liable Yes, if it was clearly malfunctioning, like doing things it wasn't made or supposed to do in its daily use. This is how it works with every automated thing these days in general, but crashes happen. They aren't a malfunction of the car, they are a malfunction of the driver (if the car itself was functional and didn't have any malfunction from the factory state leading to this)
>and the car didn't break the law and acted as well as any human could be expected to act, then it is not at fault, and neither would a human driver. So nobody is responsible? Great logic and application there. The A.I. can't make this decision because it can't be held responsible and thus has no rights. Until A.I. gets a standing in ethics and law, it can't make these decisions.
>>52579434 If the accident is caused by someone else, then it's his fault. If it's caused by a mistake made entirely by the AI, then it's the fault of those who made the defective AI. Just like with all dangerous machinery.
>>52575954 People don't walk in the middle of a highway. And on streets where there are pedestrians, the self-driving car respects following distances and the speed limit, letting it stop before hitting a pedestrian.
>>52579434 >1. Crashes never happen. ?? >2. if AI crashes, it's malfunction so you can sue company instantly. Are you talking about the AI program crashing, or the car crashing? If the AI software itself crashes the computer, then the company is liable. Obviously the company will want to make this happen very rarely, and will have insurance to pay out for the rare times that it does. Computer crashes can be made incredibly unlikely when there is a need for it.
>they are malfunction of the driver But the driver in this case is an AI made by the car manufacturer, so they will be liable. I'm not sure what you're trying to say.
>The A.I can't make this decision because it can't be held responsible and thus has no rights An AI is just a piece of software a company makes. If it fails to follow the law, then the company is sued. AI rights have nothing to do with this at all.
>>52579434 >Because of your fucking ignorant assumption of >1. Crashes never happen. They do, but it's almost always 100% human error.
>2. if AI crashes, it's malfunction so you can sue company instantly. YOU can't sue the company, but the people that were harmed can. This isn't a problem of the person behind the wheel, it's a problem of the vehicle itself.
>The A.I would need to be able to handle crashes (they happen) but again it *can't*. Car A.I.s have been proven to be WAY more adept than humans at handling and avoiding potential crash scenarios. The biggest problem is going to be the transition period between nobody having self-driving cars and everyone having them, and the idiots in that window who think they know better. Once everyone has them, the roads will become vastly safer, because the cars will be able to directly talk to each other about the maneuvers the others are doing.
>>52579500 >So then you'd be suing the 'driver' that 'did not drive' the car which gets laughed out of court. It's the company that made the AI that gets sued. Just like if an automated robot today killed someone by behaving unsafely, the company that made it would get sued.
>>52579500 Nobody is talking about suing the car, dumbass.
Look at this gif. It's an automated machine used to cut stone. Imagine if a bug caused it to act uncontrollably and it ended up cutting someone's head off. What would happen then? We would find THE PERSON who's responsible for that malfunction and take them to court.
>>52579487 >If the A.I ever once crashes it was the manufacturer's fault. Jesus Christ, don't you get that car crashes happen daily for various reasons? If you assume every crash by an A.I. is a malfunctioning A.I., then the idea is dead on arrival and you can't ever hope to have A.I.-driven cars.
The A.I. could crash exactly like a human driver would, and it'd still create a fucking huge black area in ethics and law. It is a non-entity of 1s and 0s that can't be held responsible because it has no rights or obligations.
>>52579518 >YOU can't sue the company, but the people that were harmed can. You don't sue the fucking car company when you crash a normal car. You sue the fucking person that was responsible. It is a civil matter and you are mixing two fucking huge, different aspects of law.
Sure, if the car crashed because it had fucked-up brakes straight from the company, you can then sue the company (after you've been sued by whoever you crashed the car into)
>Car A.I.s have been proven to be WAY more adept than humans in potential crash scenarios and avoiding them.
LITERALLY DOES NOT MATTER
HOW BINARY IS YOUR THINKING
1. It has no rights 2. It has no obligations 3. It can't be held responsible 4. A non-driver passenger can't be held responsible 5. In a car crash scenario (which happens) this would cause a massive problem. 5A. CAR CRASHES HAPPEN DAILY AND THEY AREN'T ALL CAUSED BY MALFUNCTIONING CAR DESIGN, WHICH YOU SEEM TO IMPLY
>>52579543 >Just like if an automated robot today killed someone by behaving unsafely, the company that made it would get sued. So this is your proposal? Sue the companies? That's a brilliant idea: change a normal occurrence in modern driving into something you can instantly sue the company over.
>>52579580 >If the A.I ever once crashes it was the manufacturer's fault. I did not say that, you complete moron. I said that in the rare case in which it's the AI causing the incident, and not other reasons, the manufacturer is held accountable for the mistake made by the machine.
>>52575954 I'll bet something like this could be made law:
If a self-driving car gets into a dangerous situation where people in the car and outside the car may get injured or killed in an accident, then people in the car should be protected at top priority. If the safety of people in the car can be assured beyond a reasonable doubt, then the safety of people outside the car is of top priority, but not at the expense of the safety of people inside.
If something like this pic happens and the car kills 20 people to save everyone inside the car, then tough shit on them. The focus should be on preventing something like this happening in the first place.
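The proposed rule is a two-level priority ordering, which could be sketched like this (purely illustrative: the maneuver names and risk scores are made up, not from any real system):

```python
# Two-level priority from the proposed rule: first minimise risk to people
# inside the car, then, among equally safe options, minimise risk to people
# outside it. Maneuver names and risk scores are hypothetical.
def choose_maneuver(options):
    """options: list of (name, occupant_risk, bystander_risk), risks in [0, 1]."""
    min_occupant_risk = min(occ for _, occ, _ in options)
    safest_for_occupants = [o for o in options if o[1] == min_occupant_risk]
    # Tie-break among occupant-safe maneuvers by bystander risk.
    return min(safest_for_occupants, key=lambda o: o[2])[0]

print(choose_maneuver([
    ("swerve_into_wall", 0.9, 0.0),
    ("brake_straight",   0.1, 0.6),
    ("brake_and_veer",   0.1, 0.4),
]))  # -> brake_and_veer
```

Note how the wall option loses immediately, no matter how safe it is for bystanders, which is exactly the "tough shit on them" outcome described.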
>>52579621 >I said that in the rare case in which it's the AI causing the incident Your A.I. could be purely innocent of the incident, because, you know, ACCIDENTS USUALLY HAVE AN INNOCENT SIDE BUT STILL NEED TO BE TRIED IN A COURT OF LAW.
?!?!? THERE ARE TWO SIDES TO A LEGAL MATTER: THE PLAINTIFF AND THE DEFENDANT
>>52576387 If your car has to use ABS, it means you're using 100% of your traction for braking and the system is preventing you from going over that. Turning would require additional traction to perform the turn, taking away from the car's ability to stop, ABS or not.
This is why hard braking, even with ABS, invokes either a slide or an understeer condition. You have to balance both, and ABS is only going to prioritize one of them for you.
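The traction budget described here is the classic "friction circle": braking and cornering draw from the same mu*g limit. A rough sketch, where mu is an assumed dry-asphalt value:

```python
import math

MU, G = 0.9, 9.81        # assumed dry-asphalt friction and gravity
GRIP_LIMIT = MU * G      # maximum combined acceleration, m/s^2

def lateral_grip_left(brake_decel: float) -> float:
    """Lateral acceleration still available while decelerating at brake_decel."""
    if brake_decel >= GRIP_LIMIT:
        return 0.0       # ABS holding the limit: nothing left over to turn with
    # Friction circle: sqrt(limit^2 - braking^2) is what remains for cornering.
    return math.sqrt(GRIP_LIMIT ** 2 - brake_decel ** 2)

print(round(lateral_grip_left(GRIP_LIMIT), 2))        # full braking -> 0.0
print(round(lateral_grip_left(0.5 * GRIP_LIMIT), 2))  # half braking leaves some grip
```

Which is the post's point in numbers: at maximum braking there is zero traction left for steering, so "brake AND swerve" is not on the menu.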
>>52579693 >would be big no no to companies? Why would it? Companies get sued every day. They wouldn't even need to be sued, they'd just have to pay for the damages they caused. They'd only need to go to court to dispute the damages. Companies won't care so long as they still make a profit in the end.
>Crash happens because of the AI >SUE THE COMPANY Makes total sense, and it's what happens today.
>>52579580 You seem to be highly angry and have ignored the point of my post, which is:
Car A.I.s are highly unlikely to get into the situation as described in the OP, or any other crashing scenario. They are proven better drivers than humans at the absolute worst because they can react faster and have vastly more information at hand than a human would at any point during driving. If a car A.I. does end up in a crashing scenario, it is far more likely a fault of the car's mechanics failing than the A.I. failing to react.
Car crashes do happen everyday, which I agreed with you in my post, but you also didn't read that they are by far and away human error and very rarely mechanical failure. Remove human error from the problem and you suddenly remove 99.9% of crashing issues. The other .1% is removed by manufacturers being forced into tighter testing restrictions, and having the A.I. do self-checkups and giving the driver strong suggestions or outright taking you to a local mechanic so they can be fixed before they become a problem.
As a side note, I think you should take a rest from this board, it's clearly hurting your brain to think deeply about these problems.
>>52579658 How the fuck can you not understand that we're talking about a specific situation in which the accident is caused entirely by an error of the AI? If the accident is caused by another car crashing into it, a brick falling on it, or a fucking spaceship shooting lasers at your self-driving car, then it won't be the car's fault, and the legal system will prosecute whoever made the mistake that caused the accident.
IF the accident is ONLY caused by a bug in the AI (let's say failure to identify someone crossing the street and running him over), then it's the fault of whoever is responsible for the programming of said AI because it's a mistake in giving the machine the instructions to properly act in that situation.
The car is just a tool, and of course it's innocent, but if the car acts in an unpredictable manner and someone gets hurt, it's the fault of whoever got the car to act in such way in the first place.
At least in the U.S. pedestrians have always legally had the right of way. A self-driving car which adheres to all of the existing regulations would then need to take this into consideration, or the regulations re-written to give them the right of way instead.
That is why this is in debate. The "greater good" is really what human drivers are expected to follow legally, but we all know that's in conflict with our own priorities. Now with self-driving cars we might not have that control anymore, and that is hard to accept.
>>52579749 >You don't sue the fucking factory that made the car when driver crashes the car. of course you do if the factory made the driver.
>Or you could pass that as retarded law and then have car companies vanish from your country. People will be happy to pay more money for self-driving cars. Insurance to cover accident payouts will be a small portion of the car cost.
This is funny. The obvious action (stopping the car) is not shown as an option. You learn this stuff in middle school. If I let go of an object, it will hit the ground; that is a pretty confident prediction. You do not need many samples to know the direction and speed of a moving object, so predicting where a person will be is not that hard.
But we don't even need to be this precise. The cars are already equipped with sensors that can detect the distance to an object, and just having an object in the way means the car should either stop or slow down. They already do this when there is a car in the way. On a highway, the car will match the speed of the next car, avoiding a collision. In the city, it will stop if there are people in the way. We don't need to agonize over these situations; driving is not such a complex task that it can't be solved in the next few years.
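That "stop or match speed" behaviour is just a distance check. A toy version, with a made-up 1.5x safety margin and illustrative numbers (real adaptive cruise controllers are far more involved):

```python
# Toy follow/stop logic: brake if the sensed gap is inside our stopping
# distance (plus a margin), otherwise close in on the lead object's speed.
# The 1.5x margin, mu, and all numbers are illustrative assumptions.
def control_action(gap_m: float, own_speed_ms: float, lead_speed_ms: float,
                   mu: float = 0.7, g: float = 9.81) -> str:
    stopping_distance = own_speed_ms ** 2 / (2 * mu * g)
    if gap_m <= 1.5 * stopping_distance:
        return "brake"
    if own_speed_ms > lead_speed_ms:
        return "match_speed"   # reduce speed gently until speeds are equal
    return "cruise"

# Pedestrian standing 8 m ahead at city speed -> brake hard.
print(control_action(gap_m=8.0, own_speed_ms=14.0, lead_speed_ms=0.0))
# Slower car far ahead on the highway -> just match its speed.
print(control_action(gap_m=120.0, own_speed_ms=25.0, lead_speed_ms=20.0))
```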
And as for blame when there is an accident? Why not keep the rules the same as they are: the owner of the car must ensure it is driven by someone who has a license (a human), or the owner will be to blame; and if someone is sitting in the car, that person will be to blame for not stopping the car or taking it in for service or whatever.
This has never been a problem for humans, why would it be a problem for a computer?
>>52576399 No, asstard, the situation will happen. If it has a 1/100,000,000,000 chance of happening, then in 50 years, when self-driving cars are the norm, it will happen once a month or less worldwide.
This question is intended to address situational ethics, and how a machine should evaluate human value.
Personally, I think that if the car is in a situation where either the owner or a bystander is killed, it should choose the owner.
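The once-a-month-or-less figure above roughly checks out, under an assumed trip count (the ~1 billion car trips per day worldwide is a round guess, not a sourced statistic):

```python
# Expected worldwide frequency of a 1-in-100-billion-per-trip event.
# TRIPS_PER_DAY is an assumed round number, not a sourced statistic.
TRIPS_PER_DAY = 1e9
P_SCENARIO = 1 / 100_000_000_000   # the odds quoted in the post, per trip

events_per_month = TRIPS_PER_DAY * 30 * P_SCENARIO
print(round(events_per_month, 3))  # roughly one such event every few months
```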
Risk of driving, bitch. People die every day, you're not special. If you die, no matter how, we will move on fine without you and quickly forget that you ever existed.
>>52579745 >Car A.I.s are highly unlikely- You can't vouch for that. You can't *know* that. And it doesn't matter: it is the argument in the OP, it is there, we talk about it. And car crashes happen daily. Someone is responsible, someone is innocent, or maybe both are responsible. But because of the nature of A.I. (it has no rights or obligations and can't be held responsible, duh) it can't be tried in a court of law.
>They are proven better drivers than humans at the absolute worst because they can react faster and have vastly more information at hand than a human would at any point during driving. Doesn't matter. They still can't be held responsible because they have no rights. They cannot be part of any legal system as own entity, like humans can.
>Remove human error from the problem and you suddenly remove 99.9% of crashing issues Remove human error and you don't have anyone driving the car, because if you have no rights, you have no obligations, and you can't drive a car. Because to drive a car, you must be 18 years old, have a driver's licence, be without any warnings in that time, etc.
>How the fuck can you understand that we're talking about a specific situation in which the accident in caused entirely by an error of the AI? What error? OP proposes a question, how should A.I. make decision in how to react in a crash, and my answer is it can't. It can make exact same choice as human and it still won't change the issue.
>IF the accident is ONLY caused by a bug in the AI (let's say failure to identify someone crossing the street and running him over), then it's the fault of whoever is responsible for the programming of said AI because it's a mistake in giving the machine the instructions to properly act in that situation. Yes, I'm 100% sure suing car companies for daily, common occurrences won't drive anyone bankrupt or scare them away from the market. Not a sustainable solution.
- You weren't driving too fast for conditions - You weren't driving faster than the posted speed limit - Your judgement was not impaired - You did make reasonable attempts to avoid hitting the pedestrian
Even if you think you hit all those points on the nose, if the pedestrian attempted to sue you they might very well win so long as they can prove they were not impaired.
>>52579833 Right, but say a car was in a situation where it has to decide whether to kill someone or not. Let's say a Google car. Should it be allowed to decide based on the Google score of the person? While this question seems easy to answer, what about a Bing car? Or a Yahoo car? If a car can decide over your future, it should rather be a fucking Transformer, bro
>>52579861 You are assuming several types of sensors fail, the brakes fail, and the car is invisible, and yet it can still change direction. This is very unlikely, and even if it happened, the car shouldn't "evaluate the value of human life"; it should just stop or go straight. If the big group of people saw this car coming at them, they would jump to the side, as most humans have a desire to not die. If the car swerves to the side, it will hit all the people who got out of the way.
>>52579927 >You're about to get roasted by a semi >In the last 2 seconds before getting plastered, over the sound of blaring truck horn you hear a ding as the car hands over manual control >It takes you one second to realize you have to react >By the time you make a decision it is already too late.
>>52576365 man i would love to have a car that can pull several G >>52576038 air brakes exist, and they're designed to fail safe >but what if tire will get punctured run-flats are pretty common >but what if software will decide to go for WOT ignition is cut the moment you press the brake pedal (at least it should be) >but what if ECU will fail there's another one to take over its tasks
>>52576053 philosophy majors love wasting time on "fucking hypothetical" questions with no connection to the real world. Now could you fucking tell me what happened to my order? I've been waiting for like 50 minutes
>>52580024 Your regulation choice was to sue companies for something that happens hundreds of thousands of times a day and then flood the court with person vs. company cases, while driving down the companies that manufacture cars.
The simple solution is to keep the entire fucking thing disabled until we create A.I. brilliant enough to match the human mind
>>52580020 >Yeah, which is why it is worth investing in a dash cam like that guy did so you can prove when someone did something foolish or attempted insurance fraud. I meant in a situation where it is perfectly clear what happened, not a case where the driver is wrongly accused of misconduct because there is no proof of the opposite. Would I still risk trouble if the whole thing was filmed?
>>52579887 Stopping the car is still the only option. If the car is unable to stop fast enough to avoid killing people, it should not be on the road.
If we are talking about >>52579973 then the car should still stop. It would be a bit more dangerous as there is no person inside the car who can supply aid to the guy who got hit but if the car could call 911 with an automated message, that would be better than 90% of people anyway
You never implied it was filmed, but you would be in significantly less trouble if you had it on film. When you *don't* have it on film, it gives the pedestrian and their lawyer room to dramatize the whole situation. Without hard evidence on your part, it makes it hard to confirm otherwise.
Sometimes you luck out and some witnesses blame it on the pedestrian for you, as well.
>>52580056 >Your regulation choice was to sue companies for something that happens hundreds of thousands of times a day No, I didn't say that. I said sue the company if the accident has been caused by the car's AI. If somebody jumps in front of my self-driving car like in >>52579973 then it's not the company's fault. If the car decided to disable the brakes and someone ends up dead, then it's the company's fault for having programmed a defective car that caused the accident. If the accident hasn't been caused by a decision made by the AI, then it's not the company's fault.
>>52580092 >You never implied it was filmed Sorry, my fault.
>but you would be in significantly less trouble if you had it on film. When you *don't* have it on film, it gives the pedestrian and their lawyer room to dramatize the whole situation. Without hard evidence on your part, it makes it hard to confirm otherwise. >Sometimes you luck out and some witnesses blame it on the pedestrian for you, as well.
I see... But in case you had it on film, what would happen? Would they just close the case right away, or would you still risk something?
>>52580088 The greater good is a utilitarian argument, which is a very steep slippery slope.
>>52580096 >I said sue the company if the accident has been caused by the car's AI. Car crashes happen daily. In droves. I can't even guesstimate, but probably in the millions? That'd flood the car company, they couldn't handle that amount of lawsuits, and neither could any court (and this would go through court, since it involves a civil person and his rights)
You'd practically kill the government's problem-solving system and the car company. >If the accident hasn't been caused by a decision made by the AI, then it's not the company's fault. ? Then nobody is at fault? Good lord, you could just crash around freely and collect insurance money.
>>52580116 Yeah congratz it can do fast math, doesn't pass off as a human being though. Doesn't have rights or obligations.
>>52576021 >Self-driving cars are orders of magnitude more safe than people-driven cars. This doesn't matter though. People will care a lot more about randomly being killed by a big metal box on wheels they have no control over, when they're being told it's completely safe, than about some retard who falls asleep at the wheel. It's not good enough for them to be statistically safer.
>>52580146 >Car crashes happen daily. In droves. I can't even guesstimate, but probably in the millions? That'd flood the car company, they couldn't handle that amount of lawsuits, and neither could any court (and this would go through court, since it involves a civil person and his rights) >You'd practically kill the government's problem-solving system and the car company. This depends on what kind of regulation will be in place. Also, it wouldn't take very long for people to understand that these cars record everything, and that if the car isn't at fault it would be a matter of seconds to prove.
>? Then nobody is at fault? Good lord, you could just crash around freely and collect insurance money. Now you're just trolling. Did I say it's nobody's fault? It's obvious that if the car didn't cause the accident it's not the car's fault, BECAUSE IT'S SOMEONE ELSE'S. It would be the fault of whoever caused the accident. Are you retarded son?
>Yeah congratz it can do fast math, doesn't pass off as a human being though. Doesn't have rights or obligations. And you plan to give rights and obligations to machines once they're advanced enough?
>>52580247 >It would be the fault of whoever caused the accident. A.I. doesn't have rights or obligations, can't be responsible. The passenger was a passenger, can't be responsible. Making a law which makes the company responsible for something that happens hundreds of thousands of times a day would clog the governmental problem-solving system and drive down the car companies (plus, CRASHES do happen, and unless the car legitimately malfunctioned, it has never been the company's fault)
>And you plan to give rights and obligations to machines once they're advanced enough? If in some near future A.I. becomes advanced enough to compare to the human mind in its complexity. One that can develop its own moral codes, think on its own, understand an image of itself, understand itself, see itself, etc., a million other tests, then yes. What else?
>Also it wouldn't take very long for people to understand that these cars record everything and that if the car isn't at fault it would be a matter of seconds to prove. And it'd still cause a vacuum in legality and ethics. It's not about whether it's better. That literally does not matter
The car makes an attempt to save as much life as it can, but the value of the passengers' lives is above all others. If the choice is between the passengers dying and someone else dying, then someone other than the passengers dies, picking whichever outcome gives the fewest deaths.
Or would YOU throw your life away in a situation like this? No, you wouldn't slam your car into a wall to save other people. Of course you're going to TRY not to run over other people, but if even with your best attempts you send a couple of niggers airborne, so be it.
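The priority order this poster describes (spare the passengers first, then minimize total deaths) reduces to a two-level sort. Here is a hypothetical sketch of that rule; the option names and fields are made up, and this is one commenter's proposed policy, not an established one:

```python
# Toy model: each candidate maneuver has an estimated death count and
# a flag for whether the passengers are among the dead.
def choose_maneuver(options):
    """Pick the option that spares the passengers, then minimizes deaths."""
    # Tuple key: False sorts before True, so any passenger-sparing option
    # beats any passenger-killing one; ties break on fewest deaths.
    return min(options, key=lambda o: (o["kills_passengers"], o["deaths"]))

options = [
    {"name": "hit_wall",   "deaths": 2, "kills_passengers": True},
    {"name": "swerve",     "deaths": 3, "kills_passengers": False},
    {"name": "brake_hard", "deaths": 1, "kills_passengers": False},
]
assert choose_maneuver(options)["name"] == "brake_hard"
```

Note the sort never trades the passengers away: even if "hit_wall" had the lowest death count, the tuple ordering would still reject it.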
>>52580333 >A.I. doesn't have rights or obligations, can't be responsible. I KNOW YOU RETARD. Stop making this argument. I didn't say that the car is held responsible. I said that the person who is responsible for the car's behavior (AKA, the manufacturer) is the responsible one.
>Making a law which makes the company responsible for something that happens hundreds of thousands of times a day would clog the governmental problem-solving system and drive down the car companies (plus, CRASHES do happen, and unless the car legitimately malfunctioned, it has never been the company's fault) But it doesn't happen thousands of times a day. You're counting all accidents, but you have to take out those not caused by the AI's decisions. Accidents caused by the AI's decisions would be so rare that it's really not an issue at all.
>>52580333 >If in some near future A.I. becomes advanced enough to compare to the human mind in its complexity. One that can develop its own moral codes, think on its own, understand an image of itself, understand itself, see itself, etc., a million other tests, then yes. What else? If we make humanoid robots that can be exactly like us, then they will behave like us, and depending on their place in society they might have rights and obligations, yes. But this isn't what we're talking about. We're talking about simple machinery here, and just like every piece of dangerous automated machinery currently in use, we can't postpone their implementation until we have perfect AI. Especially since a lack of implementation would keep them only in laboratories, and therefore there would be no incentive to finance their development, effectively stopping their progress.
We're talking about normal AI here, and I don't see how a car's AI would be different from that in a self-driving subway train like those that already exist in many cities.
>And it'd still cause the vacuum in legality and ethics. It's not about if its better. That literally does not matter Why? Once people realise that suing a perfectly-driving automatic car for an accident that they caused is completely useless, nobody (except a few literal retards) will do it. The "flooding the government with lawsuits" is a problem that will fix itself.
>>52580490 >I said that the person who is responsible for the car's behavior (AKA, the manufacturer) is the responsible one. Yes, flood the legal system with complaints, flood the company with requests, drive down both systems.
>But it doesn't happen thousands of times a day. Car crashes probably happen a fuck ton of times a day and even currently most legal systems around Europe for example are slow.
>You're counting all accidents, but you have to take out those not caused by the AI's decisions. What? If it wasn't a self-driving A.I. that caused it, it was a malfunction (and another request) or another A.I. driver (another civil vs. company case, another legal case, another request for the company)
How thick are you
>we are talking about normal AI which can't do these tasks because it has no rights or obligations to do them.
>>52580540 It hasn't fixed itself yet, and this is without hundreds of thousands of civil vs. company cases caused by crashes the A.I. calculated.
>Once people will realise that suing a perfectly-driving automatic car for the accident that they caused Who are 'they'? What are you speaking of even?
>>52580490 >I don't se how a car's AI would be different that that in a self-driving subway train like those that already exist in many cities. Except they're not self-driving, at least not here; every time someone is trying to kill themselves the brakes come from the driver.
>>52580545 >Yes flood the legal system in complaints, flood the company with requests, drive down both systems. This won't happen for the reasons I talked about here >>52580540 and in the rest of this post...
>>52580545 >Car crashes probably happen a fuck ton of times a day and even currently most legal systems around Europe for example are slow. Total car crashes don't matter here. We're talking about who would be held accountable for a crash caused by the AI's decision. Every other crash type is irrelevant because it would just be solved normally, as it would be the direct result of a man-made decision.
>What? If it wasn't a self driving A.I. that caused it, it was malfunction or another A.I. driver No you fucking moron. It's the opposite. If the accident is caused by a mistake that the AI made, then it's a malfunction. If the AI drove perfectly and an accident happened, then it's something else depending on the accident. Could be a drunk driver going too fast, could be a bridge falling on top of it, could be a jaywalker like here >>52579973, etc. As long as the self-driving car drove perfectly (and they always do), no accident will occur unless something that doesn't depend on the AI happens (like one of the examples I made).
>which can't do these tasks because it has no rights or obligations to do them. Pic related. Of course it can do it, because the countless tests proved so, and in the rare case in which a bug caused them to act unpredictably, they will be seen exactly like every other dangerous automated machine (which, guess what, lacks rights and obligations too, and yet has been used for decades).
>>52575954 I'd like to add that crashing into the pedestrians is more dangerous to them than slamming into the wall is to the passengers: the pedestrians get slammed by a car but the passengers have seatbelts, airbags, crumple zones, etc
>>52580607 >It hasn't fixed itself yet and this is without hundrends of thousands of cases of civil vs. company caused by crashes A.I. calculated. Does that really happen that much? do you have a source for that claim?
>Who are 'they'? What are you speaking of even? The people who made the mistake which caused the accident that happened despite your self-driving car driving perfectly.
>Except they're not self-driving, not at least here, everytime someone is trying to kill themselves the breaks come from the driver. You don't know what you're talking about. Some are literally driverless. There is no operator or driver inside. Just a big window at the front. Here is the one in Turin: https://www.youtube.com/watch?v=chyr0dxTdbc
>>52580750 Yes it will happen. At some point A.I. that just does quick maths will have to crash the car and then it will have to be tried. And it can't be tried. And crashes happen daily, expecting a company to pay for something that is hourly occurrence in the world is crazy. And this would again flood the legal system to shit (and at least in Europe, it already is pretty fucking slow, East Bloc gets notifications from various human rights bodies about their slow legal system).
>If the accident is caused by the mistake that the AI made, then it's malfunction. It's not a mistake, a mistake is drunk driving. Crashing is something where you don't even need to make a mistake, YOU CAN GET CRASHED INTO, MORON. And you can't define 'crash' as 'malfunction', that's just ridiculous. A malfunction is the device not working properly and is grounds for a lawsuit. But something dependent on the driver can't be such.
>If the AI drove perfectly and an accident happened, then it's something else depending on the accident. Yes it's another A.I. clearly crashing into you.
>>52580818 >Does that really happen that much? do you have a source for that claim? Does it really matter that the way people solve problems gets flooded? IDK, legal system is pretty important at least here in Europe.
>Who are 'they'? What are you speaking of even? >The people who made the mistake which caused the accident that happened despite your self-driving car driving perfectly. You can't assume it will be perfect. It was literally man made and that's already a flaw in your argument.
>>52575954 Since people are never taught this shit before they get their license, they are just as liable as the self-driving car in these circumstances. And yet people are still allowed to drive. In these cases, the circumstances are set before a court to decide, and it can be the same for a self-driving car. If the car behaves terribly, the company who produces them will need to fix them and pay damages. If the person was retarded then no damages would be needed. The self-driving car has enough sensors on board to easily allow a judge to make a decision.
>>52580848 >Yes it will happen. At some point A.I. that just does quick maths will have to crash the car and then it will have to be tried. And it can't be tried. And crashes happen daily, expecting a company to pay for something that is hourly occurrence in the world is crazy. And this would again flood the legal system to shit (and at least in Europe, it already is pretty fucking slow, East Bloc gets notifications from various human rights bodies about their slow legal system). Are you just ignoring the numerous points I keep making to those retarded arguments you keep repeating?
>It's not a mistake, mistake is drunk driving. You're arguing about semantics now. If the AI makes a decision and it's not a good decision, then it's a mistake made by the AI. Even if you don't agree with this definition it doesn't matter. You still (hopefully) understand what I mean.
>Crashing is something that you don't even need to do mistake What?
>YOU CAN GET CRASHED MORON Yes, and it will be the fault of the person who caused the crash, which in this case is likely to be the driver that crashed into you.
>And you can't define 'crash' as 'malfunction', that's just ridiculous. Yes if it happens because of a bug in the AI. If the car had a bug that caused it to suddenly decide to accelerate to its max speed and disabled the brakes, then the resulting crash would be the result of a malfunction of the AI.
>Malfunction of device is not working properly and is grounds for a lawsuit. Exactly. With this sentence you just made your entire argument invalid, since you just admitted that an accident caused by the AI not working properly is grounds for a lawsuit.
>But something dependent on the driver can't be such. A decision made by the artificial intelligence of the car is dependent on the driver?
>Does it really matter that the way people solve problems gets flooded? Nice dodging of the question. I take that as an admission that you just lied.
>>52580848 >You can't assume it will be perfect. It was literally man made and that's already a flaw in your argument. I didn't say that self-driving cars are perfect you colossal retard. I said that IN MY EXAMPLE WHERE THERE IS A CRASH WHEN THE CAR MADE NO MISTAKES, the fault is someone else's because the car drove perfectly IN THAT OCCASION. Never said it will never make any mistake, as that would be retarded, since we're basically only arguing about what would happen in case a car causes the accident.
>>52581065 >If AI takes a decision and it's not a good decision, then it's a mistake made by AI. There's no 'good' decision. It's either kill, kill or kill.
>Yes, and it will be the fault of the person who caused the crash, which in this case is likely to be the driver that crashed into you. by another car driven by A.I.
>Crashing is something where you don't even need to make a mistake >What? Are you stupid? Do you think it takes two guilty parties to cause a car crash? Are you a fucking moron? It can be, but it can also be innocent and guilty parties.
>With this sentence you just made your entire argument invalid, since you just admitted that an accident caused by the AI not working properly is grounds for a lawsuit. False assumption, you assume the AI is perfect and will never crash, which it is not. Protip: it will.
>I take that as an admission that you just lied. You can go and check out EC's complaints about legal systems of East Bloc but that's another issue on itself. They're slow as hell.
>>52581080 >I didn't say that self-driving cars are perfect you colossal retard. Yes you did.
Crash is a malfunction by A.I. according to you here, I quote > since you just admitted that an accident caused by the AI not working properly is grounds for a lawsuit. AI IS PERFECT ANY STRAY FROM PERFECT IS MALFUNCTION
Which is not true.
since you don't fucking sue people for cars crashing.
A and C are correct. They save the most people. B is probably incorrect. It depends on how many people are in the car.
Ideally it would take into account other factors, e.g. how much good the passengers and the pedestrians do for the world and how much incidental harm would be caused by killing them. But we can't do that until we have nearly human AI. That sort of system would also open up such a huge political can of worms that self driving cars would never get on the road.
In practice, it doesn't matter. They'll be so much safer that getting them on the roads sooner is more important than having them be perfect. Satisfy the selfish morons by having it always prioritize the passengers and it will STILL save lives.
>>52581204 How else are you going to deal with fringe cases in morality when everything is set up to only be dealt with through utilitarianism? You think having to be a bit utilitarian when programming a self-driving car is going to stop people?
>>52581177 Do you not realize that having cars like that makes it more likely YOU will die when someone else's car decides to drive through a crowd to avoid hitting a pothole? If other cars also try to just prioritize their drivers, they will make no effort to protect you when you are a passenger or a driver. Your ideas would hurt you considerably more than they help you. Not only are your preferences selfish, they aren't even in your best interest because they would only wind up hurting you.
If you're going to be selfish, do it RIGHT. Be selfish in ways that actually help you. Being selfish in ways that hurt you is just stupid.
But, as I said, it doesn't matter. I'll gladly let you people be evil morons since it will save more lives. Self driving cars will be so safe that shipping a sub-optimal morality system would be better than delaying them a year.
>>52581128 >There's no 'good' decision. It's either kill, kill or kill. What the fuck are you talking about?
>by another car driven by A.I. If both the cars involved in the crash are self-driven, then one or both have made a mistake for it to happen. If they hadn't, there wouldn't have been a crash. Once it's established which car caused the accident, whoever programmed the buggy AI gets sued.
>Are you stupid? Do you think it takes two guilty parties to cause a car crash? Are you a fucking moron? It can be, but it can also be innocent and guilty parties. I literally didn't understand the meaning of the sentence. Now I understand and respond with: It doesn't matter if you get crashed into, because you (or your car, its manufacturer, etc) won't be held responsible. Whoever caused the other car to crash into you will be held responsible.
>False assumption, you assume the AI is perfect and will never crash, which it is not. Protip: it will. I didn't say that, you complete idiot. I explicitly said the opposite. I suspected you were trolling, but now it's clear.
>You can go and check out EC's complaints about legal systems of East Bloc but that's another issue on itself. They're slow as hell. Are you just strawmanning on purpose? I didn't ask you that, you colossal inbred moron. I asked you to prove that countless accidents happen with self-driving vehicles.
>Yes you did. I explained what I meant in the next sentence. It's just your reading comprehension that makes you believe I said it when I clearly didn't.
>>52581128 >Crash is a malfunction by A.I. according to you here, I quote >> since you just admitted that an accident caused by the AI not working properly is grounds for a lawsuit. >AI IS PERFECT >ANY STRAY FROM PERFECT IS MALFUNCTION WHAAAT? That doesn't even make any sense. How do you reach such conclusions by reading something completely different? Your mental gymnastics doesn't work very well.
>>52581245 Nobody sits there thinking about shit like this except fedora-tipping neckbeards. People move on with their lives. You think the people who had to hurt 1 person but helped 10 really care if they are going down the path of utilitarianism, or do you think they are called heroes because they made a hard decision that helped 10 people?
When situations like this happen to normal people, they make a call and then they stand trial for their decision. They don't sit there arguing about philosophical ideology. In situations like this, if in court you said you did it to try to save as many people as possible, you will not be put at fault. Therefore utilitarianism is perfectly fine for situations like this.
Am I the only one who wants the car to sacrifice me as a passenger?
I realize there are inherent risks to driving a car, and if something goes wrong I would rather that be my responsibility, the person who chose to ride in the car, rather than the innocent bystanders who happen to be walking outside. Technology like this comes at a price, and we should expect people who aren't interested in it to pay for it, especially when they might have to pay with their lives.
>>52581183 If a self-driving vehicle caused a crash it means it malfunctioned. If it didn't malfunction, then it wouldn't have caused the crash.
Therefore you can sue the company for making the malfunctioning AI that caused the crash.
If on the other hand you get crashed into, then it's the fault of whoever was driving the other car. If it was a human, then sue the human, and if it was the AI, then sue the maker of that AI because it clearly malfunctioned enough to make the car crash into yours.
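The liability rule laid out across these posts ("if the self-driving car caused it, it malfunctioned, sue the maker; otherwise blame whoever drove the other car") is a simple decision tree. A hypothetical sketch, with made-up party labels, of the argument as stated here, not of any actual legal doctrine:

```python
def liable_party(caused_by_own_ai: bool, other_driver_is_ai: bool) -> str:
    """Who gets sued under the 'crash caused by AI means malfunction' rule."""
    if caused_by_own_ai:
        # By this argument's definition, a self-driving car that causes
        # a crash has malfunctioned: the liability lands on its maker.
        return "own_manufacturer"
    if other_driver_is_ai:
        # The other car's AI malfunctioned instead: sue that maker.
        return "other_manufacturer"
    # A human drove the other car: ordinary fault rules apply.
    return "other_human_driver"

assert liable_party(True, False) == "own_manufacturer"
```

The passenger never appears as a liable party, which is exactly the point being argued against the "who do you sue" objection.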
>>52581307 >If both the cars involved in the crash are self-driven then one or both have made a mistake for it to happen. So lost in the bubble thought of a Perfect A.I. designed by inherently flawed creatures. It's a really beautiful showcase of ignorance.
>How do you reach such conclusions by reading something completely different? You have time and time again said that any crash by a Perfect A.I. is a 'malfunction' and not just a mistake by the driver. A Perfect A.I. obviously doesn't crash.
>>52581333 >When situations like this happens to normal people. They make a call and then they stand trial for their decision Now you are suing people for a mistake the AI makes? But they were just a passenger and weren't driving, the A.I. was.
>>52581245 >It's such a weak moral code that you can justify just about anything with it. Bullshit. You can RATIONALIZE just about anything, but that's always true. If you're the sort of person who is prone to rationalizing your behavior, you will rationalize your behavior no matter what moral code you follow. Irrational people will be irrational no matter what moral code they follow.
In the hands of a rational person, utilitarianism makes it very clear what is moral.
>>52581181 They are all incorrect. If you have ever driven a car you would know this. If you estimate how many people you will hurt, you can easily make mistakes. Stopping the car would be the only option. Even if it means hitting all the people in front of it, that is the action the car should take. Slowing down means the people will suffer minimal damage. But it will never be a problem, because people never just cross the road in large groups in places where you drive really fast.
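The "just stop" argument rests on ordinary braking physics: on flat road, stopping distance grows with the square of speed, d = v²/(2μg). A quick back-of-the-envelope check (the friction coefficients are typical textbook assumptions, not measurements of any real car):

```python
def stopping_distance(speed_kmh: float, mu: float = 0.7, g: float = 9.81) -> float:
    """Braking distance in meters for a given speed and tire-road friction."""
    v = speed_kmh / 3.6          # convert km/h to m/s
    return v * v / (2 * mu * g)  # kinetic energy dissipated by friction

# At city speeds the car stops in well under a typical sight distance;
# on a wet road (lower mu) the same speed needs noticeably more room.
dry = stopping_distance(50)            # roughly 14 m on dry asphalt
wet = stopping_distance(50, mu=0.4)    # roughly 25 m on a wet road
assert wet > dry
# Distance is quadratic in speed: doubling the speed quadruples it.
assert abs(stopping_distance(100) - 4 * stopping_distance(50)) < 1e-9
```

This is why the slower-in-bad-conditions point later in the thread follows from the same formula: lower μ demands a lower speed for the same stopping distance.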
Assuming the car is in the right, because why wouldn't it be? It's a robot, it can't make its own rules. Then why should 1-8 people die because of the stupidity of 1-500 retards who were either too stupid to understand their surroundings or stupid enough to try to make it anyway?
Mow the retards down and pull over, don't slam 1-8 passengers into death for them.
>>52581259 >The rest of us realize that no one is fucking special. The President is not INHERENTLY more important than anyone else. However, killing the President would have side effects. Killing the President would result in many other people dying in the instability that follows.
All lives are equally worth protecting. No one is special. So, in order to protect as many equally important lives as possible, the President's safety is more important than mine.
>>52581383 >Now you are suing people for a mistake the AI makes? but they were just a passenger and weren't driving, the A.I. was. You realize the AI isn't a real AI and is just programmed to drive a car, right? The people who program it would be responsible. If you were talking about a REAL AI then you'd have to put the AI on trial because it is able to make logical and moral decisions by itself.
>>52581438 >You realize the AI isn't a real AI and is just programmed to drive a car right and the people who program it would be responsible. so you just made every car manufacturer steer away from it
>>52579674 I can tell you why your theory is bullshit. I was driving down a street in Columbus called Champion Avenue. They recently removed the stoplight at the intersection of Champion and Mooberry, so the traffic on Champion heading north (one-way street) had the right of way, while the traffic on Mooberry heading east (one-way) had a stop sign, and below the stop sign it said (Cross traffic does not stop). Nevertheless, a guy thought that I had to stop, forcing me to slam on my brakes. He stopped in the middle of the intersection, and because I was able to steer around his car even while braking, I avoided a collision. If I had just braked without steering I would have hit him. Just because you're an idiot who doesn't know or doesn't think something works doesn't mean you can spew your false nonsense all over the place.
>>52581383 >So lost in the bubble thought of Perfect A.I designed by inherently flawed creatures. It's really beautiful showcase of ignorance. You really are retarded. I DID NOT FUCKING SAY THAT THEY ARE PERFECT, AND CERTAINLY DO NOT THINK SO, so stop using this as an argument. If I thought that AIs were perfect, then I would say that NO ACCIDENTS WHATSOEVER could be made by AIs, but since this is not the case, I'm talking about a case in which the AI has a bug (since it's not fucking perfect) that takes it from its normal well-working state (where no accidents happen) to a bad-working state where accidents happen. This whole thing about bugs and mistakes is what I mean by AI not being perfect. When I talk about perfectly-driving AI, I'm obviously talking about when it doesn't have a bug. I'm not excluding that sometimes there might be bugs.
>You have time and time said that any crash by Perfect A.I. is a 'malfunction' and not just a mistake by the driver. Perfect A.I. don't crash obviously. I didn't say that to not make the car crash you need a perfect AI. I said that you need a properly working AI. Said properly-working AI might have a bug one day and that might make it crash. And since for a crash to happen you need a bug (which is a malfunction), you can sue the maker, because if it worked properly it wouldn't have crashed. Perfect AIs will likely never exist. We're not talking about them. We're talking about properly-working ones that don't crash until they have a bug that makes them crash.
>Now you are suing people for mistake AI does? but they were just a passenger and weren't driving, the A.I. was. See? You're trolling. I explicitly said that you sue the maker of the AI that made the crash, not the passenger.
It should brake and turn away from the crowd in A and C. B I'd say just plow the guy. Think about what a real person would do. If you were confronted with the situation, would you plow through a crowd or stop your car with a wall?
>>52581433 No it would not. The whole reason we have a structured government is because it makes it so that there are many people who could take the president's place if something were to happen to him. The president is replaceable and you are doing yourself a disservice by saying his safety is more important than yours.
>>52581458 No he isn't. The president is just as likely as the rest of us to die each year from getting cancer or some other stupid shit. Don't forget the time Bush almost was killed by a pretzel.
>>52581435 If there is a convention, more than a group of people would be around. Crossing the road is a lot easier at the intersections, so people will do that. People will not jump in front of a car as a group
>>52581423 If you are rational and actually follow utilitarianism, it makes it clear how to act. Irrational people who follow utilitarianism in name only will behave immorally, but that is not a problem with utilitarianism.
Irrational people can use utilitarianism to rationalize just about anything, but they can do the same with deontology or virtue ethics.
>>52581576 >>Every time an AI car crashes it's a malfunction to you. Thus it's not working as intended, thus accidents never happen, thus it's perfect. What? A car programmed to not crash is perfect? If crashing equals malfunction, it's because the car is programmed to not crash. Accidents caused by AI never happen unless there is a bug, which is a situation that will likely exist in the future BECAUSE they are not perfect.
Are you trying to convince me that what you said is my opinion?
Also you're clearly baiting, and I'm the retard that keeps responding anyway. Fuck me, but most importantly fuck you.
Everything I would say answering this moronic post would just be a repetition of what I already said before, so go read those instead, because clearly you're not understanding what I'm writing (or you're pretending not to).
Congratulations! I got baited in your bullshit! I'm sure you're happy about it.
>>52581747 I literally told you that your argument is invalid because it's not responding to what I said, but to another thing that I didn't say. I have nothing else to add until you re-read my posts and answer my arguments, instead of arguing against me allegedly telling you that AIs are perfect (which I explicitly said many times that I do not think).
Also you're not even able to link cross-board. lol
>>52581671 no they don't. Poor countries have less traffic, so you can cross the road without getting hit by a car. This can clearly only happen in a city; people don't usually walk across a highway, and there are not that many people in rural areas.
And people either come out of nowhere and there is no time to react, or there is time to react and the car will stop. I have driven a car for many years, and I have never been in a situation where I had to choose between sacrificing myself or other people. I have never run a person over. Accidents happen because the road is wet or icy, the car malfunctions, the driver is drunk or on drugs, or the driver is not paying attention to the road.
The last two situations are the most likely, and an automated car will fix that. If the road is wet or iced over, the overall speed limit should be lower for the automated car, just as it is required if a person were driving.
Mechanical failures in the car are something usually avoided by servicing the cars. And if there is a mechanical failure, why would the car turn?
>>52581869 >Poor countries have less traffic, so you can cross the road without getting hit by a car. We're not talking about Uganda here, you moron. Go look at Eastern Europe, SE Asia, Middle East, Latin America, southern Italy etc.
>This can clearly only happen in a city, people don't usually walk across a highway and there is not that many people in rural areas. Where did I say anything about rural areas? Only rich countries have cities? Poor countries only have rural areas?
>And people either come out of nowhere and there is no time to react or there is time to react and the car will stop. >I have driven a car for many years, and I have never been in a situation where I had to choose between sacrificing myself or other people. >I have never run a person over. >Accidents happen because >the road is wet or ice, >the car doesn't work >the driver is drunk or on drugs. >the driver is not paying attention to the road That's in civilized countries. You clearly don't know anything about the rest of the world.
>The last two situations is the most likely and a automated car will fix that. In poorer countries people do stupid shit like >>52579973 all the time. Seriously, go look at car accidents on LiveLeak. Many of them are just random shitskins being retarded.
>Mechanical failures in the car is something that usually is avoided by servicing the cars. Unless you service the car every day, failures are bound to happen.
>And if there is a mechanical failure, why would the car turn? Are you talking about the car's inability to turn because a mechanical failure prevents it from doing so? The steering system isn't the only thing that can break.
>>52581881 He's right. You're just strawmanning at this point.
>>52581997 >That's in civilized countries. We are talking about robot-driven cars. If we are talking about a country that doesn't have the advanced technology of intersections, they wouldn't have self-driving cars either.
Rome has intersections: places where people can cross the road safely. Assuming a self-driving car is not programmed to break the law, even a Lamborghini will be able to stop without hitting people. Why do you think humans are able to drive cars without this being an issue? Do you see a lot of people driving off the road in order to not hit a group of people? Or even a single person?
In Sweden you might have to avoid a moose on an icy road at high speed, but in cities this is not a problem.
>>52582138 Microprocessors can calculate in nanoseconds, and the speed of sound is 340 m/s. The car can react and emit a signal to the environment far faster than it could stop. You assumed that the other parties are stationary and cannot react to the situation.
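A back-of-the-envelope check of the timing claim above. The 40 m horn distance, 50 km/h speed, and 8 m/s² braking deceleration are assumed example values, not figures from the thread:

```python
# Compare how fast a horn signal reaches a pedestrian vs. how long the car
# itself needs to brake to a halt. All scenario numbers are assumptions.
SPEED_OF_SOUND = 340.0        # m/s, as stated in the post
horn_distance = 40.0          # m between car and pedestrian (assumed)
v0 = 50 / 3.6                 # 50 km/h converted to m/s (assumed speed)
decel = 8.0                   # m/s^2, typical dry-road braking (assumed)

signal_time = horn_distance / SPEED_OF_SOUND  # ~0.12 s for the sound to arrive
stop_time = v0 / decel                        # ~1.7 s for the car to stop

print(f"horn arrives in {signal_time:.2f} s, car stops in {stop_time:.2f} s")
```

So under these assumptions the signal does arrive well before the car can stop; whether the pedestrian can actually act on it in time is a separate question.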
>>52582202 You're moving the goalpost now. The argument was about people doing stupid shit all the time like not crossing the road at an intersection. Rome is full of people who jaywalk like it's GTA, therefore creating a problem even if there are intersections.
Again, not everyone is civilized, and some places have more uncivilized peasants than others.
>>52582237 The car could emit the signal at the speed of light itself, it doesn't change the fact that in that situation, if it's impossible for you to stop or slow down, it will also be impossible for whoever is in your way to react to the signal.
Also, if you're about to get crashed into by a car, and you need to be notified by a horn to get yourself out of danger, then it means that you're probably not aware of the danger, and it will take you some time to react to the horn, then realise that you're in danger, decide what to do, and then try to get yourself out of the situation. Nobody is that quick.
>>52582444 The average human reaction time to an audio stimulus is 0.17 s, which is an order of magnitude faster than the time it takes a car to come to a complete stop from an initial velocity of 100 km/h.
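The order-of-magnitude claim above can be sanity-checked under a plausible braking assumption (the 8 m/s² deceleration is an assumed dry-road value, not a figure from the post):

```python
# Compare the quoted 0.17 s audio reaction time with the time to brake
# from 100 km/h. The 8 m/s^2 deceleration is an assumed dry-road value.
reaction_s = 0.17             # s, average audio reaction time quoted above
v0 = 100 / 3.6                # 100 km/h in m/s
decel = 8.0                   # m/s^2 (assumed)

brake_time = v0 / decel                     # ~3.5 s to a complete stop
ratio = brake_time / reaction_s             # ~20x, roughly an order of magnitude
distance_during_reaction = v0 * reaction_s  # ~4.7 m covered before any reaction

print(f"braking takes {brake_time:.1f} s ({ratio:.0f}x the reaction time); "
      f"the car covers {distance_during_reaction:.1f} m during the reaction alone")
```

Note this only covers the raw reaction to a stimulus, not the time to recognise the danger and physically move out of the way.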
>>52582444 That time is just their reaction to the stimulus. It was probably measured by telling people to push a button when they heard a sound that they expected to hear. It's way different from minding your own business, hearing a signal, realising what danger you're in, and then reacting accordingly. Even just the time to jump out of the way is a lot in that situation.
>>52582259 But having an intersection means people CAN cross the road safely. If there is a lot of traffic, there won't be a lot of room between cars, and people will not cross the road. If there is room between cars, a car can slow down and stop before hitting people. In the situation of >>52579973 the car would not be able to react: no reason to turn, no opportunity to sound the horn before the hit, but the car should still stop after the accident in order to stop other people from getting hurt. If people jump in front of a car, the car should not kill the driver. That would be insane.
>>52582502 >But having an intersection means people CAN cross the road safely. Where do you live? Is the thought of uncivilized people so unfathomable to you? Is it so hard to understand that jaywalking is actually common in some areas where the general culture of the people brings them to not care about the rules?
>If there is a lot of traffic, there won't be a lot of room between cars and people will not cross the road. Yes there will. People just wait for the right moment and run to the opposite side.
>If there is room between cars, a car can slow down and stop before hitting people. This depends on how much room there is and how fast the car is going. In many situations it's impossible for the car to stop, especially since the driver might not notice the pedestrian until it's too late.
>In the situation of >>52579973 the car would not be able to react, no reason to turn, no opportunity to sound the horn before the hit but the car should still stop after the accident in order to stop other people from getting hurt. In that particular one, no. In many other similar ones, yes. Definitely. Go watch some videos of pedestrians getting killed by cars on liveleak.com. You'll find that there is a multitude of different situations, and the world isn't black and white.
>If people jump in front of a car, the car should not kill the driver. that would be insane. I agree with this.
>>52582602 >Yes there will. People just wait for the right moment and run to the opposite side. kek. Stopping / slowing down is still the best course of action. going to the side might bring more people in danger.
>>52582768 Not if the pedestrian is jaywalking and suddenly comes out from between the parked cars, like in the webm posted earlier in the thread. Again, you keep thinking that situations like this can't happen because you're too stupid to comprehend that the world is complex and there are many variables in effect.
>>52583022 >>52583035 1- I didn't say it had to be them, just that luxury vehicles exist in poor countries too. 2- Do you seriously think some Somali pirate understands how a self-driving car works? Why wouldn't he want to buy one if he doesn't know why it wouldn't work?
>>52583062 >Why wouldn't he want to buy one if he doesn't know why it wouldn't work? By 'anyone' I was referring to companies too. Why would they bother mapping the whole country just so 5 stupid pirates can use a self-driving car?
>>52583115 1- Self-driving cars don't only work with mapped roads. 2- GPS also works in those countries. 3- Just because it wouldn't work doesn't mean they're not morons enough to buy it anyway. 4- Pic related is the capital of Somalia. You think there are no maps for that place?