On the topic of driverless cars, a debate emerged about a car that, in an unavoidable scenario where either one set of people or another set of people must die, would opt for the least net loss of human life, including the possibility of killing the car's occupant.
One side argues that the car would be a huge success since, outside of a handful of edge cases, it would be less likely to be involved in accidents causing human injury.
The other side posits that the car would not sell well at all as the existence of a pre-programmed scenario in which the car chooses to kill the occupant is not a desirable selling point of the vehicle.
I'm shitposting this here as /his/ is filled with "religion is a mental illness" threads. What do you think of the situation?
It depends on what the car is programmed to do. Keep in mind that it is extremely hard for a robot to recognize people in the first place.
There is no /phil/ board and religion is a mental illness
The specific instance was "you're in the car alone when two people walk into the road. Without adequate stopping distance and being unable to maneuver out of the path of the pedestrians without killing the driver, the car chooses to ram itself into a wall, killing the driver, to ensure the least net loss of human life."
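For what it's worth, the "least net loss" rule in that scenario reduces to a trivial cost comparison. A minimal sketch (the option names and death counts are hypothetical, and this is obviously nothing like real AV planning code):

```python
# Hypothetical sketch of a "least net loss of life" chooser.
# Each option maps to the expected number of deaths it causes.

def least_net_loss(options):
    """Pick the option with the fewest expected deaths."""
    return min(options, key=options.get)

# The scenario from the thread: braking is already too late,
# so the choices are plowing through (two pedestrians die)
# or swerving into the wall (the occupant dies).
scenario = {
    "continue_forward": 2,  # two pedestrians die
    "swerve_into_wall": 1,  # occupant dies
}

print(least_net_loss(scenario))  # -> swerve_into_wall
```

The whole debate is about whether anyone would buy a car whose `options` dict includes their own death as a line item, not about whether the `min` is hard to compute.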
Basically, high school level "deep" philosophy. I'm ashamed to be part of the debate at this point but I'll be damned if I'm backing down now.
Since it is really hard for a robot vehicle to differentiate people from, say, a concrete wall, a bunch of realistic statues, or debris that has fallen on the road, the vehicle should take a course of action which will minimize damage to itself and its occupant.
Religion is for the dumb, disabled, and old... Lives saved by driverless cars will be orders of magnitude greater than anything the auto industry's safety precautions have achieved in decades. It will revolutionize society. This shit is a game changer.
How can the car recognize pedestrians fast enough to plan to crash into the wall, but not fast enough to slow down? Where is the car driving fast enough for this situation to occur?
Why should the car be programmed to drive into a wall at all? What prevents an error from setting off the car's "drive into a wall" feature?
Like >>7811120 said, the car will not "pick a scenario"; its choice depends entirely on how it is programmed (you have to specify in the thought experiment how the car is programmed), assuming ideal technology that allows perfect computer vision etc.
On the matter of success, assuming least-net-loss programming that includes the possibility of killing its own passengers, the car would not under any circumstances be a success. This is because most people's survival instinct is THE priority in any real-life scenario involving themselves; nobody accepts being injured "for the greater good" (i.e. to save a lot of other people) unless their utilitarian moral convictions are strong enough to overpower that instinct, which is exceedingly rare. That's my take on it.
And this was my argument. The other guy shrugged off a biological predisposition toward one's own survival as "arrogant and selfish" and said that the vast majority of motorists would accept scenarios of their own inevitable death for the greater good, as they would value the least net loss of life over their own life in every conceivable scenario, and the car would promise greater overall safety. He also argued that people will flock in droves to buy these driverless cars simply because "people are lazy".
>the vast majority of motorists would accept scenarios of their own inevitable death for the greater good
Kek, that's a ridiculous argument coming from someone who has probably never had any real-world experience. Participating in thought experiments like the trolley running over either the 1 worker or the 5 people and picking the more utilitarian option (the 1 worker) is vastly different from any thought experiment where the self is involved. I doubt even the person who proposed this would want a car that could kill him, barring downright delusional conviction in his morals.
I agree with this, but this assumes that the only driverless car on the market would be one whose programming includes the possibility of killing its occupants in a "least net loss of lives" scenario. I initially thought the thought experiment was more geared toward comparing different types of programmed cars (one that includes occupant-killing if necessary, and one that doesn't), but if we assume, as you said, that the only new tech available HAS to include passenger-killing scenarios, then yes, the chances of that happening would be significantly lower than human dumbassery-caused car accidents. And given only that one choice, most people would still go with the driverless car.