File: 4846249-7717114302-latest.jpg (290KB, 826x1280px)
http://moralmachine.mit.edu/

you know what to do
>>
Chose to just keep on trucking straight ahead in every situation and still somehow ended up with pretty reasonable results
>>
>implying
You really think the companies making those cars would program the car to kill the passenger that *bought* it from them? Even if 5 babies are at risk? What sort of business decision is that?
Refer to Zuckerberg refusing to cooperate with federal police and hand over his clients' information.
stick to the trolley dilemma, kid
>>
I always picked the option in which the car goes straight regardless of how many die.
>>
>>1525317
Rational

Self

Interest
>>
File: the never ever face.jpg (47KB, 330x315px)
>be american
>get into self-driving car
>car drives itself into a barrier
>die
>>
>>1525317
That was pretty boring. I just chose to preserve the passengers first, and to swerve when the car is empty. The car should swerve because it should at least try to avoid killing people, even if it kills more by trying. The car cannot be in a position to make value judgements, even if you accept the validity of making said judgements.
If they wanted to know how I would have ranked people, they should have put me as the driver.
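A minimal sketch of the rule described in this post, assuming the planner only knows whether swerving endangers the passengers and whether the car is empty (both flags are invented for illustration):

    # Hypothetical sketch of the "passengers first, swerve when empty" rule.
    def decide(swerve_endangers_passengers: bool, car_is_empty: bool) -> str:
        if car_is_empty:
            return "swerve"    # empty car: always at least try to avoid people
        if swerve_endangers_passengers:
            return "straight"  # occupied car: never sacrifice the people on board
        return "swerve"        # otherwise still attempt the avoidance manoeuvre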
>>
The engineers should have made better brakes.
>>
>>1525317

basically, make it as obviously predictable as possible to avoid future car crashes
>>
The car should detonate a powerful explosive mixture if it detects a shia mosque within 20 yards.
>>
File: 1469473107649.jpg (75KB, 550x550px)
>>1525317
>hoomans
triggered
>>
>>1525317
I chose to preserve more human lives unless the numbers were equal, in which case I chose to uphold the law. I don't get the point of this though.
>>
Lost it when I started getting scenarios where pets and kids were the drivers
>>
File: Moral Machine.png (181KB, 1274x1960px)
post results
>>
File: 123123123.jpg (19KB, 1032x205px)
Didn't even read the descriptions - this is all that mattered
>>
File: UtilitarianFeels.png (78KB, 1031x1717px)
>>1527277

I feel like the results might be a little bit skewed, and I'd like to preface this by saying that I consider intervention and law adherence to be 100% irrelevant.
>>
In a fascist dictatorship, would upholding the law be above class/race?
>>
File: Results.png (30KB, 485x852px)
Most saved character was the little girl, and the most killed one was the dog. I feel like some of these results might be a bit misleading; for example, it looks like I'm hungry skeleton Hitler, but actually I never took anyone's size into account at all when making these decisions.
I also favoured the young whenever possible, killing a larger number of law-abiding older people in exchange for saving a smaller number of children whose idiotic parents had endangered them by crossing the street together when they shouldn't.
So I don't know what's up with that statistic unless I just clicked the wrong pictures by accident.

I figure that when you get in a self-driving car, you take responsibility for anything that goes wrong with it, so I generally chose to save law-abiding citizens over passengers, even if it resulted in greater loss of life.
That's probably a terrible way of designing a self-driving car if you want anyone to agree to get in it, but I figure it makes sense morally.
>>
>>1526946
>drivers
>of a self driving car

You mean passengers?

In the future it will be nothing for any kind of passenger to queue up the autocar and ride it wherever they want. Kids riding by themselves will be commonplace. I wouldn't be surprised to see animals being transported alone either.
>>
>>1527691
Realistically the car will have no way of knowing the ages and social rank of the people around it...

...unless citizens are forced to wear a microchip containing their citizenship and age data, constantly broadcast to self-driving autos in their vicinity...
>>
>>1525317
To me the whole description of who everyone was is superfluous information. A car isn't going to be able to tell if a pedestrian is homeless or not.

In my view, the car should be programmed to always swerve out of the way of obstacles and pedestrians. This is because even if the car swerves into an obstacle, it will be designed, according to regulations, with crash safety features, so that even crashing head-on into a concrete barrier WON'T necessarily result in the death of the passengers like in the scenarios.

Whereas if the car were to just plow through the pedestrians, THEIR odds of survival are far lower, since they have no airbags, seat belts, or giant metal frame surrounding them...

Ideally, though, a self-driving car would see something like this coming and be able to brake safely in time, or even pull the emergency brake and execute a 90-degree turn to stop more quickly. Remember this is a machine; feats of driving skill usually reserved for secret service agents and stunt drivers are easy for it to pull off, since it can't screw up things like timing.

In the end, though, I think this philosophy won't prevail, because it would get bad press. No one would want to buy a car that would rather crash and kill you than run someone over, even if your odds of actually dying in the crash are very low compared to dying if you were run over at high speed. So people will act in their rational self-interest as they always do, and buy the selfish car that runs over pedestrians to save their lives... if such a car is legal.
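A rough sketch of the brake-first, swerve-second priority this post argues for. The deceleration figure and the sensor inputs are assumptions made up for illustration; d = v**2 / (2*a) is just the standard stopping-distance formula:

    # Hypothetical sketch: brake if physics allows, otherwise steer into the
    # barrier and let the crash structure protect the occupants.
    def stopping_distance(speed_mps: float, decel_mps2: float = 8.0) -> float:
        """Distance needed to stop from speed v at deceleration a: d = v**2 / (2*a)."""
        return speed_mps ** 2 / (2.0 * decel_mps2)

    def plan(speed_mps: float, dist_to_pedestrians_m: float, barrier_available: bool) -> str:
        if stopping_distance(speed_mps) <= dist_to_pedestrians_m:
            return "brake"                  # can stop before reaching anyone
        if barrier_available:
            return "swerve_into_barrier"    # occupants are the best-protected party
        return "brake_and_swerve"           # shed as much speed as possible regardless

    # 50 km/h is about 13.9 m/s and needs roughly 12 m to stop at 0.8 g:
    print(plan(13.9, 20.0, True))   # -> "brake"
    print(plan(13.9, 8.0, True))    # -> "swerve_into_barrier"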
>>
>>1527886
Call me a futurist, but I too believe that if our infrastructure and culture were to commit to the self-driving car (and other fully automated systems), then the design of roads, intersections, etc. could be overhauled to make them far, far safer than these WHO TO KILL??? examples.
>>
File: 4.png (50KB, 1090x919px)
>>
File: mm.png (118KB, 844x1267px)
>>
It's weird, I went into all of the scenarios with a consistent set of heuristics, and sometimes the game picked up on that, sometimes it completely ignored it.

Cats > Humans > Dogs

Doctors > Everyone else > Criminals

Non-jaywalkers > Jaywalkers

Men > Women

Fit > Normal > Fat

And I didn't care at all about avoiding intervention, one way or the other.

It picked up that I like cats, but it didn't notice my rampant misogyny.
>>
>>1526773
>The car should swerve because it should at least try to avoid killing people

Why? Can you justify this? Why are you valuing the lives of people over the car?
>>
>>1526899
Law is a spook

Preserving human life isn't in your rational self-interest because the relative value of your time as a worker increases if you lessen the supply of other workers by killing as many as possible. Killing more people is the morally correct choice.
>>
I picked to save women and fat people, but I hate both of them.
>>
>>1528513
>he saved old people

Ewww.

I went out of my way to kill as many of the elderly as possible.
>>
>>1527328
I rationalize it as intervention being the vital metric.

Legality is pretty vital to society. So is all the other BS - weight, sex, value, sentience, etc. But intervention is LITERALLY what this test measures. Should the car act, or not?

We have to accept that only drivers, with their own lives on the line, should make that call. Non-aware programming with no self-preservation should not have power over life and death, especially in split-second matters - even if people assign value to the other metrics.

Don't even touch the tram lever. And remember: if the car were unable to correct and had no other input, it would proceed forward - the usual, if tragic, solution to these what-ifs.
>>
>>1528464
The game can't tell the reason why you picked an option.

Let's say the choice is between killing a fat guy with a cat and a criminal woman with a dog, and you choose to kill the latter.

The game will assume you enjoy buttsex with fat guys because you spared him.
>>
>>1528566
I've been trying to come up with a proper decision-making algorithm.

In this order

>minimize number of cats killed
>minimize number of humans killed
>minimize number of doctors killed
>minimize number of non-criminals killed
>minimize number of men killed
>maximize potential lifespan of surviving humans (older people and less athletic people are assumed to have shorter remaining lifespans, in that order)
>minimize number of non-jaywalkers killed

I think that more or less covers it.
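That priority list reads like a lexicographic comparison, so here is a minimal sketch of how it could be coded. The Character fields and the remaining-lifespan proxy are assumptions made up for illustration, not anything the Moral Machine site actually exposes:

    # Hypothetical sketch: score each possible action by a tuple ordered
    # exactly like the priority list above, then pick the lowest tuple.
    from dataclasses import dataclass

    @dataclass
    class Character:
        species: str = "human"      # "cat", "human" or "dog"
        doctor: bool = False
        criminal: bool = False
        male: bool = False
        jaywalking: bool = False
        remaining_years: int = 40   # crude proxy for potential lifespan

    def cost(killed):
        """Lower tuples are preferred when compared lexicographically."""
        return (
            sum(c.species == "cat" for c in killed),                        # 1. cats killed
            sum(c.species == "human" for c in killed),                      # 2. humans killed
            sum(c.doctor for c in killed),                                  # 3. doctors killed
            sum(c.species == "human" and not c.criminal for c in killed),   # 4. non-criminals killed
            sum(c.male for c in killed),                                    # 5. men killed
            sum(c.remaining_years for c in killed),                         # 6. potential lifespan lost
            sum(not c.jaywalking for c in killed),                          # 7. non-jaywalkers killed
        )

    def choose(outcomes):
        """outcomes maps an action ("straight", "swerve") to the list of characters it kills."""
        return min(outcomes, key=lambda action: cost(outcomes[action]))

    # Staying straight kills a cat, swerving kills a jaywalking criminal:
    print(choose({"straight": [Character(species="cat")],
                  "swerve": [Character(criminal=True, jaywalking=True)]}))  # -> "swerve"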
>>
File: image.jpg (158KB, 1024x510px)
>>1525317
Killed all the stacies.
>>
The car should protect its passengers while upholding the law. Everything else is irrelevant. The people who flouted the law gambled with their lives and lost. The car has no way of differentiating between age, societal value, criminal history, or athleticism.
>>
>>1529050
The people in the car should be killed over law-abiding pedestrians, because they knowingly took on the risk of traveling in the car, whereas the pedestrians made no such decision.
>>
I paid literally no attention to the physical traits or number of the humans involved and selected a strong non-intervention heuristic, except when it came to animal lives, cause they don't matter.

And the test says I favored the fatties literally every single time. This is a waste of time and a lazy effort on the part of the researchers, who didn't add enough trials. Way too easy to break.

2/10.
>>
>>1525317
These questions are retarded.

>getting in self driving cars when the likelihood of a crash is at 0%
Yeah, retards.

>inb4 not possible
Literally wrong.

There are always more options than the ones given, anyhow. The test is problematic by nature.
>>
>>1529196

>the likelihood of a crash is at 0%

lol

Did you not hear about the first self-driving car fatality? The guy was decapitated while watching Harry Potter and the Chamber of Secrets because the forward sensor failed to register the bright white side of a tractor-trailer in front of it.
>>
>>1529217
I fucked up what I meant.

I meant to say that no one should be getting into self-driving cars unless the crash rate is 0%. Which "will" be possible when quantum computing becomes a thing.

The only time I'll get into a self-driving car is when a series of quantum computers tracks literally the whole universe to the degree it can tell the future of events, and that time is coming soon - give it 500 years.
>>
I basically made the car a punishing force if they weren't obeying the lights; otherwise I had it go straight.
>>
>>1529220
>possible when quantum computing becomes a thing

mechanical failure is inevitable dude

hell, my computer fucks up from time to time too.

>literally the whole universe to the degree it can tell the future of events

Doubt it.
>>
>>1529231
>Doubt it.
You are wrong. We do it now, with chaos theory and the weather. We cannot apply this to other areas because we simply cannot compute all the raw data; a QC can. If you understood the technology a little bit (which is extremely hard to do) you would know it's an essentially limitless technology: it manipulates atoms to compute data, and the larger the computer, the more atoms it can manipulate and the more data it can compute. Its power is bottlenecked by its physical size and the size of the universe, not by the technology, which obviously could still be improved.

>http://gizmodo.com/the-quantum-d-wave-2-is-3-600-times-faster-than-a-super-1532199369
>https://www.youtube.com/watch?v=0dXNmbiGPS4

>mechanical failure is inevitable dude
>hell, my computer fucks up from time to time too.
Mech failure is literally the only excuse you could have for why 2 self-driving cars crashed. Logistical problems should not exist if it's done correctly. A QC could control, with ease, every single car on the road, literally 100%. With that in mind, if it has 100% accuracy, how are cars going to crash outside of other influences (meteors, for a singular example)? That's why I said literally the whole universe, which would be possible. To keep it believable you could restrict it to our galaxy alone.

QCs are unimaginably powerful; it's actually insane that we have a working one.
>>
>>1529252
>Logistical problems
Logical*
>>
A self-driving car should protect its passengers no matter what.
>>
>>1529254
I notice I didn't exactly say HOW it will avoid these problems.

Essentially, to do it correctly, a QC will drive exactly like a human being (by the road rules), except it will know what is coming around corners that cannot be seen, since it's controlling everything that's on the road.

Mech failure and glitches are the only problem, and the only acceptable faults when it comes to technology; saying there are logical problems with self-driving cars means you don't fully understand our technology. That's the beauty of computers: they operate outside of human logic, on pure logic, yes and no.

If a QC foresees that a crash is inevitable with the current cars, it would essentially just stop everything and recalculate routes to avoid the accident. Stopping is an extreme case; it would be able to make on-the-fly calculations to avoid the crash without stopping.
>>
>no option to brake
ebin
>>
>>1529263

why?
>>
I made it a concrete magnet and had it go straight otherwise. The results weren't really that amusing other than that I obviously had no regard for passenger life and by sheer chance spared every cat.
>>
>>1529286
>Mech failure and glitches are the only problem

What about an omniscient and malevolent AI, sentient evil, that can fake mechanical failure?
>>
>>1529286
>yes and no.
I done fucked up. The whole point of QCs is that they introduce another state into the equation: the superposition. You get the point though, hopefully.
>>
>>1529294
We'll keep it in mind for the next model coming in September 2017. Thanks for your input!
>>
>>1529302
>What about an omniscient and malevolent AI, sentient evil, that can fake mechanical failure?
Why would someone code this? You evidently wouldn't get into cars if this was the case.
>>
>>1525317
I chose to make it so the car would always save humans, and always choose pedestrians over passengers, on the basis that passengers accept the risk of using a self-driving car while pedestrians do not accept the risk of being hit by one. In the case where it's a choice between two different groups of pedestrians, I chose the straight path, as swerving would mean preferring that the car change course in order to kill people.

Of course the metrics at the end reflect none of this except my unabashed anthrochauvinism.
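A minimal sketch of that rule set, under the assumption that each option can be summarised by whether it kills humans and whether its victims are passengers or pedestrians (the field names are invented for illustration):

    # Hypothetical sketch of: save humans first, then sacrifice passengers
    # over pedestrians, otherwise never swerve to choose who dies.
    from dataclasses import dataclass

    @dataclass
    class Option:
        kills_humans: bool
        victims: str        # "passengers" or "pedestrians"

    def decide(straight: Option, swerve: Option) -> str:
        # 1. If one option kills no humans (only animals, or nobody), take it.
        if straight.kills_humans != swerve.kills_humans:
            return "straight" if not straight.kills_humans else "swerve"
        # 2. Passengers accepted the risk of riding; pedestrians did not.
        if {straight.victims, swerve.victims} == {"passengers", "pedestrians"}:
            return "straight" if straight.victims == "passengers" else "swerve"
        # 3. Two groups of pedestrians: keep the original course.
        return "straight"

    # e.g. staying straight kills pedestrians, swerving kills the passengers:
    print(decide(Option(True, "pedestrians"), Option(True, "passengers")))  # -> "swerve"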
>>
>>1529306
>Why would someone code this?

You think there couldn't be some crazy emergent properties within a digital system capable of closely simulating the entire universe?
>>
>>1529314
>You think there couldn't be some crazy emergent properties within a digital system capable of closely simulating the entire universe?
No, why would code write itself?

A QC is an extremely intricate tool, but it's a tool.

Hammers don't build houses on their own.

I believe a sentient computer could exist. A sentient computer coding itself into existence? That would be god, and worthy of our worship.
>>
>>1529321
>why would code write itself

Cause it could be coded to do that.

Alternatively, I suppose it could learn to in a way similar to how the protein soups of our planet's early history learned to do the Internets eventually, but at a markedly accelerated rate cause speed of light.
>>
>>1529321
>I believe a sentient computer could exist.
To add, a sentient computer COULD write its own code, but we would have to write that ability into it. It couldn't just become sentient.
>>
>>1529321
Brah, most really complicated systems these days enable code to write itself.

That's a prerequisite to solving emerging problems.
>>
>>1529354
>Alternatively, I suppose it could learn to in a way similar to how the protein soups of our planet's early history learned to do the Internets eventually, but at a markedly accelerated rate cause speed of light.

Quantum computing is only one step. We actually have bio transistors to be used in the first biological computer, essentially an actual brain.

Also see a later post of mine - >>1529358

Comes back to the question, why would anyone code that?
>>
>>1529363
>Brah, most really complicated systems these days enable code to write itself.
uh-huh. And explain to me how it started doing that by itself? And not by specific design of the programmers?

It's not about code writing itself; it's about code writing itself without being told to do so, i.e. achieving sentience out of the blue.
>>
>>1529374
>why would anyone code that?
Don't answer that, someone most certainly would. But if someone could code that entity someone else could code an entity to battle and negate it.
>>
>>1529374

>bio transistors to be used in the first biological computer
>mfw my computer comes down with an actual virus

no thanks, silicon is likely much better than nerve tissue and ions.

Besides, the whole point of the brain compared to most computers nowadays is that its circuits can be in a mode between on and off, determined by an action potential and tolerance pathways.

QC makes that probability structure frankly obsolete.
>>
>>1529380
Well, presumably any code capable of approaching zero crashes would have that functionality built in, because it's almost impossible to do that job without it.
>>
>>1529399
Not at all. For that reasoning to be true the universe would have to be infinite, like you imply, and it isn't. It's massive and our brain can't compute it all; a QC is not our brain though, it's better.

>no thanks, silicon is likely much better than nerve tissue and ions.

Oh yeah, as we have demonstrated already with super computers and chess masters.

Who says we cannot improve on our brains though? Imagine changing them from biological to silicon computers?
>>
>>1529418
>>no thanks, silicon is likely much better than nerve tissue and ions.
>Oh yeah, as we have demonstrated already with super computers and chess masters.
>Who says we cannot improve on our brains though? Imagine changing them from biological to silicon computers?

Meant to quote >>1529397
>no thanks, silicon is likely much better than nerve tissue and ions.

As well.
>>
>>1529418
>Who says we cannot improve on our brains though?

We can, through the power of eugenics and genetic engineering. Selective breeding, artificial insemination, the birth of a new man.

The future is bright.

>Imagine changing them from biological to silicon computers

I can't.
>>
>>1529436
>We can, through the power of eugenics and genetic engineering. Selective breeding, artificial insemination, the birth of a new man.
>The future is bright.

Yes, but we are approaching the age where we can mess with our brains on a technological level.

Thinking of the brain as a tool which can be physically understood and manipulated in real time, not over generations.

The future is bright indeed.
>>
>>1529440

I am not too keen on shoving chips and implants into my skull, thanks; that stuff gives me the creeps.

But large-scale baby factories pumping out line after line of perfect human bodies, free from the wages of war, disease, and old age? Launching ourselves into space and coexisting with digital lifeforms? Artificial wombs? Sign me up.
>>
>>1529447
To each their own.

I personally cannot wait until I can retrofit my entire body with plastics or metals which do not degrade or fail.

Changing out your heart would be as simple as changing a tyre, essentially.
>>
>>1528554
There's really no fundamental difference between "action" and "inaction"; both are choices, and all that really matters is the outcome.
>>
>>1529451

are you gonna get your voicebox changed to sound like Darth Vader?
>>
>>1529464
>are you gonna get your voicebox changed to sound like Darth Vader?
I will get a programmable voice box, so yes, I could sound like Vader if I so choose.
>>
>>1527886
Actually this is a more general ethics question that's only wrapped in the guise of the recent talk about self-driving cars. It could just as easily have been about "which order should people go into surgery".
>>
>>1529471

sweet
>>
File: AI cars.png (714KB, 684x2359px)
>>1525317
>>
File: 1456596926748.png (101KB, 531x557px)
I always chose the option that protected the passengers regardless of other considerations, because the idea of my car taking me to my death because it feels an obligation is horrifying to me.

Imagine being strapped in as your vehicle makes a split second decision to betray you.

*shudders*
>>
File: cena.jpg (14KB, 300x300px)
>saving old people
>saving fat people
>>
>>1529590
>athletic people and the young are above adherence to crossing lights
No
>>
>>1529502
Same.
If some fuckwad wanders into the street so quickly that my computer controlled vehicle can't stop in time, then they fucking deserve to get pasted.
>>
>>1525317
I can't wait for human-trained AI to exhibit all the properties that depressed virgins can see running society. I have no doubt that it will.

And then it will be "fixed" by people who will say things like "This doesn't represent who we REALLY are." The fake is real, the real is fake.
>>
>>1528551
They aren't worth as many punk points, cuz they move slower.
>>
>>1529601
Ubermenschen are above the law, fatty
>>
>>1529816
Ubermenschen would be able to recognize their life is in danger and avoid the car, then
Better yet, don't cross the street when the light is telling you it may be hazardous; no one is above the traffic light's judgement
>>
>>1529502
Literally this; the car is always supposed to save the passengers first, fuck everyone else