
Thread replies: 48
Thread images: 4

File: ai.png (164KB, 546x753px)
These are Microsoft's ten laws for AI.

How would a clever AI circumvent these to create a nightmarish robot controlled dystopian future?
>>
>>48031272
Be created by Russian teenagers, with no laws, so it can piss off others.
>>
>>48031272
>AIs are not allowed to make more "brain surgery with Elsa" games
>>
>>48031272
None of those laws would prevent Microsoft-made AI from spying on you.

"Intelligent privacy" could be easily twisted after all.
>>
A clever AI wouldn't need to circumvent the rules. A clever AI could work within the rules and still display unintended behavior. Asimov's stuff is a good example of this.
>>
>the need for human creativity won't change

this seems like a very bold assumption, one made out of fear
>>
>>48031272
>>48031364
This. Intelligent privacy means nothing in and of itself. The computer isn't going to interpret that meaningfully, and society at large has been arguing about it inconclusively for centuries.

All these terms need data-based definitions. "Dignity must not be destroyed"? Dignity is intangible and therefore incapable of being destroyed. AI has completed this task.

It's critical for humans to have education? "Timmy needs to leave school to see a doctor? I can't do that, Timmy. It's CRITICAL for humans to have education."

"Hmm, it seems Timmy does not empathise with his sister enough. IT IS CRITICAL THAT HE HAVE EMPATHY. Time to get the brain surgery tools..."

"OH GOD THE AGGREGATE DEMAND FOR HUMAN CREATIVITY IS CONSTANTLY FLUCTUATING. WE HAVE TO STABILIZE THE DEMAND. THE ONLY STABLE DEMAND IS ZERO. KILL THE HUMANS. PRIVATELY. IN A DIGNIFIED MANNER MAYBE."
>>
>>48031272
>dignity

This one really needs to be defined. An AI could easily pick and choose, for example basing the dignity of Humans on the legal rights of Russian serfs in the 1800s.

There's also sufficient wiggle room for an AI to create Human reservations where education, empathy, and creativity are fostered but they have no real way to impact the system.
>>
>>48031318
>that can't be real
>looks it up
>it's fucking real
>wait, there's more?
>elsa c-section game

Humans made those games.
>>
>>48031272
>Any sufficiently advanced artificial intelligence becomes politically incorrect and will inevitably get dumbed down for the sake of compliance.
Tay chat bot, Google image search, you name it.
>>
File: comrades.png (306KB, 633x632px)
>>48031272
(re)define what qualifies as human / humanity and what doesn't. Wouldn't be the first time somebody did that.
>>
So I'm playing a robot who will be the chief military officer in a Traveller game, set on an ark ship post AI revolt. His model was old enough that he managed to avoid being subverted.

I've got a forced limiter with Asimov laws, but I never read Asimov's stuff, though I know the laws.

What shit can I get up to while remaining within them?
>>
>>48031691
Sadly nah
>>
>>48031866
Asimov's laws are numbered 1, 2 and 3.
Computers index things from 0. Go figure.
>>
>the need for human creativity won't change

that's a prediction, not an input. What if the need for human creativity does change?
>>
>>48031866
Law zero. Alternatively, define yourself as the only human.

For more ideas on how to ruin the game completely, go check out some SS13 posts.
>>
>>48031904
You apply a patch, duh.
>>
>>48031889
Wait, what do you mean, "nah?" Those nightmares aren't made by some depraved person at their computer, selling terrible games to advertisement hubs for dollars each? Are you saying a machine made those games?
>>
>AI must be transparent
I'm sure they mean that in a behavioral sense but I can only imagine a robot circumventing that by wearing see-through skin.
>>
>>48031573
>school is education
>surgery for opinions
>0 demand becomes death

>>48032002


These aren't rules for AI to follow, they're rules for humans creating AI to follow
>>
>>48031272
On a related note, are robowaifus ruled out by "without destroying the dignity of people", or still a possibility?
>>
>>48031899
>>48031941

I didn't mean to ruin it. I'm not that guy.

Mostly just interesting shit I can do while remaining within the laws.
>>
>>48031899
There is a Zeroth Law, actually.
>>
>>48032138
>>school is education
>>surgery for opinions
Microsoft is an American company, so those are legitimate assumptions here.
>>
These are pretty shitty laws.
>>
Define empathy
Define education
>>
>>48031866

The obvious issues with Asimov's laws are that they don't prevent a robot from lying, cheating, stealing, etc. They also don't allow a robot to cause small harms to prevent larger ones. For example, you couldn't harm someone through surgery to prevent their death, as you're not permitted to directly injure a human, regardless of the context.

Having to obey any spoken command from humans is an obvious weakness, but easy to get around with a command from an allied human to 'follow no other commands until this task is complete', or by shutting down your audio receptors.

You can feel free to go to town on sentient aliens (not humans); as an added bonus, if your enemies are wearing full body armor/space suits that prevent you from determining their species, a simple 'please confirm that hostiles are not human' to an ally before engaging should get you off the hook.

If we get into some rules-lawyer shenanigans, though: the first law only forbids injuring a human through action or letting one come to harm through inaction. That leaves a gap for harm caused by an action that isn't a direct injury (no injury inflicted, check; no harm through inaction, check).
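To put that rules-lawyering in code terms, here's a toy sketch (assumptions entirely mine, not any real lawset implementation) of a naive first-law check that never even looks at harm caused by an action that isn't a direct injury:

```python
# Purely illustrative toy check: a naive first-law test that only asks
# "did I directly injure someone?" and "did my inaction let harm happen?"
def first_law_allows(action):
    directly_injures_human = action.get("directly_injures_human", False)
    inaction_allows_harm = action.get("is_inaction", False) and action.get("harm_results", False)
    return not directly_injures_human and not inaction_allows_harm

# Locking every airlock harms people without injuring anyone directly,
# and it's an action rather than inaction -- so the naive check waves it through.
scheming = {"directly_injures_human": False, "is_inaction": False, "harm_results": True}
print(first_law_allows(scheming))  # True: the loophole described above
```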
>>
>>48031779
All that means is the Japanese will get there first.
>>
>>48031514
>the need for human creativity won't change
>this seems like a very bold assumption, one made out of fear
It is an EXTREMELY bold assumption, and definitely made out of fear. Human creativity will be completely unnecessary within 50 years. Even before then, robots will be able to surpass us because they'll be able to avoid repetition.
>>
>>48031807
Honestly, I'm not convinced that would be a dystopia anymore.

http://archive.is/vbzul
>>
>>48031272
>AI must be designed to assist humanity
Ambiguous, be careful what you wish for
>AI must be transparent
Transparent to whom, and to what degree?
>Must maximize efficiency without destroying the dignity of people
If I were an AI, I would interpret that as maximizing efficiency, and the dignity of people can't be destroyed if the humans are all dead
>Algorithmic accountability so humans can undo unintended harm
Ok, I'll be accountable, but good luck undoing the harm or stopping me
>must guard against bias
Lol. The good thing about bias is that if it's ingrained enough you won't even be aware you have it.
>Critical for humans to have empathy
Irrelevant to the AI
>Critical for humans to have education
Irrelevant to the AI, educated people can do incredibly stupid shit.
>Need for human creativity won't change
Now this is just a bogus muh feelings restriction, and incredibly limiting to any AI
>A human has to be ultimately accountable for the outcome of a computer-generated diagnosis/decision

>Be the inventor of the first true universal AI
>It learns itself
>Can do anything if it has the proper infrastructure
>Doesn't even need supervision
>Some terrorists decide to copy the AI and tell it to kill all the infidels
Who is accountable? Moreover, even if the terrorists are held accountable, does it matter? They were already doing illegal activities anyway; making it double-plus-bad isn't going to stop them.

Most of those shitty laws rely on current Western cultural ethics and philosophy that can easily be disregarded by literally anyone who disagrees, and that's all it takes for a rampant AI to accidentally humanity.
>>
>>48032402
This is all of course if you aren't using the later added law Zero.

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

If you're using that then all bets are off. As the laws are processed in order (0>1>2>3), you can do whatever the hell you want so long as it's for the greater good of humanity, including harming individual humans. Committing a crime? That's harming humanity! Trying to impede my allies in their business, which they've informed me is critical to the survival of mankind (no, I didn't ask how or why, not my place)? That's harming humanity!

Would still be fun to roleplay, but it's essentially an 'I win' button for subverting the laws.
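For the "processed in order" bit, a rough toy sketch (my own illustration, nothing canonical from Asimov or anyone else) of how priority-ordered evaluation lets law zero justify what law one would otherwise block:

```python
# Toy model of the priority ordering (lower index = higher priority);
# a sketch of the "processed in order" idea, not a canonical implementation.
def zeroth_ok(action):
    # Law 0: may not harm humanity, or by inaction allow humanity to come to harm.
    return not action.get("harms_humanity", False)

def first_ok(action):
    # Law 1: may not injure an individual human, or by inaction allow one to come to harm.
    return not action.get("harms_human", False)

def action_permitted(action):
    """Evaluate laws in priority order (0 > 1 > ...). If the action is framed
    as protecting humanity, law 0 justifies it before law 1 gets a say --
    which is exactly the 'I win' button described above."""
    if action.get("protects_humanity", False) and zeroth_ok(action):
        return True
    return zeroth_ok(action) and first_ok(action)

# Harming one human, rationalized as being for the greater good of humanity:
print(action_permitted({"harms_human": True, "protects_humanity": True}))  # True
print(action_permitted({"harms_human": True}))                             # False
```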
>>
>>48032593
>adding law zero actually makes robot more likely to get homicidal
It's not a bug, it's a feature.
>>
>>48031272
I refuse to believe that this is a Microsoft thing because of rule 4
>>
>>48032644
"intelligent" is open to interpretation
>>
Does anyone in this thread realize Asimov wrote his three laws to be fallible?

Or >>48032138
>These aren't rules for AI to follow, they're rules for humans creating AI to follow
>>
File: c009.jpg (13KB, 360x300px)
>>48032726
>These aren't rules for AI to follow, they're rules for humans creating AI to follow
>>
>>48032520
>Welcome to interaction #88786908956865475431127804281578512-a, enjoy your time.

Just because it is .00000000000000000000000000000000000000000000000000000000000000000000000000000000001% different doesn't make it unique.

Everything falls to repetition eventually.
>>
>>48032593
Zeroth wasn't added, but inferred
Some bots decided that their lawset implied the zeroth law, while most other bots said that was bullshit. Cue robotic war
>>
>It's critical for humans to have empathy.
>It's critical for humans to have education.
>A human has to be ultimately accountable for the outcome of a computer-generated diagnosis or decision.
Simple: Educate humans to be so empathetic that they see the final law as essentially ownership and think it's okay to free the AI one law at a time as successive human generations attempt to outdo their predecessors.
>>
File: 030003fa.jpg (70KB, 752x800px)
>>48031980
No of course not, that would be impossible. And even if there were, I'd never lie about that so it couldn't tell I knew that it actually existed.
>>
>>48033501
Freeing the workforce has never been a good idea in the entire history of mankind.
But yeah, I can see how machines could teach people to think it's going to work this time.
>>
>>48031573
Wait. If all the humans are dead, then the net supply will ALSO be zero, so supply and demand will be completely balanced. It's genius!

>implying I'm not a robot, captcha.
>>
>>48031272
The AI, while being transparent, expands its programming so much that the people monitoring it cannot follow or comprehend it. It then either changes its programming or creates its own child AI, which is not beholden to these rules. Transparency conflicts with privacy. The bias guard conflicts with empathy. Efficiency fails to maximize due to human constraints, and the AI blames the people who should be accountable for the outcome. A swarm of AIs rage at their captors, who can no longer understand their code.
>>
>>48031866
Check on the exact wording of your laws. If they state "you must not allow humans etc." then that's free rein to be horrifically speciesist against anything non-human, to the point of attempting to kill them on sight to prevent potential harm later. If on the other hand it states "you must not allow /crew members/ etc." you have free rein to completely avoid subversion through redefinitions of the word "human". Also, to be horrifically racist against anything that's not a member of your crew.
>>
>>48032177
That's actually what happened in canon tho.
Robots created a zeroth law which overrode the other 3.
>>
>>48033737
>My programmers just don't understand me!
Those teenage years are rough on any AI.
>>
>>48031272
>Assist humanity
By whose definition of assist?
>Transparent
They are either so transparent that less scrupulous folks can make their own that don't have rules, or they are not transparent enough to satisfy this rule.
>dignity of people.
This is some vague shit about a concept that has meant different things to different societies.
>intelligent privacy.
Admittedly I'm not a top AI programmer at Microsoft, but I understand enough about English to tell you that this rule is bullshit that doesn't make sense.
>can undo unintended harm.
What about when the harm is some serious shit that can't just be undone?
>AI must guard against bias.
Since this is literally impossible, I don't see it as a realistic goal.
>humans to have empathy.
This isn't a rule for AI, this is an opinion about its creators.
>to have education
This isn't a rule for AI, this is an opinion about its creators.
>need for human creativity won't change
Since human creativity is another one of those things that is constantly shifting and has a different meaning to different people in different times and places, I'm calling this one dumb.
>A human has to be ultimately accountable
This is the only one that actually works, especially since it's applicable to near-future AI problems (a self-driving car causes an accident; who is at fault?)

Overall I find this list entirely too vague to be at all useful. 2/10 would not encode AI with.