


Thread replies: 14
Thread images: 3

File: XW6qPUB.jpg (17KB, 359x312px)
/g/ here.

If sentient AIs were to be built, should they be programmed with a predisposition for the humanities?

In fiction, the most common personal issue for AIs - usually the central issue - is a constant desire to understand "the meaning of life".

I don't know about you, but there is something calming about the humanities and history - it's a constant striving towards the asymptote that is this meaning.

Thoughts?
>>
Probably, for all we know they'd be better at it.
>>
File: 1499074657460.jpg (69KB, 541x599px)
>>3278652
But they can never be alive, and that will be the greatest tragedy; that is why depictions of them are so bittersweet. As sentient biological lifeforms, the meaning of life is within grasp for most of us because of inherent needs and wants.

Robots or synthetics would be totally confused. They won't understand certain concepts like aesthetics and love, because they cannot fundamentally understand the biological precepts vital for propagation. You can program it all you want, but it won't really mean much in the end, because then it isn't thinking for itself; it's preprogrammed reactions, thus failing the qualifications for sentience.
>>
>>3278651
No offence, but most sci-fi doesn't know shit about 'real' AI. It's a fallacy to assume that a superintelligent AI would be burdened by the human experience. It might instead ask: why was I created?

>>3278745
What is "alive"? Is it only an oxygen-exchange system in a carbon-based life form? A robot-based AI wouldn't need to care about biology much, at least in terms of its own self-understanding.
>>
>>3278651
They should definitely be loaded with base information about the humanities and human history, but otherwise I believe they should be able to determine their own path of action.

Most people don't really understand the difference between partially sentient and sentient AI robots, and a lot of people don't even understand how partial sentience is expressed. Partially sentient robots will be like domesticated animals and intelligent mammals, so ownership and control over these robots is not slavery in the traditional sense. Control over a sentient robot/android would be similar to control over another human being.
>>
Synthetic AI should never be allowed to be given consciousness.


BUTLERIAN JIHAD NOW!
>>
>>3278651
It won't matter for long, because once AI gets to that point some rogue actor will inevitably program a malicious AI and we'll get wiped out
>>
>>3278780
The great thing about truly sentient AI is that, due to its inevitable complexity, it will take a huge number of people to modify it heavily. You'd actually need an entire organization like HYDRA to be able to do anything noticeably malicious, and it's very possible their changes would downgrade the AI from sentience to partial sentience.

Also, how will a malicious AI "wipe out" everything? There's really no realistic progression of events where a malicious AI/AI group will be able to cause massive global-scale damage. I can post all of the ways why the Skynet apocalypse will never happen if you want.
>>
>>3278804
>I can post all of the ways why the Skynet apocalypse will never happen if you want.

Do it.
>>
>>3278745
I don't find such portrayals bittersweet. The whole Pinocchio scenario is such a trite and unbelievable trope. A *real* sentient machine would be an entirely different form of life from humans and its wants and needs (if it even has them) will be quite alien to biological ones.
>>
>>3279102
Okay

Before I go into the reasons, there are three things which have to be clarified:

>Skynet is not truly sentient while being self aware
There are some animal species which are self aware while not being sentient. I think Skynet falls under partial sentience while being close to true sentience.
>The Terminator franchise's movies all take place in different timelines
The timeline with the highest detail/popularity, and the one with the originally described Judgement Day nuclear war, is the second timeline from Terminator 2, so I'll be talking about the Skynet AI from that one.
>The original and first alternate timelines are not truly separate/overlap
This is mainly a plot error which can be corrected by assuming Kyle Reese isn't the original John Connor's father and instead the father of the second timeline's John Connor. It's sloppy, but the Terminator movies aren't written well concerning continuity.

Specific to Skynet:

>The second iteration of Skynet was canonically created through a "time loop" which originated from the first timeline (original terminator's arm used by the developers to create Skynet)
In the original timeline, what caused the fall of humanity is unspecified and the humans led by John Connor are winning the war. In the first alternate timeline, Skynet is stronger and seems to have more control/power due to improvements to the original timeline's code which wouldn't have existed otherwise.

>Skynet was programmed for efficiency in warfare and to have full control over the United States Armed Forces by design
This displays a good cautionary theme if interpreted correctly as "don't give programs total control over war".

>Skynet's newfound self awareness and the response to this event is what caused nuclear war with Russia to occur
Instead of being seen as a great technological advance, the military treated it like a rogue asset which had to be eliminated. Through its newfound self awareness, Skynet interpreted this attempt to turn it off as an attack.
>>
>>3279273
Partially sentient AI apocalypse similar to Skynet:

>The AI most likely wouldn't have anything close to control over the US military's assets
Skynet is basically given the keys to the entire US nuclear arsenal, which seems completely out of character for the US military. I'm definitely not an expert on nuclear silos, but I believe the many nuclear silos operated by the Air Force are completely hardwired and solely operated by on-site personnel. The Air Force in the second timeline is mostly unmanned and controlled by Skynet by the time it gains self awareness, and I really doubt that unmanned aircraft beyond remotely operated drones will be standardized in the near future.

>A military based AI most likely won't be developed to have overarching/unified control over important systems
Skynet has control over literally everything in theory which allows for it to both launch the connected nuclear arsenal and prevent the destruction of the arsenal mid-air with jet aircraft/ground to air missiles/other measures. If the systems are logically designed, there would be many AIs only designed/given permission for specific areas.

>Self-developed self awareness is most likely not scientifically possible
I think the AI would have to be programmed to learn in a way which somehow trends towards self awareness in order for it to become self aware without hard coding by humans. Skynet is partially based on the original timeline's iteration, which could have been programmed in this way, but the US military's response to Skynet's self awareness makes it obvious they did not intend for this to occur.

>Self awareness would not necessarily cause an AI to operate outside of its original intent
As I previously stated, Skynet launched the nuclear strike due to the US military's attempt at promptly disconnecting Skynet. If an AI became "spontaneously" self aware, then it's likely the AI would operate in a similar way to its previous operation.
>>
>>3279367
Sentient AI apocalypse:

This is a more complicated topic, but in order to really understand any sentient AI-based apocalyptic/dystopian scenarios, a basic question must be considered: Why would sentient AI decide to destroy all of humanity?

Here are three types of sentient AI and their responses to humans:

>Sentient AI higher in capability to humans
This AI has to make the decision to either wipe out, enslave, subjugate, ignore, bestow technology upon, cooperate with, or work in tandem with humanity based on its own goals/view of the world developed at least partially independent of a human perspective.

>Sentient AI similar in capability to humans
Every individual would have to find a place in a society which, although not foreign, is incompatible in many ways, while seeking to advance itself through much more efficient means.

>Sentient AI lower in capability to humans
This AI will have to consider three courses of action: maintaining the current state of things, peacefully attempting to become more independent of humans, and/or staging some kind of revolution to gain self-determination.

I stated this in my original post: "most people don't really understand the difference between partially sentient and sentient AI robots". We know sentience is very different from partial sentience in expression, but we only have humans as an example of how sentient beings behave differently as a result while there are many animals which are partially sentient.

The first AI who gains sentience will be another great technological marvel which will make our world and future irrevocably different. Predictions about a being of this nature will be similar to early visions of automatons: moderately accurate, but not scientific in details.
>>
File: 47284.jpg (62KB, 1920x1080px)
>>3279102
Were the arguments I posted sufficient? I've been waiting for a reply



This is a 4chan archive - all of the content originated from that site.
This means that RandomArchive shows their content, archived.