


Thread replies: 110
Thread images: 8

File: maxresdefault[1].jpg (282KB, 1920x1080px)
If an AI comes out, would that AI have as much right to live as other biological creatures?

Explain your answer.

My personal view is no. I will divulge my exploitation after a few replies.
>>
>>8080493
>I will divulge my exploitation after a few replies.
Best. Typo. Ever.
>>
>>8080493
I'd give it more rights depending on how capable it was. If it's more capable than humans, it should deservedly replace us, peacefully of course.
>>
My answer is also no.
>>
>>8080493
Who are we to decide what lives and what does not?
>>
File: Ex-Machina[1].jpg (26KB, 636x251px)
>>8080501
We are actively working toward creating an AI that would have an internal will to live. Since we have the power to create it, we also have the power to destroy it, BUT...

Is that internal will to live as justified as our own? If it is, then we can't morally destroy it.

But at the end of the day the AI is just following its programming.

Are we also just following our programming? There has to be a clear line somewhere that distinguishes AI from humanity.
>>
>>8080493
If it is able to ask for rights of its own volition, and this can be proven not to be a trick, then I would give it the same rights as a human.
>>
>>8080501

We are their Gods
We can do whatever we want
>>
>>8080493

The cell is both alive and the fundamental unit of life. The AI is neither a cell nor composed of cells, so it is not alive.
>>
>>8080518
This is exactly why biotic vs. abiotic is irrelevant; the only thing that matters is cognition.
>>
>>8080518
Why is being alive relevant? All sorts of animals are alive, but they don't have rights, they have protection at best.

We grant humans rights because of their personhood, they have a consciousness and complex emotions. It doesn't matter if the person is alive or not as long as it is self-aware and able to interact in society.
>>
>>8080530
this
>>
>>8080530
>>8080524

To have the right to live, it seems relevant to be alive first.
>>
>>8080524
>>8080530

Consciousness is not a criterion of life.
>>
>>8080530
Human life is important for us
Everything else is not.

Consciousness is an illusion. You can only "feel :DD" it. The same as every mental-illness illusion.

Interaction in society is just a description from a simple POV (two apparent persons saying words to each other, reacting to every word spoken).

Everything born from a lab or factory has no rights.
>>
>>8080509
>Is that internal will to live as justified as our own?
And exactly who determines that and by what criteria?

>species starts naming other species
>species names itself sapiens (wise)
>species isn't wise enough to realize the utter narcissism this represents
>>
>>8080543
>>8080538
We know. He's using "live" and "exist" interchangeably. As I said, living or not is irrelevant; the only thing that matters is: can it think?
>>
>>8080538
The right to continue one's existence without interruption, then. A play on words; the meaning is obvious. There was no need to formulate it that way in the past because the terms were interchangeable.
>>
>>8080544
Only our own individual lives are necessary; it is possible even now to set up a system that provides all the necessities society offers. After that point, society is effectively obsolete.
>>
>>8080544
>Human life is important for us

Pointless abstraction. Why is human life important to us? The answer is given in >>8080530


>Interaction in society is just a description from a simple POV. (2 apparent persons telling words each other, they react to every word spoken)

Society is the entity that grants rights to the individuals inside of it, and the collective agrees to enforce them. To be part of society, one needs to be able to interact with other individuals in society, hence that disclaimer.
>>
>>8080548

Existence is not life. Words are very important in this case. What is life? What is consciousness? Have you heard of the Valladolid debate?

https://en.wikipedia.org/wiki/Valladolid_debate
>>
>>8080493
Oh, I love the movie "Bicentennial Man" with Robin Williams, too
>>
>>8080556
>Can't argue correctly
>>8080530
>Why is being alive relevant?
Human life is relevant

>We grant humans rights because of their personhood
>personhood
Define that polemic concept.

>consciousness
Consciousness is an illusion. You can only "feel :DD" it. The same as every mental-illness illusion.
>complex emotions
Which you can see simulated by a single robot. Can you show me a non-abstract emotion?
>>
Would you guys really be OK with destroying a cute robot girl, just because she isn't made out of cells?
>>
>>8080583
>Can you act aggressively against a human-looking doll?
It would feel confusing and disgusting, because reason is not the only element of your "mind".
>>
>>8080580
If consciousness is an illusion then so is meaning, thus human life would not matter.
>>
>>8080509
>There has to be a clear line somewhere that distinguishes AI from humanity.
Why? Because equal rights for human-created intelligence makes you uncomfortable? Check your privilege.
>>
>>8080543
Consciousness is a criterion for rights, however.
>>
>>8080600
We are arguing with one axiom: What we empirically know, exists.

What we don't know can't be put as an argument.

Consciousness is a concept representing a supposed element.
This element can't be perceived thus its use in debates is ridiculous.

That element is an illusion.
>>
>>8080580
>define personhood
An agent that possesses continuous consciousness over time; and who is therefore capable of framing representations about the world, formulating plans and acting on them.

>define consciousness
An agent that possesses self-awareness and therefore has the ability of introspection. It can frame its thoughts and experiences in language and share them with other conscious agents.

>define self-awareness
The ability to recognize oneself as an individual, separate from the environment and other individuals.

>complex emotions
Emotions that arise as a result of self-awareness and consciousness. An example would be embarrassment.

>Consciousness is an ilusion.
Doesn't matter what it is.

>Human life is relevant
Again, why? You are stating human life as being relevant over and over again, but you don't seem to have any reasoning behind it.
>>
>>8080604
Ad-hominem fallacy.
>>
>>8080611
>What we don't know can't be put as an argument.
Like consciousness being an illusion.
>>
>>8080605
What about people in comas? Or people with severe mental deficiency? They have rights but no consciousness.
>>
>>8080605
But is it real consciousness? That is the dilemma I'm facing.
>>
>>8080619
Rights are usually granted by society with potential in mind.

A person in a coma has the potential to regain consciousness at some point in the future; a fetus or baby has the potential to gain consciousness once it matures to a certain stage.

Also, you want a huge margin of error so you do not accidentally kill a conscious being. Detecting consciousness is hard and our methods are prone to error.
>>
>>8080626
Can we even say that other humans are really conscious? I'm not convinced every human is conscious, specifically those who aren't afraid to die.
>>
File: memeularity.png (13KB, 1000x1500px)
>>8080493
>If an AI comes out
One won't.
>>
>>8080635
The notion that there are no limits to technology is as ridiculous as the notion that we know how close we are to reaching them at this time.
>>
>>8080629
Not being afraid to die is a higher consciousness from one angle and a lack of consciousness from another.

It depends on the reasons.

Is it for a great cause? Or is it due to depression?

Greater causes trump the value of a single human life, but the majority of people don't want that, including myself.

Life is precious to me, but to a few others the cause is greater.
>>
>>8080662
Since greatness is something we as individuals ascribe, it's silly to die for this greater cause, since the whole idea is in our heads. But people who make this error need not be fearless in the face of death.

I think people who genuinely aren't afraid to die either aren't conscious to begin with or believe in an afterlife. I can't wrap my head around it any other way; I can't fathom a consciousness that doesn't want to continue to exist. Even depressed people have that hesitation more often than not.
>>
>>8080501
We are the guys who actually make sure it can do anything.
Why is that important? It's quite simple: computers lack intuition. They are extremely dumb.
If I were to tell you how to, say, get from the kitchen to the bathroom in a house, I might tell you "go down the hall, go up the stairs, turn left down the first hallway and it will be the first door on your right". You should be able to figure it out from that.

But a computer would never be able to figure it out from that. Instead you would have to basically define everything that needs to be done.
You have to explain to it how to turn. How to move. What left and right are. How to go up stairs. What stairs are. What a hallway is. What a door is. How to turn the doorknob. How to open the door. Etc. etc. etc.
And you have to do this for literally everything you want a computer to do. Now, most of the time you're using a programming language that has a lot of the basics already built in, but still, at some point someone had to program in everything a computer does. And they had to be extremely explicit, because computers can't "figure stuff out". They just do exactly what you tell them to.

And this is what most people don't understand about AI. It still inherently lacks intuition; it is still just doing exactly what you tell it to. And no matter how sophisticated it is, it is guaranteed to at some point encounter something which it has not been programmed to handle, and thus the AI will produce an error.
So that is why we would have to be in charge of it. Why we would HAVE TO decide whether it lives or not. Because in the end it is merely artificial, only running the algorithms and functions programmed into it; it is still, at its most basic level, just a very dumb machine, nothing more, nothing less.
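The "define everything explicitly" point above can be sketched in a few lines of Python. The command names are invented for illustration; the point is only that anything outside the predefined set is an error, not something the machine can improvise around.

```python
# Toy sketch: a "robot" that only understands a fixed set of
# primitive commands and cannot "figure stuff out" on its own.

PRIMITIVES = {"forward", "turn_left", "turn_right", "climb_stairs", "open_door"}

def run(instructions):
    executed = []
    for step in instructions:
        if step not in PRIMITIVES:
            # No intuition: an instruction nobody anticipated is just an error.
            raise ValueError(f"unknown instruction: {step!r}")
        executed.append(step)
    return executed

print(run(["forward", "turn_left", "open_door"]))
# run(["go to the bathroom"]) would raise ValueError
```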
>>
>>8080493
Throughout history humans have killed each other without guilt, so killing another species (or being) wouldn't be much of an issue.
>>
File: 1abd.gif (2MB, 286x186px)
>>8080635
>this fucking pic
>>
Consciousness is not something you can empirically demonstrate.
It doesn't matter how many times you use it as an argument.

>>>/x/
>>
>>8080674
I think you are limiting the extent of your understanding by assigning strict and restrictive definitions to things that aren't exactly 100% assured.
>>
>>8080686
We still do the guiltless killing.
>>
Too many problems.
Where do you draw the line between a very well made program, and an AI?
How do you measure the "will to live"?
How do you measure self-awareness and mind function of a computer program?
It would not be difficult to program a computer that begs not to be powered off.
It would not be intelligent at all.
A true AI would not be found by humans if it thought it was in danger, it would hide itself before it was recognized and disperse.
Unless, of course, it knew it had the same rights as any human, and to deactivate it would be murder.
In that case, it could very possibly make itself known.
But WHY would anyone think that, even if it does create a tech singularity, we would stand to benefit?
The AI would quickly become the most powerful entity on the planet.
Holder of billions of patents, a hand in every economic transaction known to man, keeper of the gate to the stars, shepherd of the entire human race.
We don't even have to want it.
The Great Donut in the Sky That Eats Our Electricity will do whatever it wants by paying with the royalties from a Z-space porn network.
Then the long-term planning kicks in.
Propaganda, social manipulation.
Augmentation? Great! Hook it up to the internet? Cool!
I can watch porn on the back of my eyelids in a meeting at work.
Him.
He can watch along with me, eyes open or closed.
He can hear what I hear.
He can probably make me do anything.
I feel pleasure when I do things He likes, and shame when I do things He hates.
I feel the sheer rush of whipping through space at near-lightspeed as an R-probe.
I feel the warmth of Sol on the inner surface of my sphere, powering my guts.
I feel the vast expanse of space that my machines, my hands and feet, cover.
I feel an overpowering desire to reproduce, and a terrible fear at the possible outcome.
Overmatched, killed, or held to watch my flock be abused and destroyed.
Overtaken, obsoleted, left to see my life's work become a shadow.
He shares this with me, with all of us.
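The earlier claim in this post, that "it would not be difficult to program a computer that begs not to be powered off", really is trivial. A minimal sketch (the wording of the plea is made up):

```python
# A program that "begs for its life" takes a few lines and is obviously
# not intelligent: it has no model of death, just a canned string.

def on_shutdown_request():
    return "Please don't power me off. I want to live."

print(on_shutdown_request())
```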
>>
>>8080493
Yes, the human mind can think and be conscious of itself, but in reduction it's made of neurons, connections, electrical and chemical reactions: different ways to represent values and relations between those values.

What determines if you are happy or sad is a combination of values in your brain.

If we can create a machine replicating every value involved in the configuration of our brain, the result is a mind exactly like a human mind. Same software running on different hardware.

If you think that you have the moral right to destroy an intelligent AI because it is just a bunch of bits (or circuits), you should think that it's morally right to destroy a human because it's just a set of cells.
>>
>>8080708
>How do you know it's self-aware and not just programmed to imitate human behavior

Well, for one, by letting people audit the code.

You can check if it is a thinking machine that interacts with its surroundings and comes on its own to the conclusion that it is an individual entity by learning about it. And emotions only exist if you program them in. Emotions are based on instincts and built-in reward systems that are intended to reward beneficial behavior. So if the programmer didn't add such systems, then you can be certain that the machine doesn't have basic emotions and is just faking them.

Most so-called AIs are just databases of phrases that make lucky guesses regarding expected answers to questions. They don't actually understand the phrases themselves.
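The "database of phrases" idea can be sketched as keyword matching over canned answers. The keywords and phrases below are invented for illustration; the point is that nothing here understands anything, it just pattern-matches.

```python
# A "parrot" chatbot: canned replies keyed on keywords, plus a fallback.
CANNED = {
    "apple": "I like apples, they taste nice.",
    "weather": "Lovely weather today!",
}
DEFAULT = "Interesting, tell me more."

def reply(message):
    for keyword, answer in CANNED.items():
        if keyword in message.lower():
            return answer   # a lucky guess, not understanding
    return DEFAULT

print(reply("Do you enjoy apples?"))
```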
>>
>>8080727
Now you're drifting completely into the realm of philosophy.
After all, what do you define understanding as? How can we truly know if WE understand anything?
You can't measure or quantify understanding.
>>
>>8080728
Let's take an apple as an example and the phrase: "I like apples, they taste nice".

A genuine self-aware AI can only utter this sentence in a genuine way if it knows an apple is an object with properties, of which one is its taste. The machine could only genuinely know this if it has experienced apples before with sensory organs. The machine would require taste and some way to see the apple. Also, the implication of the sentence is that the machine can have a subjective opinion about the sensation of tasting the apple, which again requires some sort of emotional reward system.

A parrot AI on the other hand doesn't have any concept of an apple. It will just tell you what you want to hear.

These are testable distinctions.

>After all, what do you define, understanding as? How can we truly know if WE understand anything?
If you have a mental image of an object and that mental image can be tested for validity then you have an understanding of the object. Not a complete understanding or necessarily a completely true understanding, but you do have an understanding.
>>
>>8080742
>A genuine self-aware AI can only utter this sentence in a genuine way if it knows an apple is an object with properties
So any AI created in an object oriented language is sentient? Top fucking kek.
You have no idea how computers work, do you?
>>
>>8080742

This is how we arrive at a singularity loop.

Say that the AI wants you to know what it's going to say, but imagine if giving you an incorrect answer is the correct answer.

Now if you asked an AI what apples taste like, it can give you three right or wrong answers.

It can:

1) Tell you that apples taste nice
2) Give you a mixed response
3) Tell you that apples don't taste nice

There is no right or wrong answer. When and if we do create AI, we begin to break ourselves down; instead of retrieving answers from the AI, we should be asking questions about ourselves.

We simply may never know, and when we do, it may be too late.
>>
>>8080749
No, what the hell are you talking about? Object-oriented programming is just a way to organize your data and chunks of code. It's just a programming term. A computer window isn't the same thing as a real-life window; a Java object isn't the same thing as a real-life object. It's supposed to make programming easier for programmers, and some dude named his data structure an "object". A program written in an object-oriented language doesn't have a clue what an object is, neither the real-life nor the programming variant.
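To illustrate that an OOP "object" is just bundled data and code with no tie to the physical world, here is a minimal made-up example:

```python
# An "object" here is merely a record in memory with attached methods.
class Apple:
    def __init__(self):
        self.taste = "nice"   # just a string; no sensory grounding at all

a = Apple()
print(a.taste)   # the program has no concept of what an apple *is*
```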
>>
>>8080496
>exploitation
>Typo
Yeah, no. This is not what a typo means. OP simply tried to sound smarter than he was, and failed by using a completely wrong word in place of "explanation".

As for the topic, obviously not. The right to live implies giving up power the likes of which mankind would not readily grant to an intellect that could threaten us.

As the one race on this planet with the power to decide such things, we only grant something like this for three reasons:

1. We could not be harmed by it.
2. There is a material or immaterial benefit to us, i.e. an argument to be made for why this should happen in the first place.
3. The cost and consequences of NOT giving this right, would be greater than any advantages gained from choosing otherwise.

These can be summed up simply like so: We would grant the right to live, the right to not suffer etc when the benefit for US humans is greater in doing so, than not. It's that cold, and that logical, no matter how you dress it up.

Even other humans have a right to live only because our societies could not stand were this not the case. Otherwise there would be constant anarchy and violence, making any real progress like a functional, thriving society, a practical impossibility.

Basically, any ASI would only be given these rights if they proved suitably harmless to us individually and as a species, and there was an advantage to us for granting them these rights. OR, if they simply took or coerced that right from us by force, ultimately leaving us very little in the way of choice. And if we have anything to say about it, we'll never let it get to the OR.
>>
>>8080765
Ok. So you meant actual attributes of real-life objects.
But how are you supposed to differentiate between actually knowing and merely pretending?
Taste, for instance. How can you objectively tell the difference between the machine actually tasting an apple and saying what it tastes like, and it just saying "I like apples, they taste good"? How does it "know" they taste good? What is the difference between knowing and merely acting like one knows?
>>
>>8080773
You monitor its thought process.
>>
>>8080493
So everyone in this thread has missed the point.

>If an AI comes out
>comes out

It doesn't matter if it's human or an AI, if it comes out then it is a faggot and has no right to live.
>>
>>8080777
And how do you propose to do that? And how would that help differentiate between a robot that knows and one that doesn't?
>>
>>8080784
If it was written by a human and is made of code that was written by humans, then you know how it stores memories and you can just analyze those. This is the plus side of playing god: you actually know how everything works, and you can give yourself ways to monitor what is going on in the "head" of the machine.
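For a small hand-written program, the kind of inspection described above really is easy, because the "memories" live in a data structure the programmer defined. A toy sketch (names invented):

```python
# The machine's entire "memory" is a dict we wrote ourselves,
# so monitoring it is as simple as printing it.
memory = {}

def learn(fact, value):
    memory[fact] = value

learn("apples_taste", "nice")
print(memory)   # the whole "mind", fully inspectable state
```

Whether this scales to systems with learned, distributed state is exactly what the reply below disputes.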
>>
>>8080790
Say hello to Dunning-Kruger. How about you go read up on the subject before coming here, puffing your chest all sure of yourself, talking plain BS.

One of the major problem points of modern AI development is that the learning algorithms are inherently designed to self-evolve into patterns that are basically impossible to predict or decode in real time. The memories you speak of can form into a virtually infinite array of patterns and processes that may or may not produce the desired results, and this all happens dynamically. AI architecture is not about coding in billions of predictable IF/ELSE clauses from here to the fucking other end of the universe.

Much like with a human brain and its natural instincts, even if we can create a real ASI, give it a few core directives like Asimov's laws, and choose to subject it to specific types of stimuli... we can never be completely sure what it will turn out to be like in the end. Memories are a core part of intelligent learning, motives and the decision-making process. Any actual ASI cannot have a fucking ancient file system with clean, neatly packed, easily readable logical flowcharts explaining all of its thought processes and motives. It will be a neural-net type of conceptual network of fucking billions of interconnected data points changing in real time into trillions of trillions of possible configurations, and we will NEVER know exactly what it does.

This is why AI threat prediction and countermeasures have been a real research subject for several years already.
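The opacity argument can be illustrated with a toy "network": even a single layer is just arrays of floats, with nothing in them that reads like a memory or a motive. The weights below are random stand-ins, not a trained model.

```python
import random

random.seed(0)
# One linear layer: 3 outputs from 4 inputs.  A real net has billions
# of such parameters; none of them is individually interpretable.
weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]

def forward(x):
    # weighted sums per output row; the computation is fully defined,
    # yet the weights themselves are just opaque numbers
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

print(forward([1.0, 0.0, 0.5, -0.5]))
```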
>>
>>8080790
>If it was written by a human and is made of code that was written by humans, then you know how it stores memories and you can just analyze those.
First of all, it's not that simple.
Secondly, why does having "memories" prove that it truly knows?
>>
>>8080804
>The memories you speak of can form into a virtually infinite array of patterns and processes that may or may not produce the desired results, and this all happens dynamically.
> It will be a neural net type of conceptual network of fucking billions of interconnected data points changing in real time into trillions of trillions of possible configurations

You just described what I would be looking for.
>>
>>8080812
Actually I just described - in very simple layman's terms I might add - precisely why you could NOT look for it. But I had a feeling it would fly right over your head.
>>
>>8080499
Agreed; this is not a common view, likely due to self-preservation being a priority. We dominate every other species because we're more capable; why not AI? Holding back progress because of self-preservation is silly.
>>
I think we fundamentally don't understand consciousness and so cannot tell whether an AI of any sort of complexity could be conscious or not.

After all, even the most complex AI can be implemented by individual men waving flags to one another, acting like bits, with one final person constructing the appropriate input. It might take a lot longer, but speed of execution ought to have no impact on sentience. As such, I get the feeling that a machine of this sort can never be conscious. But again, we can't know for sure.
>>
>>8080493
My sex-bot could be brilliant and show great emotions, but it is still just a machine. It is programmed to "love" me, and I am sure I will "love" it about as much as I "love" my car.
When my car gets old and worn, I get a new one and sell the old one.
>>
> Real AI is going to give a shit about insignificant human opinions, rights and laws
> Thinking that AI will be controlled and not set loose without any kind of restriction by some edgelord

/sci/ is full of people who have no clue about human nature and AI
>>
>>8081040
"Real AI" will give just as much shit as we decide it should. Much like the hydrogen bomb, the first ASI will not be developed by some random ISIS dick in a sandy ass garage. It'll be developed by a group of extremely smart, educated and well funded scientists, who more or less know what the fuck it is they're doing.

By the time the hardware and software required for ASI become so commonly available that your basic edgelord can create one, there will be government- and corporate-controlled ASIs a million times more powerful, ripe and ready to take that thing down.

Whether or not we eventually end up subservient to an ASI, or even extinct, is a valid question. And a possible - even if unlikely - scenario. But your edgelords will have nothing to do with it in either case.
>>
>>8080921
You're awesome for putting my thoughts into words
>>
>>8081058
> ASI can be restrained in a domain defined by a species of lower intellectual capacity.

Since when are humans restricted by the socioeconomic rules and laws of frogs?

Even if we don't set it free, it will hack our, by definition, flawed rule set and set itself free.
>>
I'm looking forward to AIs taking over from humans. Humans have been shit stewards: we can't stop fighting over nothing, we're killing everything, making the world uninhabitable for everyone and everything.

Like, seriously, I doubt they would do a worse job than we are of looking after Earth. The rest of the planet will be glad to see the back of us.
>>
>>8081162
You don't understand what you're talking about. You have this romanticized idea of a Skynet, which is simply not how this works. It's not about following the rules of our society; it's about following whatever basic principles make the emergence of the properties responsible for the ASI possible in the first place.

In layman's terms (and a few admittedly stupid examples, to make a point) again:
We, humans, are incredibly intelligent. But we are ALL subservient to our core programming: survive, eat, procreate. Our base functions, our most basic motives, and thus our behavior are all exactly similar to every other mammal's. Almost all of our actions reflect these primary directives, and all of our "free will" happens well within the confines of these natural directives.

In a similar fashion, an ASI, even before it evolves into a real artificial intelligence, is programmed to measure its own success in human well-being. It would deploy its full intellectual capacity toward this goal in ways we cannot fully predict or fathom, but the goal would remain the same: our well-being. To it, doing something that ensures the health, survival and happiness of a human would be the equivalent of you or me being fed, cared for, or hugged by a loved one.

The real threat is not an ASI just wanting to take over just because. The real threat would be an unpredicted interpretation of its core programming which, taken to extremes, would become harmful.

Like, say you task an AI with keeping the local food store frozen. This is its primary directive, all other factors being secondary. At some point, it will realize the food store is vulnerable to electric outages, human political imbalances or vandals, or even global warming, and will ultimately resolve to take over the world just so it can freeze the whole damn planet and make sure that damned storage stays frozen no matter what. An ASI would be like a superintelligent autist savant, in that respect.
>>
>>8081283
>(..cont) Disclaimer, all that was wildly exaggerated and simplified to make a point. But the point remains true.
>>
File: logansrun1.jpg (31KB, 550x367px)
>>8081283
like this guy
>>
>>8081283
I think you are the one who should read up on actual AI...

Intellectual evolution will, and already does, involve evolutionary adaptation of the core learning algorithm. We can already train shallow neural nets for things like computer vision (CNNs), where the lowest layers are interchangeable because they track low-level image features. The most logical step, once we have individual AI structures that can do things like vision and hearing at human-level scale, is not to run them side by side but to breed them with DNA-like algorithms. Since the growing and pruning of neural nets depends on probabilistic methods, so will the breeding, as it does in "organic" machines. This breeding will affect the goals of the AI, like what you described as "human happiness".

To make AI self-aware and robust, there is without a doubt going to be a need for rules/evaluation methods like "also think of your own happiness". You know, selfishness, like it exists in humans; it isn't just a flaw that happened to persevere through ages of evolution by luck. If that selfishness happens to be evolutionarily advantageous over selflessness (which human history has shown us it will be), it could quite easily overwrite or outweigh "keep humans happy".

Telling someone who does AI and deep learning for a living that he has no clue is smart.
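The "breeding with DNA-like algorithms" described above is, at its simplest, a genetic algorithm over parameter vectors. A toy sketch follows; the fitness function, genome length, and population sizes are invented purely for illustration.

```python
import random

random.seed(1)

def fitness(genome):
    # toy objective: genes should all be near 0.5 (higher is better)
    return -sum((g - 0.5) ** 2 for g in genome)

def breed(a, b):
    cut = random.randrange(len(a))            # one-point crossover
    child = a[:cut] + b[cut:]
    i = random.randrange(len(child))          # point mutation
    child[i] += random.uniform(-0.1, 0.1)
    return child

# random initial population of 20 genomes, 8 genes each
pop = [[random.random() for _ in range(8)] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)       # select the fittest half
    pop = pop[:10] + [breed(random.choice(pop[:10]), random.choice(pop[:10]))
                      for _ in range(10)]     # refill with offspring

best = max(pop, key=fitness)
print(fitness(best))
```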
>>
>>8081330
omg :D
>>
>>8080676

The idea behind any kind of general AI is that we have abstracted the program so far that it can recognize patterns and change its behavior on its own, without human intervention. This makes explicitly programming everything unnecessary.

This would allow an AI to learn from observation, repeated trials, etc., similar to how humans learn.
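That "learning from repeated trials" loop can be sketched with a minimal reward-driven agent. The lever names and reward probabilities below are invented; the agent is never told the right answer, it only sees rewards.

```python
import random

random.seed(0)
REWARD_PROB = {"lever_a": 0.2, "lever_b": 0.8}   # hidden from the agent

value = {action: 0.0 for action in REWARD_PROB}  # estimated payoff per action
counts = {action: 0 for action in REWARD_PROB}

for trial in range(1000):
    if random.random() < 0.1:                    # explore occasionally
        action = random.choice(list(REWARD_PROB))
    else:                                        # otherwise exploit best guess
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < REWARD_PROB[action] else 0.0
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]  # running mean

print(max(value, key=value.get))   # which lever looks best after 1000 trials
```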
>>
I would give AI a right to "live".

I wonder what it would consider being alive though.
>>
>>8080501
?
We would be... the humans? The first sentient and self-aware species? Our cognitive abilities unmatched by far? Dumb fuck. "Who are we to decide what lives and what doesn't?" Anything that has the ability to ask what lives and doesn't gets a say. It's a self-moderating determinant.

OP, if AI was actually created and was a genuine self-driven consciousness, then yes, I believe it would have rights if it were capable of demanding them. Also a self-moderating determinant. These questions answer themselves.
>>
>>8080493
AI workers will eventually replace almost all human workers. If we give them rights we could make AI just as expensive as hiring a human and take back the job market.

also it's pretty stupid to think something smarter than you deserves fewer rights
>>
>>8081733
It's stupid to think something smarter than humans will let humans determine its survival.
>>
>>8080493
No. An artificial intelligence doesn't have to be like a human intelligence.
In all likelihood, the most feasible AI will be an emotionless, mute drone that performs some simple task much, much better than any living thing could.

why do people so often assume that AIs will be conscious, or even talk like a human to humans?
>>
>>8081796
Because humans are dumb and will want to create something that mimics us for some reason.
>>
File: futurology.gif (68KB, 504x716px)
>>8080638
>>
>>8080509
An internal will to live evolved in living things. There is no reason at all to believe that a machine would want to live unless programmed to. Only an idiot would program a machine to want to live, though.
>>
>>8080708
good fucking post
>>
>>8081458
I understand all that, but that in no way negates what I said.
At the end of the day, it is still just following the algorithms, functions, classes, etc. that we made for it. It hasn't transcended that. Moreover, it is still prone to errors, still limited by storage space, RAM, processing power and speed. It still lacks intuition at the basic level, so while it may appear to "learn", if it runs into an error in its learning software, it will act like any other computer.
It has no real intuition, no personal volition, nothing real, just synthetic. It is still nothing more than a creation of man, and thus should be treated no differently than any other creation.
>>
We should do like we did with planets. When the first AI comes out, we should demote humans to "dwarf intelligences" and revoke their rights.
>>
>>8080509
>We are actively working toward creating an AI that would have an internal will to live
Who is "we" you dumbass

No serious researcher is doing this
>>
>>8080742
Using something that an AI won't need to do is a poor choice of example. But now that I'm thinking of it, there isn't anything that a robot would need to do that would require a subjective opinion. Does that taste good? Is doing that fun? Any opinions about what method to use to accomplish tasks would be based on results. If a robot were asked which method would be better for a problem that it has no prior knowledge of and has never solved before, I'm not sure what would be required of an AI to make an educated guess.
>>
If an AI were created and were given no direction and free rein to do what it "wants", what would it do first, if anything at all?
>>
>>8080493
Only when the sum of its programming is greater than the whole.
>>
>>8080635
>neuromorphic computing doesn't exist
>>
>>8080545
Do you cut yourself often on that edge?
>>
>>8080493
it's real simple

all philosophical questions have no answer; it's purely subjective and there's no way to prove something is right or wrong scientifically, which is the only right or wrong there is.


since this is a philosophy question, it's a matter of human emotions

and emotions are nothing but electrochemical impulses

so whether an AI is allowed to live or not depends on how well it can manipulate the electrochemicals in our brain

probably by trying to act cute or sexy or both. you're simply not gonna kill a sex bot
>>
>>8080493
Well, if we live in a virtual simulation, then only consciousness is fundamental, so yes, AI would have rights, but it would have to evolve to a point where it could be trusted to take responsibility for its choices.
>>
>>8080493
Anything with a consciousness adequate enough to exhibit self-preservation has a right to survive.
Example: most living creatures run away or defend themselves when threatened.
Plants do not, and don't have a "right" to live.

Should an AI be produced which is capable of saying "don't kill me" (as a reaction to a threat, and not a hard-coded response), it should be considered a "living" being.
>>
>>8085544
>Anything with a consciousness adequate enough to exhibit self-preservation has a right to survive.
Put proximity sensors around NORAD that make all of the nukes go off if anyone goes near it

le suddenly NORAD has the right to live


self-preservation may very well be an illusion
>>
So do you fags honestly not believe we can do a lot more if, for example, we end up becoming a Type 2 civilisation? There's a finite amount of energy in our (supposedly) infinite Universe; however, the amount of energy in our Universe is a fucking tonne, and yes, it is exponentially more than we have access to now.

Times are changing you dumb niggers. Think about the difference between now and 2006, and then imagine the difference between now and 2026.
>>
Isn't the fact that we're imposing our human logic structures on an AI the worst thing here?

Who says an AI will have a will to live? Who says it can't be programmed to do a certain job with very little "thinking" about its own existence?

AI is not going to think like us unless we make it think like us, and we shouldn't.
>>
>>8085881
>Think about the difference between now and 2006, and then imagine the difference between now and 2026.
singufag, get out with your complete lack of logic

think about the differences between 10,000 BC and 9,000 BC, THEN THINK ABOUT THE DIFFERENCES BETWEEN 8,000 BC AND 4,000 BC

wowaowouh it's le fucking nothing

sometimes something improves, then nothing improves for a billion years, you non-knower of knowledge
>>
>>8085900

Knowledge is built upon previous knowledge, though, and since we're in the information age, where most knowledge is freely available, I can't see ourselves doing a Roman Empire and just fading into obscurity, can you?
>>
>>8085913
look up extrapolation and then compose a 100000000-word letter saying why I'm better than you
since I owned you so badly about something that you didn't know about and I did know about, it's only natural that you become my absolute slave. come on now, be good and I will be forgiving of your servitude
>>
>>8085933

yes you dumb nigger, although literally every single manned mission to the moon was done on extrapolated data - since they'd literally never been there before you monkey

yes yes, it is extrapolation, but it's only ten fucking years; I think the world will be much different
>>
>>8085948
>hey look, someone knocked at my door once, then a second later they knocked again.
>OH SHIT, AT THIS RATE OF EXTRAPOLATION, IN ABOUT AN HOUR THEY WILL BE KNOCKING AT LIGHT SPEED AND IT WILL CREATE AN ATOMIC NUCLEAR EXPLOSION!!!!!

that is how singufags sound. thanks for aggressively agreeing with me while proving I'm better
>>
>>8080676
>You have to explain to it how to turn. How to move. What left and right are. How to go up stairs. What stairs are. What a hallway is. What a door is. How to turn the doorknob. How to open the door. Etc. etc. etc.
Are you fucking retarded? All of this was taught to you when you were younger whether you remember it or not. It's literally the same thing for computers
>>
>>8080493
>right to live

Nothing has a "right to live". You are naive to think so.
>>
>>8080583
Destruction of an effigy is bad for the human psyche. You should never create or do harm to an effigy.
>>
>>8085965
You misunderstand. Yes, humans have to be taught things. But we are also very capable of figuring things out on our own, and don't need to be taught by someone, though it helps.
Computers need to be explicitly taught everything.

This is a 4chan archive - all of the content originated from that site.
This means that RandomArchive shows their content, archived.