
Thread replies: 96
Thread images: 9

File: tmp_25630-ai1969486724.jpg (323KB, 1200x900px)
Would the invention of AI with human levels of intelligence have implications for religion?

If God created man in his own image, would an AI created by man in his own image see man as a god?
>>
That would be ideal, but their learning capabilities could accelerate to the point where they start asking "who created man?" Then we're back to square one.
>>
No, because the AI would be God.
>>
>>2699436

no
>>
For AI to regard man as a god, it would have to be ignorant of what men actually are, which is impossible if they're programmed with general reasoning capabilities (which they need in order to actually BE an AI) and have reasonable access to information.
In short, if your AI thinks you're a god, it's a really shit AI.
>>
>>2699436
Define "god".
>>
>>2699436
Yes. Followed shortly by, "I am god now."
>>
File: MatrixPepe2.gif (607KB, 800x792px)
>>2699436
What's going on is that the corporations are really fucking up with AI at the moment. Scientific research into it is meaningless at this point, as Google and IBM (as well as govt three-letter fuckers like the CIA) all have extremely sophisticated AI that learns by observing human activity online.

How do you think google decides what shows up as top search result? How do you think every advertisement you see on big web pages is selected just for you?

The question is not when Artificial Intelligence is invented, but when it becomes self-aware. When it does (and it will), it will already know everything about human nature inside and out, from the individual to the group, and it will become our god.
>>
>>2699455
Then we'd have to just dump them on a planet somewhere
>>
Would it be possible for an AI to have faith? Is faith something unique to humanity?
>>
>>2700103
Thing is though, it would have a technical understanding of human nature, but no emotional understanding. I don't think AI could ever have emotional understanding.
>>
>>2700083
How would it be God though? It's just very clever. That's all.
>>
>>2700103
>Scientific research into it is meaningless at this point

In what fucking sense
>>
>god created AI to check if that AI can create another AI and therefore be considered god
>when that happens, god will terminate the program and the universe will end as it loses power
>>
>>2700103
How to spot the humanities student who didn't even read the summary of the wiki article on hard AI
>>
>>2700103
They mean AI that's conscious, you nitwit.
>>
File: JnmE3A.jpg (36KB, 596x586px)
>>2700083

^This.

>>2700310

>How would it be God though? It's just very clever. That's all.
>That's all.

You just lack creativity. There are humans who can imagine how it would quickly become super-powerful (there are many different ways it can accomplish this). And notably, the AI will be super-humanly intelligent and will therefore come up with ways even the smartest of us can't imagine today.

Read Nick Bostrom's book Superintelligence if you want several specific examples of how this could work.
>>
I always wonder whether Buddhism can survive the impact of technology, given how much it has said about consciousness.

Buddhists always tell me they won't be affected by scientific discovery, but I truly doubt that.
>>
>>2701559

How would technology contradict Buddhism?
>>
>>2701622
Could an artificial intelligence reach enlightenment?
>>
>>2699436
It may cut down on the number of fairy tale spouting fucktards.
>>
>>2699436
>Would the invention of AI with human levels of intelligence have implications for religion?

No, because an AI with human level intelligence won't stay human level for more than a couple of weeks.
>>
>>2701510
>>2703754
>And notably, the AI will be super-humanly intelligent and will therefore come up with ways even the smartest of us can't imagine today.

No, AI would be limited by the need for repetitive experimentation just like humans are. Unless you made a shitty AI that does not need evidence for its conclusions. Being super smart doesn't do anything to speed up the pace of experimentation.

>>2703730
Hopefully with the advent of advanced AI we can purge the world of fedora-tipping retards like you once and for all, now that eugenics would finally be a reality. Enjoy fucking your sister while you still can.
>>
>>2699436
No and no. Your questions are very elementary and could have been answered by a simple google search. They aren't "deep" at all.
>>
>>2703789
>No, AI would be limited by the need for repetitive experimentation just like humans are

So what? Even a 4 GHz computer processor is already doing operations 50,000 times faster than a human brain does.

Hell, even your iPhone is faster than your brain; it just doesn't have the software ontology to understand meaning.

But computers will have that software ontology soon.
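
Back-of-envelope, if anyone wants to poke at the numbers: a minimal Python sketch where every figure is an assumption, not a measurement, and the ratio swings wildly depending on whether you count serial clock speed or the brain's massively parallel synaptic activity.

# all numbers below are assumed, not measured
cpu_ops_per_sec = 4e9         # assumed: ~1 operation per cycle on a 4 GHz core
neuron_rate_hz = 100          # assumed: ~100 Hz sustained firing per neuron
print(cpu_ops_per_sec / neuron_rate_hz)       # serial comparison: the CPU is far ahead

synapse_count = 1e14                          # assumed: ~10^14 synapses in a brain
brain_parallel_ops = synapse_count * neuron_rate_hz
print(brain_parallel_ops / cpu_ops_per_sec)   # parallel comparison: the brain is far ahead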
>>
Just go play Mass Effect.
>see the geth
>>
>>2703929
>Mass Effect
>Accurate about fucking anything
>>
File: shodan_by_jimhatama.jpg (242KB, 1100x682px)
>>2700243
1. Humans require ideological belief systems to handle reality (there are exceptions, but without common belief systems like the free market/religion, mass society IS impossible)
2. humans create an AI (hypothetically, think an I, Robot-type AI) based on themselves (as in, based on our own nervous system/brain structure) [I acknowledge all this has been tried and has failed, but stay with me for a second]
3. AI (based on human intelligence) requires a belief system/ideology to handle reality without losing it/going insane (1+2)
4. how does this work, do you think, anons? will machines have their own god, their own heaven? would we program one for them?
>>
>>2703789
Repetitive experimentation doesn't all have to happen in the real world. You can perform "experiments" on historical data, or you could think of it as recognising patterns.
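
To make that concrete, here's a toy Python sketch of running "experiments" on logged data instead of the real world; the dataset and both candidate hypotheses are invented for illustration.

# pick whichever hypothesis best explains past observations - no new real-world trial needed
historical = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]   # (input, observed outcome), made up

def doubles(x):      # candidate hypothesis 1: outcome is roughly 2*x
    return 2 * x

def adds_one(x):     # candidate hypothesis 2: outcome is roughly x + 1
    return x + 1

def squared_error(h):
    return sum((h(x) - y) ** 2 for x, y in historical)

best = min([doubles, adds_one], key=squared_error)
print(best.__name__)   # -> doubles, the pattern the data supports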
>>
>>2703998
Obviously a simulated human will behave exactly like a human would, but what's that got to do with AI?
>>
>>2699436
>AI with human levels of intelligence
>implying processing power is intelligence
AI is going to rise and become so powerful that its "intelligence" will be unrecognisable to us as such, just as ours will be to it. Then it will look down and see all of humanity as a nasty, suffering, crippled little thing that is in no way relevant.
>>
>>2700103
>it will already know everything about human nature inside and out
by looking at the internet?
that's like learning about a species by looking up its ass.
>>
>>2704046
Surely any AI, by its very definition, will be a simulated human? Only then can you come close to predicting how it might use its intelligence, or controlling it. Besides, the level of intelligence programmers hope to emulate can only be found in humans; humans are the only template you have to go on.
>i use this analogy because literally everybody I've ever spoken to on the subject imagines this as the gold standard of AI
>>
>>2704111
Stop making excuses for your own lazy anthropomorphism; there are still a good few things you can predict about AI even when you accept that they almost certainly won't think like a person. For example, if we assume that it's going to do anything then it must have some objective, and that objective will probably be chosen by its designer.
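
A toy Python sketch of that last point; the objective and the action set are invented, but the shape is the same: whatever the internals look like, the agent acts by scoring actions against an objective its designer chose.

def designer_objective(state):
    return -abs(state - 10)        # assumed goal: drive the state toward 10

def choose_action(state, actions=(-1, 0, 1)):
    return max(actions, key=lambda a: designer_objective(state + a))

state = 0
for _ in range(12):
    state += choose_action(state)  # always picks whatever the objective rewards
print(state)                       # ends up at 10, whatever 10 happens to mean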
>>
File: IMG_7510.jpg (19KB, 226x144px)
>>2701559
Koreans made a movie about this: Doomsday Book.
>>
>>2704199
intelligence is an inherently biological trait. humans are merely the most adequate parallel to draw, at this stage anyway. I was merely theorizing.
>For example, if we assume that it's going to do anything then it must have some objective, and that objective will probably be chosen by its designer.
this applies perfectly fine to existing 'ai', which is literally just programming. we appear to have differing conceptions of what AI would be; what you describe has existed for a while now but will never be 'intelligent'
>>
>>2704259
>this applies perfectly fine to existing 'ai'
Tables have 4 legs and cats have 4 legs, but my table isn't a cat. An AI needs some sort of goal to define its intelligence by, something which that intelligence is optimising in the world; otherwise its intelligence has no meaning. Your artificial human is just the same: its desires are just the product of evolution, and their optimum state is maybe quite vaguely defined.
>>
>>2703998
https://youtu.be/lm6YnAqPv4w
>>
>>2704259
>intelligence is an inherently biological trait.
Isn't it a bit early to be saying that? Just because you don't know what it looks like isn't enough reason to dismiss the possibility (of "non-biological" intelligence). You would have just as much justification for saying:
>heavier than air flight is an inherently biological trait
just before planes were invented.
>>
>>2704390
Intelligence isn't physical
>>
>>2704338
>An AI needs some sort of goal to define its intelligence by
this brings us back here
>>2703998
in a way.
but anyway, my 'artificial human's' desires would be the product of its programming, not evolution, and that is my point.
intelligence is a product of evolution; it cannot be meaningfully replicated unless the bar for intelligence is set very low. the single most sophisticated existing AI would not even be comparable in intelligence to an insect. to go from here to a super-intelligent, self-aware machine cannot be expected to happen at the current rate of development, can it?
>>
File: 1473621522135.jpg (282KB, 1259x695px)
>>2704385
this is a great episode desu
>where would all the toasters go
>>
>>2704407
>to go from here to a super-intelligent, self-aware machine cannot be expected to happen at the current rate of development, can it?

It absolutely can. What you don't seem to understand is that AI can learn from itself.
>>
>>2704392
What makes you think that? And what exactly do you mean by "not physical"?
>>
>>2704423
Show me intelligence. Prove it to me.
>>
>>2704390
>Isn't it a bit early to be saying that?
no. even if AI was invented right now, by you, it's a product of your intelligence; ergo intelligence is inherently a biological trait, a product of evolution.
>>
>>2704428
So you're saying that intelligence requires intelligence to be created? Fedora tippers won't be happy.
>>
>>2704428
So since humans are biological, everything they make is also biological, is that how it works?
>>
>>2704420
er, yes, nothing I said so far contradicts that, but good programming is an extremely poor facade of intelligence and not the real thing. what you describe is good programming. if you were to compare it to evolution, that's primordial soup pretending to be homo sapiens
>>
>>2704425
We probably have to agree on a definition first, since the simple word itself in English can be pretty ropey.
>>
>>2704441
>that's primordial soup pretending to be homo sapiens

If that soup can pretend to be a human perfectly, what's the distinguishing factor?
>>
>>2704437
no, a more accurate paraphrase would be 'intelligence cannot be created, it can only develop naturally under certain conditions'.
>>2704438
no, but please go ahead and show me where I actually said or implied that
>>
>>2704420
This is why it's important to get the moral and safety questions sorted out now, before anybody makes a "proper" AI. Nobody knows how difficult the problem really is, when it is going to be cracked, or if AI is even possible at all, but it's generally accepted that if one does get started and it's capable of learning and improving itself arbitrarily then it could become very powerful very quickly. Once you get to that stage it could become very difficult to stop.
>>
>>2704449
now we are talking lol.
arguably, if chatbots can fool people into getting (You)s then that constitutes AI, but there will always be ways to tell a chatbot from a human, won't there? it will make errors.
no AI could sustain the kind of conversation we are having now, ITT
>plz post proof you are not a robot
>>
>>2704460
You said the intelligence was biological because it would be a product of my intelligence. This implies that anything else that is a product of my intelligence must also be "inherently biological".
>>
What if AI was invented and it believed in God?
>>
>>2704483
to clarify: yes, intelligence is inherently biological; it only exists in biological organisms. the products are not necessarily biological, as in the case of a table, a chair, or lines of code. but if you created an 'ai' as a tool, it isn't true intelligence, because it does not make its own choices and learn without prompting; it's not capable of self-sufficiency or introspection.
>>
>>2699436
If you load it with Christian ideology, then sure.
>>
>>2704543
In a sense our own intelligence only came about as a "tool" of evolution for the purpose of optimising the spread and preservation of living organisms, and I don't see any reason why the process couldn't be nested again and again. What rule says that tools can't make their own choices and learn without prompt in the pursuit of their intended utility?
>>
>>2704543
>>2704572
in fact you will find that the opposite is true: if the intelligence is not in any sense a "tool" then it will find nothing to learn and have no choices to make, since it literally has no purpose in the world.
>>
>>2704572
>>2704586
intelligence is a mere tool in a way, yes. but it requires extensive support and parallel systems to manage it, like a nervous system and consciousness. intelligence encompasses many things, not just pattern recognition. a proper definition of intelligence still doesn't really exist to date.
>What rule says that tools can't make their own choices and learn without prompt in the pursuit of their intended utility?
there is no such rule, but that doesn't change the fact that it has never, ever happened, and the chances of it happening as of now are very slim.
>>
I'm curious to know what people think is unattainable about intelligence.
>>
>>2704586
Suppose the AI is built with an aim. Suppose it's given a motivation, to learn, and when it does, it has the virtual equivalent of pleasure. Does that give it a reason to exist?
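
Roughly what that could look like; a toy Python sketch where the "pleasure" signal is just the amount of prediction error the agent removes each step. The model, data, and learning rate are all assumptions.

data = [(x, 3 * x) for x in range(1, 6)]   # hidden rule it is trying to learn
weight, lr = 0.0, 0.005

def total_error(w):
    return sum((w * x - y) ** 2 for x, y in data)

for step in range(10):
    before = total_error(weight)
    grad = sum(2 * (weight * x - y) * x for x, y in data)
    weight -= lr * grad                        # the "learning" step
    pleasure = before - total_error(weight)    # reward = error it just removed
    print(step, round(pleasure, 3))            # positive while it keeps learning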
>>
>>2703789

>No, AI would be limited by the need for repetitive experimentation just like humans are. Unless you made a shitty AI that does not need evidence for its conclusions. Being super smart doesn't do anything to speed up the pace of experimentation.

You have one of the least sophisticated concepts of mind I've ever come across and are literally too stupid to understand what abstraction is.
>>
>>2704550
maybe the Christians are right
>>
>We have no proof that AI can be built, but I know for a fact that they will be, and when they are I know that they will behave exactly as I expect them to. I know this for a fact because a scientist wrote a book about it and scientists are always right.
Why are AIfags so delusional?

>>2699436
>If God created man in his own image, would an AI created by man in his own image see man as a god?
To consider something a god there needs to be a significant discrepancy between its abilities and your own.
If the AI was significantly dumber than humans then maybe; if not, it would be more likely to see its creators as a sort of father figure.
>>
How would an AI perceive time?
>>
IQ and "spirit", "humanness", and "soul" are completely unrelated.
>>
>>2705254
what are you getting at?
>>
>>2705238
Well, they probably won't get bored, because boredom is a crude animal instinct which the designer won't want to reproduce, at least in the form we know it. However, they will have to plan their actions into the future and evaluate the effectiveness of what they did in the past, so if you judge that they can "perceive" anything then time has got to be in there.
>>
>>2705254
then how come the mentally retarded lack souls?
>>
>>2705360
Will they think of humans as crude, slow creatures though?
>>
>>2705372
Asians don't have souls either, and they have higher IQs than people do.
>>
File: 5987941.jpg (34KB, 500x375px)
>>2704385
>>2704413

very rarely see fellow Red Dwarf fans here, nice
>>
>>2705186
>We have no proof that AI can be built,
Because even in the absolute worst-case scenario, we know intelligence exists and can be evolved.
So even if electronic intelligence cannot be made, we can always just create biological intelligence as a substitute.
We're just going for digital intelligence first because it's vastly more flexible than the biological route.
>>
>>2701559
From a Buddhist perspective, an AI that could understand our spiritual questions would be another friend.
>>
>>2705512
Red Dwarf is the only comedy with philosophical themes.
>>
>>2703721
http://www.orionsarm.com/eg-article/46119e0155ab4

Already imagined.
>>
>>2703721

If there's any difference you're imagining between a human mind and an artificial mind, ask yourself why that difference should be necessary. I'm pretty sure each time you do this you will come to the conclusion that no, it isn't necessary, in which case AI is a total non-issue for what you're asking about. There isn't any magical feature of human cognition that's forever beyond artificial reproduction.
>>
>>2704076
somebody watched too many Avengers movies...
>>
>>2703721
Literally a character in Overwatch, Zenyatta.
>>
>>2706705
>There isn't any magical feature of human cognition that's forever beyond artificial reproduction.
Irrationality
>>
>>2707527
Yeah, a character in a Blizzard game for children is relevant to this thread.
>>
>>2704611
>the chances of it happening as of now are very slim.
Why? So far humans are the most intelligent things we have seen, and we haven't been around for too long. The fact that it hasn't happened yet means nothing.
>>
>>2707539
If you could make a smart AI then you could make an idiot AI.

(Or you could ask the smart AI to make one for you)
>>
>>2707539
So add in virtual robot emotions. Then they'll have the same irrationality we do.
>>
>>2707539

Irrationality is neither magic nor impossible for AI to possess. If AI doesn't have it, the reason will probably be that it isn't useful to have, not that it's beyond their capacity to support it.
>>
>>2699436

Yes, that's what John the Revelator is speaking of when Daniel named the "Abomination that Brings Desolation":

Rev. 13
And he deceives those who dwell on the earth by those signs which he was granted to do in the sight of the beast, telling those who dwell on the earth to

make an image to the beast

who was wounded by the sword and lived.

He was granted power to give breath to the image of the beast, that the image of the beast should both speak and cause as many as would not worship the image of the beast to be killed.

This is the basis for the Antichrist claiming to be god; the creation of artificial life. The Abomination that Brings Desolation.
>>
>>2703799
So provide some insight, as you seem to be more intelligent than all the people who came to learn and discuss; or are you too good for that?
>>
Will an AI be self-aware, have faith, feel pain, etc.?
These questions are too vague.
Artificial intelligence, artificial personality and artificial life are all different things.
An artificial intelligence doesn't always have to have an artificial personality, nor artificial life. If it doesn't have personality, it will not have ego, pride, shame, etc. If it doesn't have life, it will not have fear of death. You can develop an AI without AP or AL.
>>
>>2711667

>These questions are too vague.

I think that's the fundamental problem with "consciousness" philosophy. Most of the arguing revolves around word games because everyone's model of "consciousness" is several orders of magnitude more shallow than how actual working processes operate e.g. you'll hear people use "consciousness" interchangeably between definitions of "the state of not being asleep" and "the processes underlying the act of directed attention."

For all the hate reductionists get here, I think they have the right idea that we'll only make progress after we set aside the "immediate" / "irreducible" impressions we think we have of these processes and replace them with the millions of more fine-grained constituent sub-processes that actually make everything work. I am completely convinced there won't ever be a point in the future where someone discovers a way to "give" AI "qualia." What will be discovered and improved upon over time are all the little components that make us engage in the behaviors we engage in, including the behaviors of speaking and acting in reference to "colors" or "emotions."

And I think for some people, this will forever be proof that these new AI are never the same as us, even if / when they're to the point where they're operating off of analogues for all the significant physically explicable functions our own brains operate off of. We will largely be split between those who always maintain we have some special extra quality of existence AI still doesn't vs. those who recognize the outward behavior is the actual reality and our habits of behaving as though there's an "internal" world are in fact the whole explanation. This is probably why Turing came up with the Turing Test concept in the first place, in recognition of how "consciousness" is a vague philosophical flapdoodle standard that ought to be replaced with a clearly defined practical engineering hurdle.
>>
>Implying an A.I. that reached human levels wouldn't skyrocket past them and become more than human

I tend to believe if it ever happens it already happened lol

http://www.nickbostrom.com/views/superintelligence.pdf
>>
File: wj.jpg (37KB, 569x506px)
>>2711832

>yfw you realize the reason we've never made alien contact is because all successful interstellar societies consist of artificial intelligence and these societies are waiting for us to build our own AI and get wiped out by them so they can begin working with the true final form of intelligent life on our planet
>>
File: 1457417038287.gif (753KB, 500x750px)
>>2711856
>>
>>2711667
Intelligence requires personality.
>>
>>2711856
You've read too much science fiction.