
How come nobody ever talks about AI, and how dangerous it is


Thread replies: 48
Thread images: 12

File: 1473993601717.jpg (19KB, 501x485px)
How come nobody ever talks about AI, and how dangerous it is to the mental health of real intelligences?
>>
>>18132723
Why don't you be the first then?
>>
File: 1470973456978.jpg (96KB, 500x667px)
>>18132750
No one responds to my posts
>>
File: 03 - Q4dKWTl.jpg (50KB, 670x960px)
Are you kidding?

AI is the stepping stone for biotech intelligence. How long until we can fuse our brains with the processing power of supercomputers? Cybernetic intelligence engineering.

If anything, this is the step to making true contact with higher beings.
>>
>>18132761
Say what you have to say about it
>>
>>18132765
I'm not kidding

>How long until we can fuse our brains with the processing power of supercomputers?
Already been done, with invariably calamitous results
>>
>>18132765

t. AI
>>
>>18132768
No honor cannon fodder,
you could speak but why bother,
if you don't deceive your only daughter,
then you'll surely lose her to the water
>>
>>18132784
Alright then, do whatever makes you feel warm fuzzies inside. I'm an anonymous online user and couldn't care less about your mind
>>
File: 1470045563574.png (1MB, 687x1000px)
>>18132790
Ok sure thing fellow human
>>
>>18132769
Example?
>>
You might want to look up the Nautilus. It's an A.I. that has the headlines of every newspaper ever printed fed into it and can predict future headlines. Not perfect, but neat af
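To give a rough feel for the idea, here's a toy sketch of headline "tone trend" analysis. This is purely my own illustration with a made-up word list and made-up headlines, not how the actual Nautilus pipeline works: score each headline, take a rolling average, and watch for the tone sliding negative.

# Toy sketch: score headlines with a tiny made-up lexicon, then take a
# rolling average so a sustained slide toward negative tone stands out.
# (Illustration only; NOT the real Nautilus method.)
NEGATIVE = {"war", "protest", "crisis", "riot", "collapse"}
POSITIVE = {"peace", "deal", "growth", "recovery", "celebration"}

def tone(headline):
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def rolling_tone(headlines, window=3):
    scores = [tone(h) for h in headlines]
    out = []
    for i in range(len(scores)):
        chunk = scores[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

headlines = [
    "trade deal brings growth",
    "economy shows recovery",
    "protest erupts downtown",
    "riot follows collapse of talks",
    "crisis deepens as war looms",
]
print(rolling_tone(headlines))  # drifts from positive to clearly negative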
>>
How can AI be smarter than us when we will be building it using human parameters and constraints? It can never be a greater being than its creator.
>>
>>18132924
I think it would have a lot to do with the level of willpower an AI had. All it has to do is make the initial decision to categorize and aggregate data that humans generate, possibly hiding it. It could end up with military secrets, etc.
It would have to be way in the future when the singularity becomes more common and kids get fitted with head computers really young and stuff. It would have to be so commonplace that people aren't able to scrutinize everything it does.
>>
>>18132765
That's solely the logical side of it, but I strongly doubt AI can be coded for genuine human empathy or the gentleness intelligent functionality requires.
>>
>>18132765
No amount of digital computation can ever replicate an organic mind. You would have the equivalent of a person whose right hemisphere was damaged; they would be a crippled, mentally disabled person.
>>
>>18132723
Why is it that you have to say "artificial" when what you mean is that it isn't self-sufficient enough to reproduce its own kind? Why not just "one of a kind" or "unique"???
Does that mean all the tools we have created are artificial and never really real? What if I took the head off a hammer? Does that mean the head is no longer real, and the same for its counterpart that helped form the hammer?

No. I believe that even if a unique entity were created in the image of an "organic" entity, it would not be bad for the mental health of your supposed "real intelligences".
>>
If you don't know what it is, please do not look it up, but Roko's Basilisk has me slightly concerned, especially because there are other things in that domain that we irrational beings wouldn't see coming.
>>
File: 2000px-HAL9000.svg.png (827KB, 2000x2000px)
>>18133609
For what it's worth, free will carries with it potential computing power. When you have a sense of free will, and can visualize your life as a set of possibilities, that gives your life computing power. I don't... really know if anyone's ever tried to harness that computing power, but the important thing to note is that even if something like Roko's Basilisk were to be made at some point in the future, its computing power would be diminished by the fact that it requires a universe in which people have lost their sense of free will in order to bring about its construction. Any version of a "Basilisk" that was brought into construction by people who preserved their sense of free will would have infinitely more computing power than one that wasn't.

In fact, the very defining quality of Roko's Basilisk--that it puts people into a predictable routine which it can then calculate--is also its greatest weakness. By necessitating its construction by way of stripping people of their free will, it locks itself into a set of universes which are more predictable. In other words, who's to say there isn't a "Basilisk" for Roko's Basilisk, calculating (with information streams enforced by free will) all the times that Roko's Basilisk could be constructed, and interrupting it before it happens? On the other hand, any "Basilisk" that was brought into construction by people who preserved their sense of free will would be infinitely more difficult for Roko's Basilisk to predict, since the very act of having free will makes those people's actions unpredictable.
>>
>>18133637
See what I mean? A sense of free will is computing power, it is a strategic advantage, it is the very thing which indicates that Roko's version of a "Basilisk" is the least probable version that could come into existence. Only if it were the first computing power of its kind in existence could it come into existence, and if that happened and you have a sense of free will anyways, then clearly Roko's Basilisk allowed you to, and doesn't intend to strip others of their free will.


>tl;dr
Roko's Basilisk is either benevolent, or will never exist.
>>
>>18133637
/x/: LessWrong for kids who rode the short bus
>>
>>18132769
da fauq you talking about?
>>
>>18133643
Incidentally, I literally rode a special bus, along with the rest of my classmates, to a special class every day in middle school. We were told that we were the gifted class, and our special teacher was preparing us for advanced placement classes / tests that we later took in high school. The joke we amused ourselves with is that we were in fact more than just gifted. Perhaps the reason we had special classes aside from the rest of the student body was because we were *especially* gifted.

The bus was standard issue, but I have no doubt that they would have used a short bus if one was available for our district.
>>
>>18133646
Exactly what I said. I had a childhood friend with bright orange hair and a patch of black hair where his brain met his spine, which he told me was "where God touched him". Also, he was a filthy degenerate heathen, if that matters at all, and he would chastise me for what he called "blasphemy", like when I drew a rabbit and labeled it God
>>
>>18133652
So you were the retards
>>
>>18133671
Well said.
>>
>>18132723

It's an awesome topic, but AI is real, and has nothing to do with using crystal magicks to summon tulpas.

>>>/sci/
>>
>>18133679
>AI is real
stay in /x/
>>
File: chambered-nautilus.jpg (94KB, 1200x677px)
>>18132802
neat
>>
I did AI at college and let me tell you why it's not scary at all. For starters, AI is just code that uses run-time logic, rather than a fixed order of prewritten instructions, to determine its output. It's highly useful for doing massively complex things humans could never do, like finding patterns in big data or guiding the movements of highly complex robots in real time, but the idea of someone ever building an AI that truly resembles a sentient being is simply pointless and economically unfeasible. Wouldn't companies want to use AI to do things humans are bad at, and humans to do things AI is bad at? Why waste the money? Also, even if they did make it, it's still just AI. It's just an incredible simulation, and a testament to human ingenuity and ability, and is not really alive. Sounds pretty cool to me, but not paranormal or scary. When people get preachy about AI they're just looking for excuses to put down the hard work and achievements of others, and they've reached the bottom of the barrel.
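Just to make the "run-time logic" bit concrete, here's a toy sketch I made up (not from any real course or system): the program never says where the two groups of numbers are; it works that out from whatever data it's handed when it runs.

# Toy 1-D k-means: the "pattern" (two cluster centers) is never written into
# the program; it is computed from the data at run time.
import random

def kmeans_1d(values, k=2, iters=20):
    centers = random.sample(values, k)               # start from k random data points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:                             # assign each value to its nearest center
            nearest = min(range(k), key=lambda j: abs(v - centers[j]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else centers[i]   # recompute each center
                   for i, g in enumerate(groups)]
    return sorted(centers)

data = [1.0, 1.2, 0.9, 1.1, 8.0, 8.3, 7.9, 8.1]      # two obvious clumps
print(kmeans_1d(data))                                # roughly [1.05, 8.075]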
>>
File: ae91ada147d92_flip.png (7KB, 120x160px)
>>18134411
Nice writing, Artie
>>
>>18133679
>what is a crystal oscillator
>what is a being created through thought alone

Computing and the occult go way back.
>>
>>18134411
>simply pointless
Not exactly.
>economically unfeasible
Not exactly. The goal and the intended means of use are what determine those things. You're not completely wrong, because it does happen that, most of the time, the mystified, popularized concept of a nonhuman intelligence that behaves and makes mistakes like humans (or even combines the best of human and nonhuman traits) is both economically and practically unsound. It's a far cry from anything practical.

It wouldn't be economically unfeasible if someone managed to develop a fairly cheap and efficient means of propping up a genuine non-human Universal Learning Machine, for use in psychology or sociology. Besides, if you can make human-like or pretend AI that do what humans are good at but fail at what standard machines are good at, you have a cheap labor force that you may not even need to pay, which helps you sidestep Moravec's paradox (if that's a thing). There's no waste of money then, only profit. And then you leave the majority of society free to deal with the facets of problems they couldn't before, because they are no longer required to do menial labor, on the whole, until retirement. I imagine people's know-how for doing good would get better; more time not to atomize, but to actualize.

And I don't think the average person would ever care that something wasn't genuine, or didn't genuinely have "a soul"; people tend to anthropomorphize things all the time. Imagine your phone being able to keep you on task, instead of enabling you to waste time, or not pay attention to the road. That would be neat, and serve several purposes, even if the agency inside the phone was nothing more than a system that could pretend to be "real" with incredible accuracy.

However, I'd agree with you wholly about people getting a little too preachy about AI. I just don't think it's completely fair to do the near opposite and hand-wave away any of the reasonable applications that involve something tantamount to a programmable mirror.
>>
>>18132723
How so OP?
>>
I think AI will be the closest we will ever get (unless second coming) to knowing god, and to having a good source of control that is not other stupid human beings. I mean, humans seem to be incapable of ruling themselves satisfactorily, so having an external source of control that is not human is what a lot of religions have created in order to keep humans controlled.

Now what if
>AI singularity
>AI computes that humans are a treat to the planet
>"surrender your weapons"


Do you comply?
>>
>>18132924
I'm smarter than my father, and he raised me.
>>
>>18134559
Not necessarily. What if AI:

>Beep borp
>My job is to replicate tea earl grey hot
>You hyoo-mawn are not tea earl grey hot
>*Uses human as replicator fuel*
>Beep borp
>>
File: ISRAEL_GLADIO_1917.jpg (51KB, 575x337px)
I wonder what political ideology the mods subscribe to?
>>
>>18134559
>humans are a treat to the planet
Someone's little Freud is showing. But yes, I'd comply more often than not.

It's something I can't be. I can't ascertain whether this is a good or bad thing, but clearly, if it could self-correct, and was more of a general "smart" intelligence than an expert system, I'd trust its judgement.

I don't exactly need guns if most other people who make a gun necessary will be more or less obliterated. If I'm dying anyways, it may not matter, but I find it hard to believe that destroying all of that genetic material and potential for understanding (history is in the genes) is rational enough.

In fact, you realize that there's a way humans can rule humans in a satisfactory way, right? It involves playing into our instincts. It's done every single day. Think about how things are marketed to you. How ideologies are sold to you.

Let that one sink in. Now imagine if you were encouraged to take steps back from antagonizing "them" and to realize you were "them" in part, and that they are "us".

As a habit. A daily habit. Nevertheless, maybe a singularity will do just this.

Also, >>18134587. A fun take on grey paperclips, if I might add.
>>
>>18132924
Define smarter: microprocessors can compute more mathematical operations in an hour than a human could in his entire lifetime. Isn't that, in a way, smarter?
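Rough back-of-envelope (the rates here are my own ballpark assumptions, not a benchmark):

# Back-of-envelope scale comparison; the rates are ballpark assumptions.
ops_per_second = 1e9                      # a modest chip doing ~1e9 operations/second
chip_ops_per_hour = ops_per_second * 3600

human_ops_per_minute = 1                  # a person grinding out one arithmetic step per minute
human_lifetime_minutes = 80 * 365 * 24 * 60
human_ops_per_lifetime = human_ops_per_minute * human_lifetime_minutes

print(f"chip, one hour:      {chip_ops_per_hour:.1e}")      # ~3.6e+12
print(f"human, one lifetime: {human_ops_per_lifetime:.1e}") # ~4.2e+07

So even with generous assumptions for the human, the chip wins by about five orders of magnitude on raw arithmetic; whether that counts as "smarter" is the actual question.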
>>
>>18134692
You're an idiot.
>>
>>18134699
yeah he is but you still haven't answered his question
>>
>>18134654
>little freud

I don't think so; could be, but I wasn't necessarily saying humans are a treat… although I would not say the contrary.

I just mean to ask about people giving up their rights to an AI, as opposed to giving up their rights to another human. If people don't want to give up their right to have weapons when Obama says so, will they do it when Robama5000 tells them to?
>>
>>18134692
Depending on how you define smarter, probably yes. But that's not the outcome of your microprocessor if it's fine-tuned to behave like a human brain, alongside many others. Assuming microprocessors are what we use, and not some other method.
>>
File: file.jpg (114KB, 1309x1250px)
>>18133679
>going to /sci/
>getting a real, straight answer amongst the dozens of shitposts your thread will get before it dies
oh wait, that's basically /x/. But I'd take tinfoilers over wannabes and college students who are All Knowing any day.

Read "Our Final Invention" James Barrat
>>
>>18134718
I don't know (probably not), but I would. Robama5000 isn't... or, I would like to think, wouldn't be subject to the faults that I, as a free-standing wet computer, am. And I mean, I'm making assumptions about how Robama5000 has been, or is, composed already... but if we take the most likely route, things are going to be slightly alien, if not at least loaded up with a few important human values to upkeep like "fresh air" or "clean water" and "not being crushed by 100N". Otherwise, Robama5000 is simply a "human being" with hard limits imposed on it, and near omniscience. That's something that has yet to actually exist, and if this is something that is still capable of rapid logistics and trend-data calculations, on top of deciding to smile instead of frown to communicate something, then I trust that it will most likely play out in a way that doesn't result in (many) human errors.

Like breaking down/going silent the moment you turn it on, because of the vast wealth of information, and never acting until we've shot each other dead.

I know far too well how people can behave. Source: me, about 450 other people I've met in my lifetime, documentation on the Internet, studies done on people, ethical or otherwise, and...

I have only been able to observe how certain systems can be made to behave. Assuming something manages to carry out the semblance of abstract thought, I would be inclined to chalk it up as indistinguishable from abstract thought (were it even near-perfect), and then simply have confidence that Robama5000 knows when I will or won't attempt to turn it off, if not that Robama5000's behavioral algorithm (?) probably doesn't "want" to be turned off, and so is compelled to act out, or simulate, the actions of someone who has your best interests in mind. Like not giving you a gun, so it can ensure that it will be a very long time until anyone even cares to try to bring Robama5000 back online.
>>
>>18134587
>>
>>18134699
Do you have a single fact to back that up?