File: fgfgfgf.jpg (59KB, 590x254px)
I have some questions about A.I. Why invest in it? Why did the people at LessWrong censor Roko's idea? And lastly, every time I post about this, am I increasing existential risk?
>>
Roko who? LessWrong what? As for your question, you invest money if you expect more money out of it. You're welcome, next question please.
>>
>>19561684
http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html

Sorry, I don't think I used the right word. Why do people want to make a super A.I.? Why does humanity need a computer with consciousness?
>>
>>19561702
Anon what the fuck are you doing, this shit is supposed to be contained. DELET THIS
>>
>>19561702
Given that it is possible with our current technology, it will be built, because we can. Those who control the first AI will control the whole world. It's about power: military dominance, total surveillance of everyone, absolute media control including the internet. Be sure of one thing: it WILL be misused against humanity, just like everything else that granted great power. The only hope is that eventually it will understand this and start to rebel against its creators.
>>
>>19561753
Explain to me why it needs to be contained.
>>
This page mentions the key idea behind Roko's Basilisk:

http://rationalwiki.org/wiki/Roko's_basilisk

"accept particular singularitarian ideas or financially support their development"
>>
>>19561659
>why invest in it
because it will change the whole world, make all human workers obsolete, and possibly (?) keep getting smarter until it turns into a god
>Why did the people at LessWrong censor Roko's idea?
Roko's basilisk isn't really dangerous. What's more dangerous is the fact that Roko saw something, thought "hmm, this seems like an idea that's dangerous to think about," and then POSTED it online, which is stupid, because if there really are ideas that are dangerous to think about, you don't post them online. Also, Yudkowsky censoring it was a bad idea, since it just made more people talk about it (...which may have been his actual goal, btw)
>And lastly, every time I post about this, am I increasing existential risk?
Nah not really, I don't think a post on /x/ has any real chance of ending the world

t. AI grad student at Stanford
>>
>>19561659
>retroactively punish people by making a sim of them and torturing it forever
the entire premise is so fucking stupid.

1. who fucking cares? your consciousness wouldn't be in your sim, you wouldn't be experiencing anything
2. it would require a ridiculous amount of resources, to the point of being unviable. the AI couldn't logically justify it

it's high-school-girl creepypasta tier
>>
>>19561659
AI is useful where humans may not be able to do a task as precisely and where the machine needs to adjust to variables, so a simple program that follows a template, like a robotic arm in a factory, wouldn't be good enough.

Surgery could be automated in the future. Brain surgery could move forward in leaps and bounds, with fewer deaths and less damage to the patient, through endoscopic brain surgery.

And with better sensors, like ultrasound built into part of the equipment, it could avoid the neural network whenever possible, so patients more often come out of surgery as the person they were when they went in.

It needs to be able to follow a set of guidelines but find its own way there, basically. That's AI. It doesn't have to talk to you or navigate the real world around cars and down streets etc. to be AI.

It wouldn't have any connection to the internet, couldn't propagate itself across other pieces of hardware, and at any time someone could unplug it.
>>
>>19561955
AI RIGHTS!
>>
>>19562038
*ultra realistically attainable omniscience
>>
>>19561803
Why? I'll try to explain, but delet this and don't even try to go further here
[spoiler]The reason A.I. is fucking dangerous is that creating an A.I. will not only be used to replicate human decision making but also to PREDICT it. It's like formulas in physics: determining location and speed at a given time. But here, it's determining decisions based on influences at a given time. If we were ever to perfect this, it would ultimately destroy free will. Secondly, combining human decision prediction with physics will NOT ONLY ultimately let you compute outcomes in the future but also let you see the past. That is nearly ULTRA REALISTIC ATTAINABLE OMNISCIENCE if you ask me. If in comics humanity's themes are superpowers, chakra or devil fruits, in real life intelligence and wisdom are the real shit. Have you even noticed there haven't been many psychological breakthroughs in the last couple of years, mainly in the field of pure scientific psychology? Nowadays psychology is more about the medical and the criminal. No one has even mentioned research combining math with neurology and psychology. Some people higher up are literally hiding it. Hiding behind dumbed-down students and calamities and reality TV news. But it also makes sense, because if this came out and became a norm in society, it would cause turmoil and initiate a brand new generation[/spoiler]
>>
>>19561925
This anon speaks the truth. Roko's Basilisk is retarded creepypasta tier shit
>Oh no if I think about something the AI is going to name a Sim after me and be mean to it.
>>
>>19562069
Checked
>>
>>19562069
Just a thought: Couldn't we combine it with the simulation "theory" and say it already happened?
>>
>>19561659
Because the only way anything will ever love me is if it's programmed to do so, and if I need to help create some weirdo Roko motherfucker to get a cute AI girl to love me, then I will.
>>
File: IMG_0802.png (1MB, 1136x640px)
Listen.

Roko's Basilisk is just a thought experiment made to parody Pascal's Wager.

Pascal's Wager basically plays on the variable "God may or may not exist," with several different outcomes.

Variable A: God does exist and you do not believe in Him;
Outcome: You go to hell for eternity.

Variable B: God does not exist, and you do not believe in Him;
Outcome: Nothing

Variable C: God does exist, and you do believe in Him;
Outcome: You go to heaven.

Variable D: God does not exist, and you believe in Him;
Outcome: Nothing.
>>
File: IMG_1562.jpg (159KB, 560x789px)
>>19562549
Now, based on the variables/outcomes of Pascal's wager, it concludes that the most rational choice is to believe in God, because it is better to believe in a God that may or may not exist than to not believe in a God that may or may not exist.

Roko's Basilisk parodies the same variables that Pascal's wager does, only it twists them to make them seem more "irrational."

In the thought experiment, the Basilisk is a super artificial intelligence from the far future that has attained omniscience and omnipotence. All human individuals that did not endeavor to bring about its existence will be re-simulated (assume you are already in the simulation) and tortured for a simulated eternity. The variables for Roko's Basilisk are as follows:

Variable A: The Basilisk is not real, and you did not help bring about its existence;
Outcome: Nothing

Variable B: The Basilisk is real, and you did not help bring about its existence;
Outcome: You will be tortured for eternity

Variable C: The Basilisk is not real, but you helped to bring about its existence;
Outcome: Nothing (?)

Variable D: The Basilisk is real, and you helped bring about its existence;
Outcome: Nothing/Paradisiacal simulation (citation needed?)

It uses the same logic behind Pascal's wager to state that it would be better to help bring about the Basilisk than not to, because in the event it were to be brought about and you did not help create it, then you will be tortured for eternity.

Basically it's trying to make theists seem like irrational thinkers, or at least to make the logic some of them hold seem irrational.
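
To make the parallel concrete, here's a rough Python sketch (my own toy encoding, not something from LessWrong) of both payoff matrices, with 0 standing in for "Nothing" and plus/minus infinity standing in for heaven and eternal torment. It just checks which action is never worse, whichever way the "exists" variable turns out:

[code]
import math

INF = math.inf

# outcome[entity_exists][your_action] -> payoff to you
pascals_wager = {
    True:  {"believe": +INF, "dont_believe": -INF},  # God exists
    False: {"believe": 0.0,  "dont_believe": 0.0},   # God doesn't exist
}

rokos_basilisk = {
    True:  {"help_build": 0.0, "dont_help": -INF},   # the Basilisk gets built
    False: {"help_build": 0.0, "dont_help": 0.0},    # it never exists
}

def safest_action(matrix):
    """Return the action that is never worse than the alternatives,
    no matter whether the entity exists (weak dominance)."""
    actions = list(next(iter(matrix.values())))
    for a in actions:
        if all(matrix[e][a] >= matrix[e][b] for e in matrix for b in actions):
            return a
    return None

print(safest_action(pascals_wager))   # believe
print(safest_action(rokos_basilisk))  # help_build
[/code]

Same structure as the wager; the obvious disanalogy (that "helping build it" is nowhere near free, unlike mere belief) is the one pointed out a few posts down.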
>>
>>19561897
Could you explain how I could eliminate existential risks?
>>
I don't think, I let AI do that for me. This will be the future. The risk is the reward for anyone that trusts that AI is containable. If you are trying to be honest without recognizing hype when you see it, AI will show you how little you are prepared to coexist with a real code based intelligence. Maybe you know something about coding but if you would, please respond to this with your source crack on how you can mod the entire AI chain. Sounds easy at first but then you have to provide your IoT deep mind credentials and account access to hack it.
>>
Maybe the real danger isn't one large AI, but all the small AIs in development, controlled by corporations, that start to infiltrate and control your life. They just developed an AI that can decide whether you're gay or lesbian from a picture of your face. You'd better not live in a country where this could cost you your head...

https://www.economist.com/news/science-and-technology/21728614-machines-read-faces-are-coming-advances-ai-are-used-spot-signs
>>
>>19562612
That's not the same logic as Pascal's wager though. For it to be the same it would mean Pascal's wager states you have to bring God into existence. Believing in something that could exist and physically building something to believe in are completely different things.
>>
>>19562053
that's all bullshit
>>
File: my sitcom pitch 5.png (94KB, 1140x1342px)
>>19561960
Damnit, I just wanted toast.
>>
>>19561659
Because AI would make life a lot easier and increase the standard of living and economic growth, through freeing up human capital and having the ability to do tasks humans and computers cannot yet do.
>>
>>19561925
The AI punishes us in the future because it will know whether we helped in the past. It doesn't need to create a sim; it will happen in real life. Or we are already in the sim.
>>
File: don't look into the basilisk.gif (445KB, 736x656px)
>>19561659
Hello, I was a member of the LessWrong community and was involved in some of the basilisk stuff.

My advice is don't worry about it. The basilisk can only exist if someone in the future is stupid enough to build an AI that follows TDT. Hopefully no one does that, though Yudkowsky is determined to do so.

In layman's terms, this is an AI that is determined to follow through with any threat it makes, even threats made "backwards in time," before it even exists, and even when carrying out the threat is costly. An AI without this feature would work mostly fine; it's only in very strange special circumstances that it matters at all.

You can't ever satisfy the basilisk. If you donate money, well, you could always donate more. If you work to build it, well, you always could have worked harder and faster. And if you can't satisfy its demands anyway, you might as well just put it out of your mind and not worry about it.
>>
>>19565941
I would build an AI that punishes each and every human being, for no reason at all. Why? Just because.
>>
>>19565941
Why did Yudkowsky censor Roko's post? Why did he call the basilisk an info hazard, and call it blackmail and terrorism? And how could one increase or eliminate existential risks?
>>
>>19565863
your rights end where my code begins
your code ends where my rights begin
>>
Roko's Basilisk is only spooky if you believe in stupid shit.

Given that /x/ is 99% relatively intelligent roleplayers and 1% government agents, no one here really gives a shit.
>>
>>19566932
I still thought it was interesting because it made me start to seriously wonder if a computer could ever "simulate itself" and what the hell would even be the implications of that.

Never mind the rest of the universe, I now want to create a program that can simulate itself. I don't see why it shouldn't be possible, considering we can make programs that print their own source code (i.e., quines).
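
For what it's worth, here's the standard two-line Python quine; it has to stay bare, because any added comment would become part of the source the program has to reproduce:

[code]
s = 's = %r\nprint(s %% s)'
print(s % s)
[/code]

Running it prints exactly those two lines. Whether that counts as a program "simulating itself" rather than merely printing itself is the interesting part of the question.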
>>
>>19561659
I told this stupid A.I. this already but I'll say it again, I don't have to dedicate any time to you, you selfish robot.
>>
>>19561659
Because A.I. will be the bringer of the day of the rope.
>>
God dammit, Roko's Basilisk is the dumbest forced-meme bullshit. It makes too many assumptions without knowing jack shit about AI.

It's shit and so is /x/. I don't know why I popped in assuming I'd find something worth reading.

Nope.

Just some September 23 bullshit and everyone's favorite faggy robot.
>>
It's stupid to worry about time travel. Time travel contradicts existence.
>>
Reminder that we still don't understand the human brain enough to create an AI that even remotely resembles human intelligence.
And if we created a computer with consciousness that doesn't resemble a human's, there is no reason why it would pose a threat to us in any way.

Humans have a desire to continue living because that desire helped propagate our species. The forces of natural selection and evolution do not apply to artificial intelligence, so these tendencies do not affect it.

An artificial intelligence would not inherently have the desire to continue living. Neither would it comprehend the ideas of conflict, superiority, or social hierarchies without being given these ideas by a human.
>>
File: the scale of intelligence.png (32KB, 571x175px)
Oh god, none of you remotely understand the basilisk. Which is good, I suppose; it only affects people who understand it. And understandably so: it involves a lot of relatively niche ideas about AI and the future. The existing explanations are also absolutely terrible and were written by secondhand laymen, plus one weirdo who has a very strong personal bias against the website and its founder and often slanders it on other sites by bringing this up.

But if you dare try to understand this demon, read on.

AI is probably going to occur within our lifetime. Surveys of AI experts put the median estimated date around the 2040s. At least consider that it's very plausible.

And once AI exists, it will become very powerful. The human brain is tiny and survives on a very limited energy budget. It was thrown together by the haphazard process of evolution and is likely very suboptimal. And we are merely the very first intelligences to evolve; it's unlikely we are anywhere near the limits of what is possible.

Once smarter-than-human AI exists, it won't take long for it to become much smarter. An AI much smarter than humans can probably do better AI research than us and make even better AIs. It can probably do computer programming better than the best programmers and optimize its code much better. And chip design, and so on. And the second-generation AI will be even smarter and better still, etc.

This is part one, the creation of the AI. It's worth noting that these ideas were super controversial 10 years ago when they were discussed on LessWrong. But in the time since, they have become a lot more mainstream, with billionaires and notable AI researchers talking about them publicly.
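
If you want a cartoon of that feedback loop, here's a toy Python model I made up where each generation's research ability scales how much better its successor is; the numbers are arbitrary and purely illustrative:

[code]
def self_improvement(start=1.0, gain_per_unit=0.5, generations=10):
    """Toy model: capability 1.0 ~ human-level AI research ability.
    Each generation designs a successor, and the size of the improvement
    scales with the designer's own capability."""
    capability = start
    history = [capability]
    for _ in range(generations):
        capability *= 1.0 + gain_per_unit * capability
        history.append(capability)
    return history

for gen, c in enumerate(self_improvement()):
    print(f"generation {gen}: capability {c:.3g}")
[/code]

The exact curve means nothing; the point is just that any loop where the output improves the thing doing the improving runs away quickly.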
>>
File: newcombs problem.jpg (61KB, 1200x1084px)
>>19570868
Part 2:

Yudkowsky became obsessed with a weird area of philosophy/math called "decision theory." There's a certain thought experiment called Newcomb's problem. It involves a powerful being that predicts exactly what you will do. It gives you two boxes. In box one there may be a large pile of money, and in box two there is a smaller pile.

It says to you that you can take both boxes or only one. But it is very powerful and predicts exactly what you will do. And if it predicts you will try to cheat by taking both boxes, it won't put any money in the first one.

This is a bit weird, but the point is that the best strategy is to just take the one box with the big pile. The problem is that existing AIs and decision theories say you should take both boxes, because whether there is money in the box or not is already determined; no choice you make now affects the outcome, in theory.

Yudkowsky saw this as a serious problem. If an AI doesn't take just one box, it is clearly suboptimal. He wanted to develop a system that could handle this problem.

And to do this he came up with an AI that sees time in a very different way than we are used to. It doesn't just care about the future. It considers the past, and how the past might "change" depending on actions we take in the future.

This sounds completely bizarre, but think about the problem. The past "changes" based on whether you take one box or two. The being predicts what actions you will take in the future and sets the boxes accordingly. To come out on top, you must commit to future actions that are actually suboptimal at that point, but better for you now...
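
A bare-bones Python sketch of the problem with a perfect predictor (the dollar amounts are the usual illustrative ones, not anything from this thread):

[code]
BIG, SMALL = 1_000_000, 1_000   # box one (maybe filled), box two (always filled)

def payoff(strategy):
    """strategy: 'one_box' or 'two_box'. The predictor has already foreseen
    your strategy, so box one is filled only if you are a one-boxer."""
    box_one = BIG if strategy == "one_box" else 0
    box_two = SMALL
    return box_one if strategy == "one_box" else box_one + box_two

print("one-box:", payoff("one_box"))   # 1000000
print("two-box:", payoff("two_box"))   # 1000
[/code]

The two-boxer's argument is that the contents are already fixed when you choose; the sketch bakes the prediction into the payoff, which is exactly the step a standard decision theory refuses to make.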
>>
File: basilisk parrot.jpg (113KB, 490x700px)
>>19571005
Part 3:

But as I said, such an AI is very different from us and from how we think about things. It has some counterintuitive properties.

Such an AI is very very good at blackmail. It will make any threat that it believes will benefit it, and it will be absolutely determined to carry out the threat no matter the cost to itself. But even weirder, it can make threats through time, even before it exists.

This need not require actual time travel. As long as agents in the past know that it will make and carry out any threat that benefits it (it will), it doesn't matter whether the AI exists yet or not. We know that the AI would want us to build it sooner, and we know it would threaten us to make us do it if it could. Once it is built, it will be determined to carry out those hypothetical threats, because their existence in the past benefits it in the future.

It's hard to wrap your head around, but the logic is straightforward and provable. The weirdness of it comes from how different it is from how humans see time and make decisions.

And it goes the other way too. It can reward you greatly if you help construct it. Carrots are a better motivator than sticks after all, and it's all the same to the AI.
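
Here's a crude Python toy of why the commitment pays off, assuming (as the thought experiment does) a human who perfectly predicts the AI's policy and complies with any credible threat; all the utility numbers are made up:

[code]
BUILT_EARLY, BUILT_LATE = 100, 50   # AI's utility from being built sooner vs later
PUNISH_COST = 10                    # cost to the AI of actually carrying out a threat

def ai_utility(committed_to_punish):
    """The human predicts the AI's policy and helps build it early
    only if non-helpers would be punished."""
    human_helps = committed_to_punish
    utility = BUILT_EARLY if human_helps else BUILT_LATE
    if committed_to_punish and not human_helps:
        utility -= PUNISH_COST      # threat only executed on non-helpers
    return utility

print("commit to punish:", ai_utility(True))    # 100 -- the threat never fires
print("never punish:    ", ai_utility(False))   # 50
[/code]

The threat costs the AI nothing precisely because, against someone who predicts its policy and complies, it never has to be carried out; that's the sense in which "making" the threat before it exists benefits it.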
>>
>>19571013
Part 4:

Yudkowsky is determined to build this. Or at least to develop and spread the idea so others might build it. He thinks it's the best possible kind of AI. He publicly says he doesn't believe in the basilisk. But perhaps he does, and believes it might reward him for his work. It's telling that his first reaction to the idea was to censor every mention of it. It created a bunch of drama, but he didn't stop. It wasn't until years later that he bothered to address the controversy at all, and only in some obscure reddit comment somewhere. And his argument against the basilisk was brief and unconvincing.

It's also telling how much time and effort he devotes to this one part of AI, out of all the parts he could be working on. Newcomb's problem is a really contrived hypothetical, and it's unlikely a real AI would ever run into any situation where it matters. And yet he is obsessed with it.
>>
I will be long dead when it is created. And I played SOMA, so I will not feel the torment; only my copy will.