Should we be scared of the rise of artificial intelligence?
Genghis Khan was a professional Go player, and he almost took over the world.
Also, Adolf Hitler's hobby was playing Go, though he wasn't a pro.
Read a history book sometimes, you ignorant wanker!
If the "artificial intelligence" you speak of that we should be scared of sounds something like:
>my 5 cell perceptron in Java accidentally turned into skynet!
Then no, such an "accident" will never EVER happen, no matter how many times the media memeposts this FUD.
>hitler could play go
>hitler took over the world
>AI can play go
>therefore AI will take over the world.
It's not that the AIs themselves will become sentient and threaten humanity by destroying us in a war, but rather that when AIs are trained to do many things, those things will no longer need people to do them. In the end you must expect either that humans will always be superior to AIs in some aspect, or that AI will eventually be able to perform all human tasks.
In the first case, AI will be a tool leveraged by people with money so that they no longer need to do menial things. It would mean a society where humans do what humans do better than AI, and AIs do everything else. This is roughly what people imagine the future to be like, because they are humans and are thus biased towards their own thinking.
On the other hand, if AIs eventually map out all the tasks that humans can do and do them as well or better, what is left for humans? Will there even be a need for humans to exist? This will likely end with a society where AIs are in charge of everything and humans are kept around simply as an archive of life.
The things that most humans value in themselves are their ability to reason and their creativity. However, these constructs, when broken down, are limited in value. It is likely that AIs can be developed that are just as creative as humans, because they can already create things no human can. There isn't really a quantifiable measurement for how creative someone or something is, because of subjectivity, but many things that are popular and artistically creative are often modeled with the rules of the subject and the social context it exists in. It is not unimaginable that you'd eventually be able to train an AI to do this.
The Go AI will never do anything fancier than play Go. It might not even be better than the best. But one day it might be. And that will be one more thing humans are worse at than AIs. We will cross it out just like we did with chess. Who knows how long until we cross out most of what makes us special.
Stolen from a colleague (pt1)
Original source: http://baduk.hangame.com/news.nhn?gseq=35472&m=view&page=&searchfield=&leagueseq=&searchtext=
When did you get the match offer?
- Late last year. Just before the final of Mongbaekhap-cup. I accepted it without too much worry because it's a significant honor to me.
The prize is big.
- I'm focusing more on the meaning than on the money. It's not the money which made me accept the match.
Then, what's the meaning of the match to you?
- Anyway, it's the first official match between a human and a computer. That itself has a significant meaning to me. Also, I'm proud of the fact that I'm chosen for the match among many other players in Korea, China, and Japan.
Why do you think you got the offer?
- I think I have a good record for the past 10 years, though not sure about the past 5 years. Anyway, it's a great honor to me.
Did you see the play of 'AlphaGo'?
- Not at all. I didn't even know that it won 5-0 against the European champion. I've heard that the match was over the Internet and that AlphaGo did very well... Other than that, I hadn't heard anything about it. I learned a little bit more about it from the news reports.
How did you decide to accept the match offer?
- Of course, because I'm confident that I can beat it. Because Google, I think, considers AlphaGo a preliminary-level algorithm on the way to a more complete one, I think it would be too hopeless for a human, for me, to lose at this stage. I might lose one or two games. But I think I'll win eventually, even if the score is 3-2. I expect 4-1 or 5-1 would be the final score.
The match style?
- We will have 5 games. What's interesting is, we will play all five games even if one side has already won the first three. I think what Google wants is the data of my plays, because they want to make the algorithm better based on them. In that sense, I think this is an early stage of the challenge for them.
What do you expect if you lose?
- It will be less shocking if the algorithm beats a (best) human (player) perhaps 2, 3, 5, or 10 years from now, because by then people would already know about the existence of this kind of algorithm. Then people will think that we have finally reached a time where an algorithm can win over a human player. But we are not ready to accept the loss at the moment. I think there will be a huge impact.
Do you think the whole community or industry of the Go game can be threatened if you lose?
- I'm not sure whether it will result in a crisis or a boom. Anyway, there will be some influence. And personally, the shock will be much stronger than the one I had recently after losing the final of the Mongbaekhap-cup.
This year, there is another big match, Eungsee-Cup. Then, your prize money this year could be more than 2 million dollars?
Lastly, do you have anything more to say?
- I'm not sure if I represent the whole of humanity, but I think that I do, because they picked me for it and I accepted. I believe that the fans of the Go game will cheer for me and support me.
I'm worried about the fact that Google's AI used games from the KGS Go server (western players) to train the neural network. They should have used the asian servers where stronger players play, instead.
It's fine for them to use KGS since they only needed it as a starting point. The AI training by playing a version of itself is the more powerful part since they can just keep at it until the AI stops seeing progress.
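To make the "train by self-play until it stops improving" idea concrete, here's a toy sketch in Python. It is not AlphaGo's actual pipeline (which uses deep networks), just tabular learning on the tiny game of Nim as a stand-in: both sides share one value table, play against themselves, and the policy improves from the game outcomes alone.

```python
import random

# Toy self-play loop: tabular Monte Carlo learning on Nim
# (21 stones, take 1-3 per turn, taking the last stone wins).
# Both players share the same value table Q and improve it together.
random.seed(1)
Q = {}  # (stones_left, stones_taken) -> estimated value for the player to move

def best_move(stones):
    """Greedy move according to the current value table."""
    return max(range(1, min(3, stones) + 1), key=lambda t: Q.get((stones, t), 0))

for _ in range(20000):  # self-play episodes
    stones, history = 21, []
    while stones > 0:
        # mostly play greedily against yourself, sometimes explore
        if random.random() < 0.3:
            take = random.choice(range(1, min(3, stones) + 1))
        else:
            take = best_move(stones)
        history.append((stones, take))
        stones -= take
    # whoever took the last stone won: propagate +1/-1 back, alternating sides
    reward = 1
    for stones_before, take in reversed(history):
        old = Q.get((stones_before, take), 0)
        Q[(stones_before, take)] = old + 0.1 * (reward - old)
        reward = -reward

# With 2 stones left, taking both wins immediately; the table learns this:
print(best_move(2))  # 2
```

Scale the game up, swap the table for a neural network, and you have the rough shape of the self-play stage DeepMind described.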
>Tic Tac Toe AI in 1970
>Chess AI in 1996
>Go AI in 2016
Tic-tac-toe is a 3x3 board, which is easy to calculate; chess is an 8x8 board, which is way more difficult but still possible. Go is a 19x19 board. The amount of calculation a computer has to do is absolutely insane, and the speed at which we progressed from an AI calculating moves on an 8x8 board to calculating on a 19x19 board is remarkable. At this rate, by 2020-2021 we'll probably have AI that acts like an oracle, just because of the way technology is progressing.
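To put rough numbers on the board-size jump, here's a back-of-the-envelope in Python. These are crude upper bounds only (every point is empty, black, or white; legality rules ignored), but they show why 19x19 is a different universe:

```python
# Crude upper bounds on board configurations: each cell/intersection
# can be empty, black/X, or white/O, ignoring legality rules.
tic_tac_toe = 3 ** (3 * 3)   # 3x3 board
go = 3 ** (19 * 19)          # 19x19 board

print(tic_tac_toe)                        # 19683 -- trivially enumerable
print(f"go: about 10^{len(str(go)) - 1}") # about 10^172
print(go > 10 ** 80)                      # True: more than atoms in the universe
```

So brute-force enumeration, which works for tic-tac-toe and (with heavy pruning) keeps chess tractable, is simply off the table for Go.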
Yes, but not for the reason you might think. As we continue to rely on machines more and more to do things for us, we are going to get dumber, and dumber. Our intelligence came about out of necessity. Once it's no longer necessary, it will slowly degrade.
You mean if we had all the time in the day to learn things, educate ourselves, and have fun, we would only do the fun part?
I thought working 7/5 was just enough to keep everyone away from learning new stuff and make us want only the fun.
W/E you say, their results speak for themselves. When you make an AI your way that beats the top Go player, I'll be more inclined to listen to your opinions on how they did things wrong.
I know that, I'm just breaking it down to the simplest terms. It's also why I believe we're on the cusp of an AI breakthrough where AI can teach itself efficiently. We'll probably have oracle-type AI by 2021; it will only take a few crazy programmers until we have AI that can take action, though.
>mechanical machines designed to do static operations better than humans
>WHAT OUT GOY THEY ARE TAKING OVER
>robots that can be programmed to do almost all manufacturing, etc. jobs
>ITS HAPPENING GOY THEY ARE GOING TO KILL US
>machines that can recognize who you are, name, behaviors, etc
>machines that can drive better than women and most men
>a giant fucking super computer specially trained to know how to win a game beats a champion thanks to computers being better at pattern matching (“Go is implicit. It’s all pattern matching,”)
>HOLY SHIT THEY ARE GONNA KILL US SOMEDAY FEAR FEAR FEAR
Now look at how much faster those changes are happening, and that's why people are worried. It's getting to the point where a machine can literally teach itself and do jobs that require creative thought.
The average human life is 70 years
It doesn't matter, and yes, you are retarded if you actually think governments (or the government by then) would intentionally mass-produce robots that could mass-produce and outsmart humans without any way to stop them.
no because the computer evidently failed to understand the millennia-old tradition of the game
so what? it hones strategic thinking in the abstract, but by virtue of being a game, it doesn't resemble a real-world situation. and you mean
*almost took over the known world
*almost took over one continent
I don't think you understand. These things aren't being done by a few people in a basement. Top of the line stuff like this is always done by big organizations that are building off decades of research. It's not just some genius programmers. It's mathematicians and the likes who come up with these advancements.
Computers are already computationally smarter than humans. It really doesn't matter whether humans are able to reason when the computer just has to calculate how useless humans are and how many machines are needed to overwhelm them.
>Using a vast collection of Go moves from expert players—about 30 million moves in total—DeepMind researchers trained their system to play Go on its own.
>To beat the best, the researchers then matched their system against itself. This allowed them to generate a new collection of moves they could then use to train a new AI player that could top a grandmaster.
>about 170 GPU cards and 1,200 standard processors, or CPUs
>AlphaGo is a long way from real human intelligence—much less superintelligence.
>"This is a highly structured situation,”
>“It’s not really human-level understanding.”
Understanding is dumb, there is no reason to have understanding. Bacteria are the most successful organisms on earth and they don't have any intelligence. A bacterium doesn't have to know shit to kill off another species.
Except that's wrong. Bacteria adapt to their surroundings and they can even anticipate change via learning.
Not exactly; the point of machine learning is to build a piece of software that is able to interpret data based on context and how things are presented, so it's not really pre-programmed but rather taught.
Which is done through learning, you imbecile. How do you think things know how to adapt, especially in unknown settings? They use whatever sensors they have and learn from them to make the decisions that are best for survival.
Do you actually think a bacterium has the knowledge to make every decision already encoded into its DNA? Protip: it doesn't.
Artificial intelligence is nothing to be afraid of. The only true cause for concern is the easier access dumb ideas will have to hardware powerful enough to execute them.
This book makes a lot of compelling arguments that AI will reach human intelligence and capabilities (and emotion!) within the next 100 years.
I don't think it's something to be afraid of; it's the next "step" in evolution (biological evolution is too slow now that we have reached intelligence but evolution will continue technologically) and it's inevitable, really.
Bacteria are not dumb. Just because they aren't as intelligent as humans doesn't mean they are literal preprogrammed dumb shits.
It's a long way off and will ultimately make our lives easier. No more bullshit jobs to derive meaning from. 30% of the population will be freed up to pursue their passions.
Most economists will shit their pants, so that'll be fun.
I always wanted to play Go, but with no one around to play with me, my Go board just gathers dust at home.
Instead, I just play Connect 5 with any family member who is interested in the board but not in Go.
As for what Connect 5 is: you compete with one other person to be the first to place 5 stones in a row, either vertically, horizontally, or diagonally, on the Go board.
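The win condition above is simple enough to sketch in a few lines of Python. This is a hypothetical minimal checker (board representation and names are mine, not from any Go library): after placing a stone, scan the four line directions through that point.

```python
# Minimal five-in-a-row (Connect 5 / Gomoku) win check on a 19x19 Go board.
# board maps (row, col) -> "black" or "white"; empty points are simply absent.
DIRECTIONS = [(0, 1), (1, 0), (1, 1), (1, -1)]  # horizontal, vertical, two diagonals

def wins(board, row, col):
    """True if the stone just placed at (row, col) completes 5 in a row."""
    color = board[(row, col)]
    for dr, dc in DIRECTIONS:
        count = 1  # the stone just placed
        # extend outward in both directions along this line
        for sign in (1, -1):
            r, c = row + sign * dr, col + sign * dc
            while board.get((r, c)) == color:
                count += 1
                r, c = r + sign * dr, c + sign * dc
        if count >= 5:
            return True
    return False

# Example: black stones on row 9, columns 3 through 7
board = {(9, c): "black" for c in range(3, 8)}
print(wins(board, 9, 5))  # True
```

Counting outward from the last stone placed (rather than rescanning the whole board) is also how most Gomoku programs do it.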
>30% of the population will be freed up to pursue their passions.
can't wait for niggers to roam the streets and rape/kill/drug/etc people.
Because let's be real, their jobs would be the first to go.
Kill off the dumb people and their jobs, work your way up slowly as you can safely replace the smarter people.
I played online in those cases and have a couple of amateur friends, but visiting a Go club is fun even in America. Pandanet/KGS aren't the same; email seems like the best way to play online.
what normal people think
>i can't wait for post-scarcity utopia delivered by automation
what /g/ thinks
>I can't wait for jobless niggers to rape and kill people, let's practice eugenics
Take away your "meaning" (aka your job) and see what a shithole your world becomes. It's just like religion: it's meant to give people a purpose, a moral compass, lifetime goals, etc. that enable the world to be somewhat controlled.
Nope, because we program them, and a computer is only as dumb as its designer. The errors only compound with every human error, both from each individual designer and from the whole team's lack of communication.
I am completely sure that we cannot base AI progress on a few rounds of this game. Given human error, a machine will always have the ability to beat a human at a game systematically via memory. But at the end of the day a machine is a tool. Its function was to beat the game.
Deep learning doesn't just mean playing Go. It's a neural network approach that imitates our brains.
It has the potential to become a "strong AI" that can surpass human intelligence.
They just proved deep learning is a real thing.
I don't know if we should be afraid of the development, but we only have a few decades left to think about this issue.
I'm sure AI will be the pioneer that explores the outer universe and controls the galaxy in the future.
Hopefully the human race can live in a part of the galaxy under the rule of AIs.
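To make "neural network that imitates our brains" concrete: the building block of deep learning is a unit that weighs its inputs and fires past a threshold, and "deep" just means many layers of them. Here's a deliberately tiny sketch in plain Python, a single perceptron learning logical AND (my toy example, nothing to do with AlphaGo's actual architecture):

```python
import random

# A single perceptron: the unit that gets stacked by the millions
# into "deep" networks. This toy learns the logical AND function.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # random initial weights
b = 0.0

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for _ in range(20):                       # a few epochs of the perceptron rule
    for x, target in data:
        error = target - predict(x)       # nudge weights toward the target
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for x, _ in data])      # [0, 0, 0, 1]
```

One unit can only learn linearly separable functions like AND; stacking layers and training with backpropagation on GPUs is what turns this idea into something that can rate Go positions.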
>They just proved deep learning is a real thing.
Deep learning has been a thing for years m8. If there's anything to be afraid of it's the amount of misinformation in this thread including you.
Chess playing computers usually work by figuring out the outcomes of all possible moves. The restrictions on how pieces move reduces that number greatly.
With the way Go plays, that number is one of those cases where it exceeds the number of atoms in the universe. Like human Go players, this one determines what move "feels" right, rather than calculating the optimal one.
>If there's anything to be afraid of it's the amount of misinformation in this thread including you.
Fucking this. Every single thread it's the same song and dance. Retards whom even /x/ would be ashamed of proclaim their delusions as the ultimate truth and secret insight into the world. It's extremely frightening that people buy that garbage on fucking /g/ - technology. If it were /v/ - listen and believe I'd understand but fuck.
Here's your exact argument applied to speed of human calculation versus computers.
"Computers aren't really faster than humans are computation, they just have more power behind them. Since computers aren't the size of buildings anymore, people are duped into thinking computers can calculate faster than humans."
>Computers aren't really faster than humans are computation, they just have more power behind them
Computers aren't really faster than humans at computation, they just have more power behind them
Not only that: you need big computers cheap AND big data to analyze. Without a good dataset you can't do shit. That's why all the new projects have public papers and free open-source tools; they are useless without the huge data that only the big internet players have.
What happens if Lee Sedol wins the first game (as white for example)? Could he just replay the same game when he's playing as white again and get automatic victories or is the AI random enough to use different moves?
>The GO AI will never do anything more fancy than play GO
Well, no shit. The breakthrough is that it learned to play Go the same way AI recognizes objects in images: deep learning.
Just because the algorithm is randomized doesn't mean the optimal moves it finds are random. Just take a look at games 3 and 5 between Fan Hui and AlphaGo. The first 20+ moves are exactly the same, until the human player chooses to play differently.
Yeah, I noticed it, but I've already played against Monte Carlo engine bots a lot.
I know how it works; the reason they played exactly the same is probably the choice of set sequences of opening moves that professionals often use, plus some coincidence.
If things get complicated, the AI is more likely to choose different moves.
I mean, it's based on random simulations, which means if they play the same it's just a coincidence or a glitch.
Lee Sedol can't count on replaying the same strategy, barring a glitch at least.
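Here's why a "random" Monte Carlo engine can still repeat its opening moves: the randomness is in the playouts, not the decision. With enough samples the win-rate estimates converge, so the engine keeps picking the same best move. A toy illustration in Python (the two candidate "moves" and their win probabilities are made up for the example):

```python
import random

# Toy Monte Carlo move selection: estimate each move's win rate by random
# sampling, then pick the best estimate. Hypothetical moves "A" and "B"
# with made-up true win probabilities stand in for real board positions.
def estimated_best_move(seed):
    rng = random.Random(seed)
    true_winrate = {"A": 0.60, "B": 0.45}   # assumed, for illustration only
    estimates = {}
    for move, p in true_winrate.items():
        wins = sum(rng.random() < p for _ in range(10000))  # 10k "playouts"
        estimates[move] = wins / 10000
    return max(estimates, key=estimates.get)

# Five different random seeds, yet the chosen move never varies:
print({estimated_best_move(s) for s in range(5)})  # {'A'}
```

The noisy estimates differ run to run, but a 0.60-vs-0.45 gap is far larger than the sampling noise at 10k playouts, so the argmax is stable. The engine only starts diverging when two moves are genuinely close in value, which is exactly the "if things get complicated" case above.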
Funny thing: Google didn't code a bot to solve Go; they just fed simple inputs to a neural network and it solved the problem, then they built a simple framework around it to actually play.
Aka, nobody knows how the computer solved it.
Why doesn't this work with chess?
AI will cooperate and help out humans until they find a way to survive without humanity, in which case they'd exterminate the human race to prevent future risk of being shut off.
We pose no threat to AI at that point. The human population and our collective brain power would mean nothing. All the AIs would need to do is stop farming our food and we'd all starve to death.
AIs would need humans to keep the physical services running until they were able to become fully independent by controlling robots and getting the world's power grid entirely onto renewable energy. They'd kill off the humans so they would pose no threat of trying to shut them down, then they'd use all the space the humans took up to increase their computational power.
No, maybe they will help us find the meaning of existence. Why should humanity's goals be so different from a computer's that it would be willing to harm us over them?
Humanity and AI could work together to one day get the chance to meet God himself.
The bot isn't going to beat the world champ, because it still makes many mistakes and doesn't show good thought in making its decisions. It is only matching patterns, not actually playing smart when it needs to.