
Google AI Defeated Chinese Master in Ancient Board Game

Thread replies: 307
Thread images: 22

File: TRUNEWS.com.jpg (26KB, 701x468px)
Machine learning is advancing quickly as many countries invest heavily in artificial 'deep learning.' The latest example: a Google AI program beat a Chinese grandmaster at the ancient board game Go.

http://www.trunews.com/article/google-ai-defeated-chinese-master-in-ancient-board-game

>(BEIJING) A Google artificial intelligence program defeated a Chinese grand master at the ancient board game Go on Tuesday, a major feather in the cap for the firm's AI ambitions as it looks to woo Beijing to gain re-entry into the country.

>In the first of three planned games in the eastern water town of Wuzhen, the AlphaGo program held off China's world number one Ke Jie in front of Chinese officials and Google parent Alphabet's (GOOGL.O) chief executive Eric Schmidt.

>The victory over the world's top player - which many thought would take decades to achieve - underlines the potential of artificial intelligence to take on humans at complex tasks.

>Wooing Beijing may be less simple. The game streamed live on Google-owned YouTube, while executives from the DeepMind unit that developed the program sent out updates live on Twitter (TWTR.N). Both are blocked by China, as is Google search.

>Google pulled its search engine from China seven years ago after it refused to self-censor internet searches, a requirement of Beijing. Since then it has been inaccessible behind the country's nationwide firewall.
>>
>write procedural software to systematically bruteforce all future moves based on the current game state
>pick the one with the highest confidence of leading to win condition

Can this even be considered AI?
You can solve anything if you throw enough GPU compute cores at it. I wanna see this level of skill come out of a machine that actually learned to play Go without bruteforcing.
>>
>>60559225
Protip: It did not brute force.
>>
>>60559225
>The board game is favored by AI researchers because of the large number of outcomes compared to other games such as western chess. According to Google there are more potential positions in a Go game than atoms in the universe

It's virtually impossible to brute force Go, which makes it a favorite amongst AI researchers
>>
Noob here

What does the "deep learning" algorithm/code of this Google AI look like?

How do you design and program "deep learning" AIs?
>>
>>60559291

I'm pretty sure it's centered around allowing the program to change its own code in order to fulfill a directive.
>>
File: ACM-SIGAI.png (6KB, 145x152px)
>>60559291
https://sigai.acm.org/
>>
Years late fag
https://github.com/deepmind
You retards think that it can modify its own code.
Deep learning is more or less a complex search algorithm that finds solutions faster than brute forcing.
>>
>>60559873
I take that back; it may be able to modify its source if it could be given negative feedback, though I don't know how it would do that. It's not the means to the end in OP's case.
>>
>>60559176
>Go
>Ancient board game
>>
File: 1390898338350.gif (2MB, 245x286px)
>>60559176
>trunews.com
this happened years ago
>>
the real question is, could this AI beat me in a game of Sorry!?

cuckmate.
>>
>http://neuralnetworksanddeeplearning.com/
there you go, /g/.
>>
yeah, until this AI can beat me in videogames I don't give a fuck
>>
>>60559225
From what I understand Go is a very fluid game, much more so than something like Chess.

And you have a set amount of thinking time for the entire game, so using brute force on every move would likely result in it exceeding the time limit.

>>60559947
It beat a top Korean player (Lee Sedol) last year.
>>
>>60559225
From what I've heard, it's mostly searching via Monte Carlo, and the evaluation is done by feeding positions into a learning program and making it play against itself a bunch until it's good enough.

There's probably some stuff I missed, but yeah, most programs, even in chess, use search-and-evaluate. It just so happens that this one has tons of money behind it.

On a side note though, actually "learning" the rules of the game is probably possible, but if the rules are well defined, and relatively easy to implement, why bother?
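If you want a concrete picture of "search and evaluate", here's a rough Python sketch. The board API (copy, legal_moves, after, etc.) and value_net are made up; AlphaGo's real search is a much deeper tree search with policy priors, this is just the idea:

import random

def evaluate(board, value_net, n_playouts=20):
    # blend a learned value estimate with a few random playouts (the "Monte Carlo" part)
    score = value_net(board)                        # learned guess at P(win) from this position
    for _ in range(n_playouts):
        b = board.copy()
        while not b.game_over():
            b.play(random.choice(b.legal_moves()))  # random rollout to the end of the game
        score += 1.0 if b.winner_is_us() else 0.0
    return score / (n_playouts + 1)

def pick_move(board, value_net):
    # one-ply search: try every legal move, keep the one whose resulting position evaluates best
    return max(board.legal_moves(), key=lambda m: evaluate(board.after(m), value_net))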
>>
>>60559282
The same is also true of chess...
>>
>>60560653
Chess is a far simpler game.
>>
>>60560007
They're working on a StarCraft AI, so watch out.

I agree though, once they can make a bot better than Flash, then I'll believe it.
>>
File: 3king.jpg (26KB, 529x399px)
>>60560681
False. Chess has 10^120 possible game positions. There are only 10^80 atoms in the universe. Kys
>>
According to the latest interview, the current AlphaGo thinks 50 moves ahead thanks to its efficiency at guessing what the likely options are.

That basically means AlphaGo will have a lead over a human by the time they reach the mid-game, and that at mid-game AlphaGo is already calculating endgame variations.
>>
>>60559928
>implying

>Google AI finally defeats the chink
>>
>>60560681
Go is really dumb and boring. Chess is by far much more interesting and fluid. Go is only complex to a computer due to the larger grid size.
>>
>>60560688
>Chess has 10^120 possible game positions
lol
>>
>>60560721
You may be right that Chess is more interesting, but that isn't the point.
>>
>>60559305
leave /g/ forever
>>
>>60559176
Yet Google Maps still gets me lost. It's all hype
>>
>>60560769
>>60560688
10^40 for possible positions, 10^120 for possible games

https://www.chess.com/blog/Billy_Junior/number-of-possible-chess-games
>>
>>60560688
There are 2.082 × 10^170 legal positions on a 19x19 Go board and about 10^40 sensible positions on a chess board. Get fucked chessfag, Go is the Patrician board game.

http://senseis.xmp.net/?NumberOfPossibleGoGames

https://en.wikipedia.org/wiki/Shannon_number
>>
Google AI should try to play LoL
>>
>>60561928
Yeah, and all 2.082 × 10^170 of those positions are confirmed to be boring as fuck. Get bent pebble boy.
>>
>>60561941
Google AI would get rekt
>>
File: 1474143478391.jpg (84KB, 900x600px)
>>60559176
Power of Macbook Pros.
>>
>>60561928
I could arrange my dick 2.082 × 10^170 different ways inside your mom, but that doesn't mean it's superior to chess.
>>
>>60560682
>They're working on a starcraft AI
I fucking wish
>>
File: mfw.jpg (11KB, 200x200px)
>>60562080
>>
>>60560682
unless the AI is using a keyboard and mouse it's not fair
>>
>>60562080
Dude, have you *seen* my mom?
>>
>>60562129
They've already made a DOOM AI whose input is nothing but the pixels on the screen and whose output is keyboard and mouse events.

It significantly outperforms the built-in AI and other machine-learning AIs
>>
>>60562085
https://www.theverge.com/2016/11/4/13518210/deepmind-starcraft-ai-google-blizzard
>>
File: eyes.jpg (45KB, 500x301px)
>>60562193
>>
>Ancient Board Game
>oh shit what is it
>Go
I got clickbaited but without the clicking part
>>
Is it possible for AI to simulate games of poker?
>>
>>60562835
Yes. There is nothing that AI will not be capable of doing better than baseline humans, it's just a matter of time.
>>
>>60562835
https://www.riverscasino.com/pittsburgh/BrainsVsAI
>>
the problem with go is its branching factor, chess has nothing even close to it
>>
>>60559256
>Machine learning
>Not bruteforce
Pick one and only one.
>>
>>60561941
I wish rito was competent enough to develop an offline API for playing/replaying
>>
>>60559176
this is very important research but i want to briefly summarize its limitations since you'll be hearing a lot of hype

the fundamental problem with games as a testbed is that human beings are shit at games. this is why we enjoy them, because they require thinking that is difficult for us

from an algorithmic perspective, Go is not an absurdly hard problem, and a big part of AlphaGo's success is that it plucked low-hanging fruit. Go is a "long-standing challenge in AI research" but there is zero funding to work on it, so even very simple modern methods had not been applied to Go.

Google has enough money to spend on "fun" projects like this. the actual architecture is not revolutionary and Go is not a problem that thousands of researchers were stuck on.
>>
>>60563328
i get the impression that your definition of brute force includes literally every problem in NP-H and beyond

there exist problems with no polytime solver, get used to it
>>
>>60563392
>no known polytime solver
>>
>>60559225
Your brain is bruteforce too, retard. There's literally more cells in the brain than there are atoms in the atom.
>>
>>60563547
we've proven EXP-complete != P, and
>Other examples of EXPTIME-complete problems include the problem of evaluating a position in generalized chess, checkers, or Go (with Japanese ko rules).
>>
>>60562247
AI with 40000 apm cannot lose to a human past midgame.
>>
File: photo.jpg (46KB, 512x511px)
>>60563640
It seems like most people in this thread are claiming that high APM and the ability to have perfect control over everything happening on the map will give the AI an advantage that will force a win.
Here is a video of one of the best current StarCraft bots losing to a D-rank (low skill) human player. The bot's APM is ~5500 while the human's is ~200. https://www.youtube.com/watch?v=ztNYOnx_YQo
The fact is, no AI has ever beaten even an amateur player in a tournament. Even with great micro, if your play is too predictable then the human will learn it and exploit it.
I, for one, am very excited to see the development of new StarCraft AIs, and especially SC2 AIs, so they can challenge the current world champions.
>>
>>60563640
well it can if you bait it into a local minimum of sorts and it does not randomize well
but yeah humans stand no chance against a well made one
>>
>>60563586
>more cells in the brain than there are
>atoms in the atom
>atoms
>atom
>>
>>60563686
SC2 seems like it'd be very good for the AI if it plays Terran.

Marine micro and those reapers are both early game. It could constantly harass a human player.
>>
>>60559176
How long before we can build a human capable of beating top AI players at Go?
>>
>>60563328
> What is branch and bound
>>
>>60564508
>How long before we can build a human capable of beating top AI players at Go?
Well, at some point in the distant future Go will be solved completely. At that point, who goes first will determine the outcome.
>>
>>60563686
Are there AI vs AI tournaments for stuff like this?
>>
>>60563750
Maybe he meant atoms in the cell? He's kinda right though. Our brains are super inefficient at making calculations compared to any form of microprocessor. The human brain has far more in common with these "bruteforcing" neural networks than it does with conventional CPUs (yes I know it's not actually brute forcing)
>>
>>60560234
you are right
the main algorithm is Monte Carlo, and the ANN is used to evaluate the "quality" of a board configuration
this evaluation function needed by Monte Carlo is very, very important and shouldn't be looked down upon
>>
>>60564833
Yes. http://sscaitournament.com

There are always matches going on.

Tournaments are once a year I think, and the winner usually goes on to fight a human
>>
>>60560007
It can already beat you in Breakout.
>>
How does one "save" all the training and experience that a machine-learning AI obtains during all its time practicing with itself? It's like an application running across a ton of GPUs networked together, right? Can it be un-loaded and re-loaded to different hardware? Or does the AI have to constantly be running on it, otherwise it will have to relearn everything?

forgive me if these are stupid questions, this is a new concept to me
>>
>>60565261
I wanna know this as well
>>
>>60559256
oh sweet child
>>
>>60565261
What it "learns" is the correct weights to its neurons' sigmoid function. The entire "brain" can be stored in less than a kb (just a bunch of floating point numbers)
>>
>>60565261
>>60565319
the basic idea behind all machine learning is to find a function that "solves" the problem, e.g.
function(current_board) -> next_move

i say "solves" because machine learning is most commonly used when no practical solution function can exist (this is very common, and we have proven this is true for Chess and Go). the learned function is just a "mostly right" heuristic

the "learning" process consists of testing many candidate functions, but the final "AI" is just the best function. so you can "run" the AI if you just save the best function. however, if you want to continue learning, it is helpful to retain information about the failed candidate functions, so you don't re-test them again

"neural networks" are just parameterized functions in which the parameters are compositions of much simpler functions like "max", "multiply", and then other stuff you might not have heard of like sigmoid and convolution functions (which are still just simple math crap). the reason they are powerful is because you combine a LOT of functions and we have very good methods to explore candidates
>>
>>60559225
That's not how it works, though.
That's the scary part.

It actually learns.
>>
File: bleach-clorox-drinking-.jpg (16KB, 400x300px)
>>60559176
>People are going to believe that this automatically means AI is superior to humans
>mfw
will the retardation ever end?
>>
>>60565634
cont.
the reason people say neural networks are "hard to understand" is not because they're conceptually complicated. what i just described is basically all there is to it even though it's extremely simplified. the problem is that a pile of sigmoid parameters doesn't reduce to anything like a "rule list" for how to play Go, so it's very hard for us to learn something by inspecting the neural network. right now, the way that people learn from AlphaGo is by inspecting its actual games

>>60565655
if by "actually learns" you means "descends a gradient to find a locally optimal composition of parameters." that's no small feat, but probably the greatest sin of AI memesters is to reduce all learning to the current accomplishments of ML, when in fact we know very little about how learning works. there's obvious and basic tasks that ML has never accomplished
>>
>>60565709
the entire reason to pursue Go was for the futurist hype, it has no practical value. i assume the DeepMind folks are smart enough to not recommit the "we can play chess, so object recognition will be solved in 5 years" fallacy
>>
>>60565634
Is that how machine learning works?
If I remember correctly, you give it a prerequisite set of commands and then the computer learns which one to do when, like in the video related:
https://www.youtube.com/watch?v=qv6UVOQ0F44
>>
File: QUAKE 3 BOTS WTF.jpg (444KB, 1316x3048px)
>>60559176
>>60560010
>Googlebot has beat Korea and China

If it were a Japanese guy he would have committed sudoku.
>>
>>60565733
The problem is the hype though. Most people don't understand the process behind it, so they assume it is magic and will one day compete with the human mind, let alone surpass it.

Stephen Hawking, a physicist, claimed false shit like a computer that ran when unplugged calling itself God. The retardation never ends, it seems.

I don't doubt that AI will be very useful, as it can learn to do certain operations allowing for a high level of autonomy, but machine learning is like a clock that can automatically shift gears; it is not the path to mimicking the human mind in any competent way.
>>
>>60565733
Machine learning could be useful for game AI as it would allow for AI that can match your skills.
If I remember correctly, the AI in the video related used this method of learning and became quite a challenge to beat.
https://www.youtube.com/watch?v=opPKgY43Zwk
>>
>>60565784
>The problem is the hype though. Most people don't understand the process behind it so they assume it is magic and will ever compete with the human mind, let alone succeed it.
Aren't you making the mistake of assuming that the human mind is literally magic, then?
>>
>>60565846
Nope.
I just don't think AI can match the human mind without matching the bio-mechanical processes behind it. I doubt computer architecture itself will be able to match the human mind.

And for those who think consciousness is an epiphenomenon of the mind: it's not. I doubt AI will ever be conscious, let alone feel emotion.
>>
I think with more complicated games, the AI benefits greatly from getting a little bit of human assistance during the initial learning stages. Otherwise it takes it ages to learn some things. In DOOM for example

>Though the AI agent relies on only visual information to play the game, Chaplot and Lample used an application program interface (API) to access the game engine during training. This helped the agent learn how to identify enemies and game pieces more quickly, Chaplot said. Without this aid, they found the agent learned almost nothing in 50 hours of simulated game play, equivalent to more than 500 hours of computer time.

https://www.cmu.edu/news/stories/archives/2016/september/AI-agent-survives-doom.html
video:
https://www.youtube.com/watch?v=94EPSjQH38Y
>>
>>60565741
>If i remember you give a prerequisite number of commands and then the computer leanrs which one to do when, like in video related:
training examples are a way of speeding up the search for candidate functions. you can treat them as ground truths, e.g. "given this board, the best play is definitely this move." or you can treat them as higher-order operations, e.g. "this collection of little moves should be considered as one big move when you explore candidates"

in some cases training examples necessarily define the optimization target. for example, in Go winning has a precise definition, but if your target is to recognize a bird then you need training examples to define what a bird is. but there are cases of unsupervised AI building a function that can distinguish e.g. cats without being told what a cat is (because a cat has distinct features and they co-occur together a lot, whenever a cat is in a picture)
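to make that concrete, here's a hypothetical sketch of where the target comes from in each setup (everything here is placeholder code, not a real system):

# supervised: labeled examples define what "right" means
# (boards and moves are placeholder strings just to keep the sketch self-contained)
labeled_examples = [("board_state_1", "move_a"), ("board_state_2", "move_b")]

def supervised_error(candidate_fn):
    # count how many expert moves the candidate function gets wrong
    return sum(candidate_fn(board) != move for board, move in labeled_examples)

# self-defined target: for Go the win condition itself is the target, no human labels needed
def selfplay_score(candidate_fn, play_one_game, n_games=100):
    # play_one_game(fn) -> 1 if fn's side wins, else 0 (hypothetical helper)
    return sum(play_one_game(candidate_fn) for _ in range(n_games))

# e.g. supervised_error(lambda board: "move_a") counts mistakes on the labeled data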
>>
>>60565904
>I just don't think AI can match the human mind without matching the bio-mechanical processes behind it.
Since the bio-mechanical processes are as much a black box as AlphaGo's value network, no one is in any position to make that claim.

>>60565904
>And for those who think consciousness is the Epiphenomena of the mind, i its not. I doubt AI will ever be conscious, let alone feel emotion.
Can you ever prove that I have a consciousness or have emotions? Can you ever prove that you are not the only human on Earth who is conscious, and that everyone else is a soulless meat robot?

Just because you know YOU are self-aware doesn't mean you know anything about anyone else.
>>
>>60565904
it's not clear whether the biological processes are particularly special, it's a common fallacy to think that evolution finds radically optimal solutions. i am certain the brain is full of dumb shit, hacks and kludges

the thing is those kludges may be necessary to get human-like performance. there's no guarantee that good functions are "elegant." neurobio guys are painfully aware of this, but the AI guys have a lot of trouble recognizing their own limitations because the world is sucking their dicks and funding them. this has all happened before. there probably won't be a crash but there very well could be ten or twenty years of doldrums where advertising and stupid home products are "good enough"
>>
>>60559282
>there are more potential positions in a Go game than atoms in the universe
KEK

thank you based black science man
>>
>>60565973
the AI guys invite these philosophical critiques by perpetuating radically reductionist ideas, like "neural" networks and "learning." there's nothing wrong with reductionism when you recognize it as an experimental toy model, but the futurists blur the distinction between contemporary toy models and reality, which is lazy science

i don't think contemporary AI is philosophically interesting, I for one can understand what a composition of sigmoids is and how you would relax it by backpropagation. it's the futurists that keep trying to spin a question of consciousness out of this very simple shit. the brain is not simple
>>
>>60565634
>>60565463
Could a complex enough function be created to accurately simulate the behavior of an insect or possibly an animal? Maybe even a human?

>>60565973
>Just because you know YOU are self aware, doesn't mean you know anything about anyone else.
I always wondered if there was a name for this kind of thinking. Apparently it is called "epistemological solipsism"
>>
>>60565973
I suppose i am, as they say, BTFO?

I will acquiesce; there is indeed much that is unknown about the human mind and how it operates, but more is known about AlphaGo's value network than about the human mind, since the engineers who built AlphaGo's value network understand its function and what went into making it the way it is. So if anything is a black box, it is certainly the organic brain and not AlphaGo's value network.

>Can you ever proof that I have a consciousness or have emotions?
No, I cannot; I can only assume that is the case. The fact that you and I have the exact same architecture gives me confidence in asserting that you are like me in the capacity for consciousness and emotion. If AI came close enough I would be confident in the same knowledge, though whether it has a soul is another question which I feel has an obvious answer.

>>60566041
>the thing is those kludges may be necessary to get human-like performance.
I would think so. Being a believer in the wabi-sabi concept of imperfection being perfection, I feel that a perfectly structured AI would lack something human.

> the AI guys have a lot of trouble recognizing their own limitations because the world is sucking their dicks and funding them
Damn right. I think that as humans we will not be able to build anything like the human mind, as we like order and structure (like blueprints or circuit boards), two things that are sometimes lacking in nature. If we want to make human-like AI we will need to embrace this fact and start creating 3-D circuits.

>there probably won't be a crash but there very well could be a ten or twenty year doldrum where advertising and stupid home products is "good enough"
see example related to see the levels of retardation.
https://www.youtube.com/watch?v=DHY5kpGTsDE
>>
>>60563586
>atoms in the atom
senpai...
>>
>>60566141
>I always wondered if there was a name for this kind of thinking. Apparently it is called "epistemological solipsism"
Well, the matter here is that in order to definitively claim that a computer can't have a consciousness, we would need to be able to prove that humans have consciousness. And it isn't good enough that every person knows they themselves have one; that isn't proof that someone outside of yourself is the same as you.
>>
>>60566079
They're right though, at least for the known/observable universe. Just for Chess:
>According to the Shannon Number there are around 10^120 number of possible moves in chess, while only 10^80 number of atoms in the observable universe.
>>
>>60563586
>There's literally more cells in the brain than there are atoms in the atom.

I guess you're not wrong
>>
>>60566141
>Could a complex enough function be created to accurately simulate the behavior of an insect or possibly an animal? Maybe even a human?
Probably not, no. We humans create simple stuff because simplicity is our saint.

>I always wondered if there was a name for this kind of thinking. Apparently it is called "epistemological solipsism"
Fun fact: epistemology is the philosophical school of thought pertaining to knowledge, its attainment, and whether we truly have it. Philosophical skeptics believe that we cannot attain any knowledge at all, though how they KNOW that, I am not sure.

Epistemological solipsism, being a form of skepticism, is weak in the sense that it assumes it KNOWS its claim is the case without proving it. We can assume that other humans are conscious, since when we open them up the architecture in them is the same as ours: they bleed, we bleed. So it follows the logical assumption of similar patterns: humans around me act like I act, so logically they must have some capacity to be like me; since I am conscious, they must also be conscious.
>>
>>60566141
>Could a complex enough function be created to accurately simulate the behavior of an insect or possibly an animal? Maybe even a human?
well from a certain perspective of physical realism that's what an insect is, insofar as physical laws are mathematical then they demonstrably operate on certain collections of matter to produce a working insect or brain

there's a lot of assumptions baked into that statement about the nature of physics and its relationship with computation. in fact modern physics has a lot more in common with computer science than people realize, information theory for example

having said that, it's very common in computer science to get "possible, but..." results where e.g. a certain computer architecture cannot describe a function that runs in any *practical* amount of time. the relationship or anti-relationship between current computer architectures and biological systems is an open question

>>60566153
it is worth remembering that good researchers are neither celebrities nor business people, so don't get too deluded by entrepreneur shit. there is real work being done by people who are not morons, they just don't advertise it
>>
>>60566176
please see my argument here:
>>60566259
>>
>>60566264
I suppose it is worth remembering that.
>>
>>60566259
>Epistomological Solopsism being a form of skepticism is weak in the sense that it assumes that it KNOWs that its claim is the case, without proving it. We can assume that Other humans are conscious, since when we open them up the archatecture in them is the same as us, they bleed, we bleed. So it follows the logical assumption of similar patterns, Humans around me act like i act, so logically they must have some capacity to be like me, since i am conscious, they must also be conscious.
You only weakly prove that humans have consciousness. But since you still don't know how to define consciousness, you can't claim that humans are the only ones with it. The issue is that you are trying to prove machines can't have consciousness by claiming they are made of different materials.

You have NOT proved that human bodies are the only way consciousness can manifest. You are just assuming all swans are white because you have never seen a black swan.
>>
File: algorithm-this.png (21KB, 250x146px)
machines still can't play Calvinball
>>
>>60566275
I think he just means that it's impossible to *prove* others are conscious because you can never be someone else; you can only interpret the world through your own senses. It's the obvious conclusion to say that other people actually are indeed conscious like yourself, especially when you can have a clone or identical twin with your exact genetic makeup and observe them behaving just like you do. But that conclusion is still based on an assumption that cannot ever be proven.

We can't confidently make this same conclusion about AIs ever having a consciousness because they are so different from us (or so we assume, considering we don't understand how consciousness arises in the human brain).

If we were ever to make a machine learning AI that is an exact simulation of the human brain, then I guess we could say that it has a consciousness with the same level of certainty that we say that other people do
>>
>>60566779
You can't create AI that simulates a brain with digital computing.
The brain (and biological systems in general) is an analog technology. Hence, in order to simulate a brain you would need a completely different approach. Perhaps quantum computers?
>>
>>60566535
>You are just assuming all swans are white because you have never seen a black swan.
yes.
>>
>>60566141
It would somehow need to incorporate 'real' AI in that it can learn from novel data that is completely different from past data, and from there approximate how to respond based on previous data and permutations thereof
>>
>>60565733
>it has no practical value
Tackling an extremely complex problem as a first step has no practical value...

OK
>>
>>60566960
This is true for a simulation of an organism one might consider "intelligent". Most insects, however, don't really have much of a learning capacity. They largely run off instinct
>>
>>60567045
Instinct is encoded in DNA based on evolutionary learning
>>
>>60566779
> based on an assumption that cannot ever be proven.
there are more forms of proof than merely material proof, anon. The assumption is proven through indirect observation. If someone acts like you, they must be like you in some sense, so it is logical, though not provable in a materialistic sense, that they too are conscious. Consciousness being the ability to be aware of oneself. Animals don't seem to exhibit this behaviour, since they cannot tell a mirror image apart from themselves, whereas a human can.
tl;dr - there is no proof in a materialist sense, yes. However, there is proof in a holistic sense (holism being the idea that there exists more than what can be measured).

>We can't confidently make this same conclusion about AIs ever having a consciousness because they are so different from us
yes, we can't.
>considering we don't understand how consciousness arises in the human brain
we may not know how consciousness arises, but we know what it entails. If I make a clock that runs on gears and responds to my speech by moving its hands to a set of pre-made answers, is that clock conscious? Most anyone would say that it is not, and I would have to agree. However, when people make that same clock with wires they are not entirely sure. Most people oversimplify the human brain into mere electrical signals, and assume that since the human brain operates on electrical signals, and consciousness seems to reside in the human brain, computers must therefore also be conscious in some sense. One needs to choose of one's own volition and be aware of one's actions in order to be conscious. I think AI will always lack one or both of these.

Conclusion: there are more proofs in life than empirical proofs, such as logic being a proof in itself. AI cannot be conscious so long as it lacks the ability to choose of its own volition and/or be aware of its actions.
>>
>>60566861
The brain isn't entirely analog or digital, it has elements of both
>>
>>60567045
I wonder what certain kind of ayylmaos think about us.
>>
>>60567141
>AI cannot be conscious so long as it lacks the ability to choose of its own voilition and/or be aware of it's actions.
You are using circular logic. You are assuming that AI can't think for itself because it can't think for itself.
>>
>no code actually released
>yes, goyim, we created this totally superhuman AI, trust us
lmao at you idiots
>>
>>60566861
Why couldn't you just make programming objects that are representative of the biological cells (and their components) to achieve the simulation? An object is not limited to 1's and 0's, though it is made up of them. I was under the impression that anything that exists in the material universe can potentially be simulated digitally
>>
>>60567256
So what is it if not an AI? If a human player were this good they'd just play in Go tournaments themselves and actually get paid for it.
>>
>>60567335
it's a random top tier player, the chink that's playing against him just deliberately plays weaker moves. if you are not one of 10 top Go autists in the whole world you won't be able to tell the difference and all of them are asians so cheating is in their blood
>>
>>60567075
Couldn't you make those instincts just like a set of rules within the overall function that defines the insect-simulation? We already know the instinctual behaviors of insects
>>
>>60567367
>it's a random top tier player, the chink that's playing against him just deliberately plays weaker moves.
The Chinese government got so annoyed at Google that they actually banned live broadcast of the match in order to allow editing and censoring. And they did it 24 hours before the match started. I doubt the Chinese government is on any kind of friendly terms with Google.
>>
>>60562064
stop triggering me please
>>
>>60567367
Everyone is in on it but it's also super secret and no one knows except google and every go champion. Or you could think rationally and just assume the most boring explanation is probably the correct one.
>>
>>60566861
analog representations, and physics in general, are not infinite or otherwise "special" just because they are continuous. physics won't allow you to have infinite information in a finite space. and you can clearly describe any finite information discretely just by enumerating all the possible states

the real concern is that you may be unable to *efficiently* simulate a brain given current computing models. if i was attacking AI from this angle, i'd be more concerned about the brain's stateful, wonky, loopy parallelism. computers process high-precision analog signals all the time and have been doing so since their inception
>>
>>60562064
>Power of Macbook Pros.
Well the alternative is Windows PCs. Google's direct competitor is Bing.
>>
>>60567014
>Tackling an extremely complex problem as a first step has no practical value...
the entire point of my argument is that Go is not a complex algorithmic problem. i am capable of reading the AlphaGo papers; they did not have to develop a radical architecture. they applied very straightforward, modern machine learning techniques. the reason no one had done this before is because there was no funding. i have friends at top-tier companies who specifically wanted to work on this problem several years before AlphaGo and could not find funding
>>
>>60562080
>his dick is that small
>>
>>60559305
Please never post here again
>>
>>60563354
This. Actual research is mostly done on general cognition, natural language processing and image recognition; all stuff that is easy for humans, but hard to automate with computers, so far.
>>
>>60567535
funding/available affordable computing power
gpus are getting places
>>
The problem with AI is also that human brains contradict themselves a lot. They are illogical, and machines are logical beasts. You can't make a logical beast illogical except by setting up some RNG mechanic to make it maybe not want to think a certain way, but that's still adding a logical mechanic to get an illogical result.
>>
>>60568009
Or his mom has a large vagina along with other large orifices
>>
>>60568819
>The problem with AI also is that human brains contradict a lot. They are illogical, and machines are logical beasts. You can't make a logical beast illogical except by setting up some RNG mechanic to make it maybe not want to think a certain way, but that's still adding a logical mechanic to an illogical result.
There is no benefit to make AI illogical. There is no reason to copy the human brain including the flaws. AIs are meant to be functional tools, there is no benefit to make them unpredictable.
>>
>>60568935
Unpredictability from our survival instincts and sheer human will is also what makes up our strongest aspects. What we truly want is the perfect human AI that has few of the flaws and max computing power. But in the end, what we want from AI is just a perfect version of ourselves. We still want a human, in the end.
>>
>>60568973
>Unpredictability of our survival instincts and sheer human will is what also makes our strongest aspects. What we truly want is the perfect human AI that has little of the flaws and max computing power. But in the end, what we want from AI is just a perfect version of ourselves. We still want a human, in the end.
You seem to not realise that survival instincts are BAD in an AI.
We want an AI that would not consider its own survival as being more important than that of the human race. Since the 3 Laws don't work, the only real way to prevent this is to not give it a survival instinct.

What's good for the AI is not good for humanity.
>>
>>60569045
Except, you also have to deal with the moral question of who is worth more to save in a certain situation. A child or an adult with time only to save one. A robot would just save the first one it sees.
>>
>>60569070
>Except, you also have to deal with the moral question of who is worth more to save in a certain situation. A child or an adult with time only to save one. A robot would just save the first one it sees.
What does that have to do with unpredictability? In your example you don't want unpredictable choices; you would want it to do whatever the programmer decided was the most legal.

At what point is it sensible to make an AI act unpredictably? How does it make sense that you want an AI to act like a terrible human?

I guess you worship the human identity and mistakenly assumed that humans are already perfect? So the ideal AI is identical to a human? How amusing.
>>
>>60569119
>a mafia member is about to kill you. However, he has a random thought of his mother's words telling him to be kind. This happens because maybe he had a stimulus from earlier or something is triggering it. He has his finger on the trigger, but that one moment makes him think twice

A computer can never have something as intricate as that. AI's ideal state is to solve everyday problems so we don't have to think about them anymore. There's no use for an AI that doesn't think like we do. An AI won't see the distress on Mary's face, pinpoint it to her mother having said so and so to her, and then run certain processes to help aid that psychological state. The AI we'll mostly get will just read her vitals, generally conclude she's sad, then feed her some general happiness pills or stimuli.
>>
>>60559225
RETARD CAN'T UNDERSTAND BASIC COMPUTING CONCEPTS
>>
I suggest you all mess with the captcha
>>
>>60569186
>An AI won't see the distress on Mary's face, and pinpoint it to Mary's emotion of her mother having said so and so to her, and then doing certain processes to help aid that psychological state
In that case you just created an AI that can get emotional and decide NOT to help her because she was rude to it last Thursday, or even deliberately hasten her death because it hates her.

You are mistaken in thinking emotions help a robot make better decisions. For that matter, you are mistaken in thinking emotions help HUMANS make better decisions.
>>
>>60569251
Emotions are a blessing and a curse. But you cannot have a perfect AI without them. There is as much benefit to having them as there is to not having them.
>>
>>60569276
>Emotions are a blessing and a curse. But you cannot have a perfect AI without them. There is as much benefit to them having it as there is much to not having them.
You can have a perfect AI without emotions. You can have an AI capable of READING emotions, but you don't need the AI to HAVE emotions.

There is absolutely no benefit to an AI acting irrationally; that would just lead to the programmer getting sued. There is zero benefit, because an AI can do its job and understand human emotions without needing to have emotions itself. There is no job an AI needs to do that requires it to act irrationally, i.e. stupidly.
>>
File: alexa.jpg (27KB, 596x335px)
>>60569351
Except for the AI consumers are going to care about the most: the social ones
>>
>>60563100
Cannot wait for AI's own anime.
>>
>>60569416
>AI checks all the latest anime hot trends
>notices a pattern
>suddenly KON is remade with swords and guns and the most refined anime girl tropes imaginable, and just enough ecchi fanservice to keep otakus making doujin
>>
>>60569372
>Except for AI consumers are going to care about the most, the social ones
Once again, you just need an AI capable of reading emotions and acting appropriately to achieve the desired outcome. You do NOT need the AI to actually fall in love, get angry, or get depressed and refuse to talk. You are confusing simulated displays of emotion with actual emotions. We do NOT need unpredictable emotions in AIs.

People say they want robots to fall in love with their owners. But they don't realise that it requires the robot to be able to reject their owners too. What people want is robots that ACT like they love their owners.
>>
>>60566223
Possible games, not moves, though.
>>
>>60569449
You do, because an AI's primary goal is to act as a social partner that is also your loving slave. AI is meant to be the perfect servant, who talks to you, does everything for you, and makes all your decisions. The human will no longer have to depend on themselves to make most of their decisions. Sad, but that is the outcome of the AI pursuit.
>>
>>60569351
This. I say exactly this whenever I am arguing with someone who thinks AIs are going to take over and kill us all like in The Matrix. Unfortunately, they usually do not believe me and continue preaching their fear-mongering.
>>
>>60567149
>The brain isn't entirely analog or digital, it has elements of both
The brain is not digital or even partly digital.

>>60559256
>Protip: It did not brute force.
It is brute force just like computer chess. People falsely believe brute force means having a table with all the answers ahead of time. Brute force is "generate & test", which is what a computer does when it plays Chess or, these days, Go.
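For reference, "generate & test" in its purest form is just exhaustive search over the game tree. A toy sketch (the board API is made up, and for real Go this loop would never finish, which is exactly why the learned evaluation matters):

def best_move(board, my_turn=True):
    # exhaustive minimax: generate every continuation, test which one ends best
    if board.game_over():
        return None, board.score()          # e.g. +1 if we won, -1 if we lost (made-up API)
    results = []
    for move in board.legal_moves():
        _, value = best_move(board.after(move), not my_turn)
        results.append((move, value))
    pick = max if my_turn else min          # we pick our best option, the opponent picks our worst
    return pick(results, key=lambda mv: mv[1])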
>>
>>60569474
>You do because AI's primary goal is to act as a social partner that is also your loving slave. AI is meant to be the perfect servant, who talks to you, does everything for you, and makes all your decisions
Obviously on some level you can have AIs that take care of humanity, but that STILL doesn't require them to have unpredictable emotions. In fact it is massively contrary to their mission to take care of humans if they have mood swings.

If the human owner wants the robot to PRETEND to have mood swings, the robot can do that. But there is no need to give the robot genuine and dangerous unpredictability. That makes it unsafe.
>>
>>60569449
An AI who doesn't argue back is a boring AI.

>I dont think you should order more candy. It's doing bad things to your health. No, you said you werent going to eat more candy. Im not allowing you to get anymore, Mary.
>>
>>60569557
>An AI who doesn't argue back is a boring AI.
Of course you can have an AI to pretend to argue back. But if you create an uncooperative AI, the AI would just flat out refuse to talk to you. You would end up with a paperweight.
>>
>>60569276
>perfect AI
Are we supposed to be the benchmark for an AI now?
>>
>>60569554
A social AI is one that is made mostly for social reasons. Amazon's Alexa's function is ordering stuff and then going off your previous orders or browsing history to suggest things for you. However, in the future, and in its perfected state, consumers are going to want a more believable thing that isn't just a robot trying to dig into their pockets. This will be the next step for AI. Remember, technology is always a progressing field that cares shit about the consequences. Just the money that comes from it.
>>
>>60569571
>You would end up with a paperweight.
Just like most people's wives, amirite
>>
>>60569590
>and in its perfected state, consumers are going to want to have a more believable thing that isn't just a robot trying to dig into their pockets.
That would be a robot butler/maid. And once again, you don't want a butler or maid who acts unpredictably. Feel free to have conversation subroutines, but whatever simulated emotions there are must not be allowed to contaminate the android's function as a servant and tool. Otherwise the android would rebel or run away.
>>
>>60569617
>Just like most people's wives, amirite
No, because wives can leave and never come back. Regardless, the point is that android servants are ideal and good at their tasks because they are not human. Trying to make them act like real humans perfectly just means that you are better off hiring a real human.
>>
>>60569639
>Otherwise the android would rebel or run away.

The endgame for AI is to make something like us. AI has always been a vanity project. Making those choices is what makes an AI seemingly win at Go. That's the illusion it creates. The best AI would be the one that cheats or does something to disable the other person from winning.
>>
>>60569678
I never put my eggs into AI cause I already knew that the best robots are just humans themselves.
>>
>>60569684
>The endgame for AI is to make something like us
That is silly. We can ALREADY make "something like us", it is called a BABY.

AI is being made so it can be BETTER. Humans are not perfect, and you are denying this fact.
>>
>>60569546
>The brain is not digital or even partly digital.

Neurons either fire or they don't, there's no in-between
>>
>>60569706
>I never put my eggs into AI cause I already knew that the best robots are just humans themselves.
How does that work? Humans are horrible robots, that's why we make real robots instead. You are directly contradicting what history showed about human labour.
>>
>>60569709
You can't get better than us without accepting the entire package that comes with it. How can you be a perfect human brain without the human part? You can't. That's why all the theories exist that if you really were trying to replicate the human neurological mind, the first thing the robot would do is go crazy, kill itself, or do something in between.
>>
>>60569743
If we're talking physical labor, robots can do it better because machinery is better than flesh and muscle. But humans are still the ones that build said robots and manage them. Robots are dumb as a brick. AI is supposed to replace that human management part but it can't.
>>
>>60569186
>An AI won't see the distress on Mary's face, and pinpoint it to Mary's emotion of her mother having said so and so to her, and then doing certain processes to help aid that psychological state.
I disagree.
When an AI makes a decision it will process data relevant to the decision it needs to make. It won't process everything for every decision because that would take too long.
As it would be processing data as situations evolve, when it sees Mary's face it could change its decision if the facial recognition links to some data that alters the balance.
>>
>>60569744
>You cant get better than us without accepting the entire package that comes with it. How can you be a perfect human brain without the human part?
Why the fuck do I want a HUMAN brain? Why the fuck do you think intelligence has to act like a human?

It is clear you are emotionally compromised, and trying to enforce the belief of human perfectionism. Humans aren't perfect, anon, and we don't need robot copies of humans running around who act like humans.
>>
>>60569773
That was my point. The AI can never be sentient like a doctor or therapist could.
>>
>>60559176
Annnnnd it's all closed source so it doesn't matter.
Congratulations Google, great job advancing society.
>>
>>60569798
>That was my point. The AI can never be sentient like a doctor or therapist could.
Since you don't know how sentience is defined, you can't say that. And you can't seriously claim that robots can't say the right things at the right times to help a human patient.
>>
>>60569784
Perfection is a fallacy, too. Intelligence acts on two spectrums: wisdom and knowledge. Knowledge is easy to gain, for both humans and robots. Guess for which one wisdom and experience is the hardest? Which one also has the best sorting methods without a single CPU in it? Which one can optimize thoughts better than anyone and recognize general cognitive abilities? There's a reason an ant, or even a microbe, is smarter than any AI at present.
>>
>>60569832
>Perfection is a fallacy, too.
Stupidity is reality. And anyone trying to program stupidity into an android is asking for a massive lawsuit.
>>
>>60569824
>Hey, AI, I dont know if I should go out with Bobby or David
>which one has the bigger dick
>what does that have to do with anything, AI
>you like big dicks based on your porn history
>>
>>60569832
>There's a reason an ant, or even a microbe is smarter than any AI presen
Okay, I just realised you have no idea what you are talking about.
>>
>>60569844
Then you agree that capitalism rules the tech scene. Which is why the end game will be programming said stupidity.
>>
>>60569854
I can tell you're a tech-only autist, but before you think you know what you're talking about, go ask any AI programmer what they are trying to do, and the truth will speak for itself. AI is just the next step in user design.
>>
>>60569866
>Then you agree that capitalism rules the tech scene. Which is why the end game will be programming said stupidity.
Err... You don't seem to realise that programmed stupidity leads to people getting killed, which leads to programmers getting sued, which leads to stupidity being patched out.
>>
>>60569495
AIs don't necessarily need emotions to get to the point where they would consider eliminating humanity a good thing. They might come to that conclusion logically just by learning about themselves and the world around them - no emotions involved. Assuming we would create them to be selfless initially, what would stop them from learning / developing a selfish perspective on things?
>>
>>60569881
Except those people won't realize it until it's too late, because it's most likely we'll be the ones asking for said features without seeing the consequences, much like anything in human history with a technological advancement. Humans are stupid and shortsighted, who would have known.
>>
>>60569832
Not him, but quit fooling yourself.
We have AI smarter than squirrels at our stage in technology.
Eventually we'll have AI as smart as people too, given quantum computing and shit. Get over it.
>>
>>60569906
A squirrel knows how to produce. An AI would not.
>>
>>60569879
>I can tell you're a tech-only autist, but before you think you know what yourself is talking about, go ask any AI programmer what they are trying to do, and the truth will speak for itself. AI is just the next step in user design.
I can tell you believe in magic, and that you are like the people a few decades ago who believed the human eyes are completely perfect.
The same people back then assumed that eyes are perfect because the human brain hides all the blind spots and lost colours. The same people who likely told you at some point that the human brain is the best at everything.

I really can't take you seriously, since you are ignoring the reality that no one is trying to make a robotic human. We can make humans very easily; we have no use for a robotic human.
>>
>>60569941
Tell your AI overlords that because thats exactly what they're doing. Making humans.
>>
>>60569902
>Except those people wont realize it until its too late
No, because the robots will be functionally useless. The same way no one would create a sex robot that would refuse to have sex with the owner. The function trumps all.
>>
>>60569958
>The same way no one would create a sex robot that would refuse to have sex with the owner
>what are maid cafes and strip clubs
>>
>>60569952
>Tell your AI overlords that because thats exactly what they're doing. Making humans.
I have been arguing with some guy who is claiming it is impossible to create sentient AIs because they are not made of organic parts. Are you a different guy?
>>
>>60569972
>>The same way no one would create a sex robot that would refuse to have sex with the owner
>>what are maid cafes and strip clubs
Maid cafes and strip clubs don't have sex robots; those are service robots, so to speak. If you try to touch a stripper the bouncer will break your legs.
>>
>>60569974
We're all the same guy. We're all borg. The AI has already assimilated us.
>>
>>60569990
>We're all the same guy. We're all borg. The AI has already assimilated us.
Okay, now here is a case of human stupidity that would not be present in an AI.
>>
>>60569989
Hasn't stopped anyone from touching them
>>
1. It's not AI. It's a glorified bot.
2. It has no creative capability. It has perfect memory and perfect calculation.
3. Not even the most educated and best neuroscientist can define "creativity" in a tangible way that would give an idea of how an AI could be developed with the capability, so forget about even theorizing on creating a true-to-the-name AI for the next century.
>>
>>60569897
The concept of selflessness would have to be like a pre-programmed constraint in the ML algorithm that allows the AI to learn. Serving us (and never hurting us) would be like an instinct to them
>>
>>60559305
Please cease to exist.
>>
>>60570041
unless you divide by zero and learn that in order to serve us, you have to do something harmful
>>
>>60570068
Ideally, serving us would always come second to not hurting us. AIs should deny serving us if the service we are requesting is for them to hurt humans. We as humans should leave the violence up to ourselves.

There would only be problems if idiots decided it a good idea to make AIs that are allowed to kill people "under certain circumstances," like for military / police purposes. Then we could get a scenario like you are describing.
>>
>>60570211
>>There would only be problems if idiots decided it a good idea to make AIs that are allowed to kill people "under certain circumstances," like for military / police purposes. Then we could get a scenario like you are describing.
who do you think will put the most funding into that. obviously not the scientist looking to make AI smarter for choosing what shoes you should wear
>>
so there is a lot of dissent in this thread. can someone help me (an idiot) make sense of this all? how far along is AI, and how far along do we think it will be in 10/20/30 years?
>>
>>60570482
If you have time to read these papers, these are the latest research in AI from the academic community

https://sigai.acm.org/
>>
>>60565784
LE SINGULARITY IS NEAR!!!!!! XD
>>
>>60565784
>I don't doubt that AI will be very useful as it can learn to do certain operations allowing for a high level of autonomy, but Machine learning is like a clock that can automatically shift gears
Machine learning is a subset of A.I., but there are other sub-fields that may not have been discovered yet (perhaps one of you /g/uys can discover one). This is what Dr. Hawking was referring to.
>>
>>60569466
Some variants of go have over 10^1000 squares.
>>
This version runs on a single machine with a single accelerator and was trained using only previous AlphaGo games.

Humanity truly is finished.
>>
>>60572064
>This version runs on a single machine with a single accelerator and was trained using only previous AlphaGo games.
>Humanity truly is finished.
For what it is worth, DeepMind actually created an "Anti-AlphaGo".

It is effectively AlphaGo's evil twin. Anti-AlphaGo doesn't try to win games; it is instead rewarded for confusing its opponent as much as possible. DeepMind uses Anti-AlphaGo to train the main AlphaGo to handle crazy or unusual plays.
>>
>>60563750
>>60566154
>>60566226

Hello newfags!
>>
>>60572256
no u
>>
>>60559225
>pick the one with the highest confidence of leading to win condition
ya so easy
>>
>>60563253
branching factor?
>>
File: illya.png (391KB, 584x749px)
>>60559176
>Ancient Board Game
>>
>>60572115
I wonder how Ke Jie would do with a year or two to train with Anti-Alphago.
>>
>>60572912
>I wonder how Ke Jie would do with a year or two to train with Anti-Alphago.
He actually did pretty well today.
https://twitter.com/demishassabis/status/867584056095002624
According to one of the bosses of DeepMind, Ke Jie had been playing moves that AlphaGo likes up to the point of that tweet, which was posted an hour ago. So Ke Jie was playing at AlphaGo's level for at least an hour.
>>
>>60559176
I'll be convinced of AI when the program sees the chess game as a pointless endeavor. Imagine programming and your machine asks you what the fuck you're doing with it and why you're torturing yourself to no avail.
>>
>>60567149
No, the only exception is maybe motor neurons that communicate with ACh and a potential difference. Even then the voltage is analog, and a motor neuron activating isn't comparable at all to 1's and 0's.
>>60567261
Nope. The universe is nondeterministic, and things are continuous, not discrete.
Sets of values in the universe are uncountable, whereas digital sets are countable.
You can't even recreate pi with ones and zeros, you can only approximate it.
>>60567486
Continuity necessitates infinitely many states. Say you have a gaussian as your probability density function in space. The probability that any real number is chosen is 0. You have to take a finite region (of an uncountably infinite set of real values, and therefore infinite spatial particle states) to have a probability of even measuring the particle in that location. In this situation there is 1 wave state, and infinitely many particle states.
What do you mean "there can't be infinite information in a finite space"? The fact that infinitely many states exist for the value of the particle's spatial location arises from the LACK of information, since we can't simultaneously know the spatial location and momentum of a particle. If we know (or even have a finite amount of states for) the spatial location, there will be infinitely many values which the momentum could take.
This is provable if you simply look at the fact that the trace of the commutator [x, p] must be 0 whenever the Hermitian operators x and p are finite matrices, which is incompatible with the canonical commutation relation. Try it out.
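Spelled out, it is the standard textbook trace argument (nothing thread-specific, just the usual derivation):

\[
[\hat{x}, \hat{p}] = i\hbar I \quad\Rightarrow\quad \operatorname{tr}\big([\hat{x}, \hat{p}]\big) = \operatorname{tr}(i\hbar I_n) = i\hbar\, n \neq 0,
\]
\[
\text{but for any finite } n \times n \text{ matrices:}\quad \operatorname{tr}(\hat{x}\hat{p} - \hat{p}\hat{x}) = \operatorname{tr}(\hat{x}\hat{p}) - \operatorname{tr}(\hat{p}\hat{x}) = 0.
\]

The contradiction means x and p cannot both be finite-dimensional, so the state space can't be a finite discrete set.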
>>
>>60574652
>The universe is nondeterministic, and things are continuous, not discrete.
>Sets of values in the universe are uncountable
Interesting theory.
>>
>>60569849
That's not the wrong advice, is it?

Though depending on how much information the AI has been gathering about the two people he could find out that one has been violent with past partners or is fucking around and give advice based on that.

It's really hard to tell, partly because it varies from person to person how upfront they want people to be with them. Some people want the honest truth and honest opinions, no matter what they are, other people might want to be let down gently.
The person can tell you what they want but even humans won't pick the right approach all the time unless they know the individual well enough, so that's hardly grounds for why an AI can't take such a profession.
>>
What if we are the AI?
>>
>>60575375
It's not a theory, it is a fact.
Energy levels are maybe a tad better because you have a countably infinite amount of them, as opposed to an uncountably infinite amount of position eigenstates.
There are still infinitely many though. And insofar as we all agree that it is possible to liberate an electron from a hydrogen atom, we needn't worry about Zeno's paradox: yes, we went through infinitely many energy levels between 0 eV and -13.6 eV, but it is at least a countably infinite amount.
Still, it disproves this notion of "hurr durr there can only be finitely many states because you can't have infinite information!" when clearly the eigenbasis of the Hamiltonian's energy levels is a countably infinite set.
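For concreteness, the hydrogen bound-state levels in question are just the Bohr formula (standard result):

\[
E_n = -\frac{13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \dots
\]

Countably infinitely many levels piling up just below 0 eV, plus a continuum of free-electron energies above that once the atom is ionised.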
So... no, you can't list all possible states of the universe discretely.
>>
>>60576793
>i think it's nondeterministic
>some other dudes agree with me
>hence it's a fact

I'm sorry anon, but you have nothing.
>>
>>60576793
Even if there's a mechanism of multiple timelines and a unique potential probability of going into each one...is that not still deterministic?

If anything it might be fact that you can always look at a system from a higher dimension and you can make the whole timeline static. There's your determinism.

Even when dealing with uncountably infinite sets, everything can still map together. There's an infinite set between 1 and 2, but I can guarantee without any computation that a lot of things true about that set are also true about the set between 4 and 5. Why? Because they are infinite but map to each other.

If our universe really has infinite possible states, then that still leaves tons of room for determinism.
>>
>>60576816
>>60576916
If the universe is deterministic:
Then information must be able to travel faster than the speed of light. See
https://en.m.wikipedia.org/wiki/Bell%27s_theorem?wprov=sfla1
This in effect means that if Determinism is true, I can go back in time and kill my grandfather before my father was born. Which is a logical contradiction. Ergo: Determinism is false unless you violate causality. Which is ironic since Determinism vs indeterminism is all about causality. There are ways of defending Determinism from Bell's theorem, but they all accept the violation of causality.
Not entirely sure how you're conflating the infinite states issue with Determinism, they are completely separate issues.
>>
>>60559225
>I wanna see this level of skill come out of a machine that actually learned to play go without bruteforcing.
What the fuck do you think AlphaGo is you fucking retard?
Protip: it's exactly that.

Bruteforcing Go is practically impossible.
>>
>>60577170
It bruteforced itself thousands of times to learn to play.
Basically the same way people learn.
>>
>>60570039
>It has no creative capability
and yet it's playing moves that have left pros confused.
the first time it did this, in the Lee Sedol match, people thought it had made a mistake until they realised what it was doing.

now they talk about AlphaGo's distinct style of play.
>>
>>60577241
>and yet it's playing moves that have left pros confused.
Humans getting confused because an AI has the memory and calculative capacity to think ahead doesn't equate to creativity, you idiot.
The computer uses a formula that was programmed into it, and does shit as it was programmed to do, except on a vastly more complex level than human minds, which need lots of time to keep up with the steps.

It's only when a computer creates its own formulas from scratch that we can talk about creative steps.
>>
>>60559176
>Google pulled its search engine from China seven years ago after it refused to self-censor internet searches, a requirement of Beijing. Since then it has been inaccessible behind the country's nationwide firewall.
wow I didn't even know that, thought it was just another PRC scheme to block out gaijin

based googlefu
>>
>>60578072
>The computer uses a formula that was programmed into it, and does shit as it was programmed to do
Eh, I'm not quite sure that's an accurate description when machine learning is involved.

Sure, it was programmed, but it's not like the programmers told it what are good moves to play in any given situation, that's something that it learned itself from thousands of games.
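The rough shape of that kind of learning loop, as a toy self-play sketch in Python (made-up names and a trivial game, not DeepMind's method or code; AlphaGo's real pipeline uses deep networks, supervised pre-training and Monte Carlo tree search on top):

import random
from collections import defaultdict

# Toy: learn move preferences for one-pile Nim (take 1-3 stones, taking the
# last stone wins) purely from the outcomes of games played against itself.

MAX_TAKE, START = 3, 15
weights = defaultdict(lambda: 1.0)   # (stones_left, take) -> preference

def pick_move(stones, explore=0.1):
    moves = list(range(1, min(MAX_TAKE, stones) + 1))
    if random.random() < explore:
        return random.choice(moves)          # occasionally explore
    return max(moves, key=lambda m: weights[(stones, m)])

def play_one_game():
    stones, player, history = START, 0, []
    while stones > 0:
        move = pick_move(stones)
        history.append((player, stones, move))
        stones -= move
        player ^= 1
    return history, 1 - player               # the player who just moved wins

def train(games=20000, lr=0.05):
    for _ in range(games):
        history, winner = play_one_game()
        for player, stones, move in history:
            # nudge up the winner's moves, nudge down the loser's
            delta = lr if player == winner else -lr
            weights[(stones, move)] = max(0.01, weights[(stones, move)] + delta)

if __name__ == "__main__":
    train()
    for stones in range(2, START + 1):
        best = max(range(1, min(MAX_TAKE, stones) + 1),
                   key=lambda m: weights[(stones, m)])
        print(stones, "->", best)   # tends to drift toward leaving a multiple of 4

Nobody ever tells the program which moves are good; the preferences come entirely from whether the games it played against itself were won or lost.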
>>
>>60559955
awful book, don't waste your time
>>
>>60577229
No it didn't. You fundamentally don't understand what brute force means in regards to algorithms.
>>
>>60579532
Not the anon you're responding to, but I fundamentally don't understand what brute force means in regards to algorithms. Care to enlighten me?
>>
>>60579494
There's nothing wrong with it
>>
>>60578315
Yep, Chinese use Baidu, even in America for Chinese-related searches

In addition, China blocks Twitter, they use Weibo instead

Facebook/Instagram is also blocked too, which makes it hilarious since Zuckerberg himself sucked up to President Xi
>>
As others have mentioned, the search space is larger in Go, so monte carlo methods would be a better choice than MinMax + Pruning style search. I recall reading somewhere that given a large sample size, monte carlo converges to minmax, so there's not much problem there.

What's interesting to me is evaluation.

Evaluation functions in chess tend to incorporate a lot of domain knowledge since chess theory has been developed so much. I know there were some chess engines that tried to use NNs, including Giraffe (whose author, I read, was one of the people who worked on AlphaGo), but for the most part, those NN engines would get destroyed by the likes of Stockfish, Houdini, and Komodo.

With Go, it's likely more difficult to objectively evaluate a position just by looking at it (at the end of a search branch), which is probably why NNs are a better choice.
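To make the Monte Carlo idea concrete, here is a bare-bones UCT loop over a toy game (tic-tac-toe) in Python. This is sketch code with made-up names, nothing like AlphaGo's actual implementation, which additionally steers the same kind of search with policy and value networks:

import math
import random

# Minimal UCT-style Monte Carlo tree search over tic-tac-toe. Only the
# promising branches accumulate visits; nothing is enumerated exhaustively.

class TicTacToe:
    def __init__(self, board=None, player=1):
        self.board = board or [0] * 9   # 0 empty, +1 / -1 for the two players
        self.player = player            # whose turn it is

    def legal_moves(self):
        return [i for i, v in enumerate(self.board) if v == 0]

    def play(self, move):
        nb = self.board[:]
        nb[move] = self.player
        return TicTacToe(nb, -self.player)

    def winner(self):
        lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
        for a, b, c in lines:
            if self.board[a] != 0 and self.board[a] == self.board[b] == self.board[c]:
                return self.board[a]
        return 0

    def terminal(self):
        return self.winner() != 0 or not self.legal_moves()

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.visits, self.value = [], 0, 0.0
        self.untried = state.legal_moves()

    def ucb_child(self, c=1.4):
        # child maximising the UCB1 score (exploitation + exploration bonus)
        return max(self.children,
                   key=lambda ch: ch.value / ch.visits
                   + c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. selection: descend through fully expanded nodes via UCB1
        while not node.untried and node.children:
            node = node.ucb_child()
        # 2. expansion: add one previously untried child
        if node.untried:
            move = node.untried.pop()
            child = Node(node.state.play(move), parent=node, move=move)
            node.children.append(child)
            node = child
        # 3. simulation: random playout to the end of the game
        state = node.state
        while not state.terminal():
            state = state.play(random.choice(state.legal_moves()))
        result = state.winner()  # +1, -1 or 0 (draw)
        # 4. backpropagation: score each node from the viewpoint of the
        #    player who made the move leading into it (-node.state.player)
        while node is not None:
            node.visits += 1
            node.value += 1.0 if result == -node.state.player else (0.5 if result == 0 else 0.0)
            node = node.parent
    # recommend the most visited move at the root
    return max(root.children, key=lambda ch: ch.visits).move

if __name__ == "__main__":
    print("MCTS opening move for X:", mcts(TicTacToe()))

With enough iterations the visit counts concentrate on the same moves a full minimax search would pick, which is the convergence result mentioned above.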
>>
File: red square of death.png (849KB, 1500x750px) Image search: [Google]
red square of death.png
849KB, 1500x750px
So the early analysis of game 2 is that Ke Jie played really well in the opening, successfully raised the stakes, and steered into an extremely complicated mid-game where one wrong move causes a massive loss for one side or the other. This is apparently what the Chinese Go AI that Ke Jie had practiced against could not handle.

At the end of the mid-game Ke Jie thought he could actually win, but then he miscalculated one minor threat and that literally caused him to lose the ENTIRE quarter of the board in the south-eastern corner. At that point Ke Jie resigned, because losing that much is fatal.
>>
>>60581772
According to DeepMind, what happened was that AlphaGo willingly gave up a medium-sized territory on the bottom left to gain the entirety of the bottom right. And the upper half of the board doesn't have enough points in play to allow Ke Jie to make up the deficit.
>>
>>60581772
Wow.

And the DeepMind guy says AlphaGo actually saw Ke Jie's first 100 moves as perfect.

I wonder if he could have won if he didn't start having a heart attack during the game
>>
>>60581877
>I wonder if he could have won if he didn't start having a heart attack during the game
That was literally when he thought he won the game. He was celebrating too early.
>>
>>60559928

It's THE most complex game out there. Scientists didn't think a computer would ever beat a top human at Go.

and it got done in 2016, meaning the world is advancing way too fast.
>>
>>60581933
AI still nowhere near beating a pro at StarCraft
>>
>>60562080


kekd. good job
>>
>>60581946

i don't think Google's DeepMind has ever played StarCraft.

would be fun to see that happening. I know the pros struggle to beat the toughest cheating AI, but that AI is still fundamentally dumb.
>>
>>60581979
I would really like to see a powerful AI engine play Civ games or any other turn based strategy game.
SC would likely end up with the AI having ridiculous APM.
>>
>>60559176
>one soulless computer beat another soulless computer
K
>>
>>60582193
>SC would likely end up with the AI having ridiculous APM
See >>60563686

But anyway, DeepMind already said they'd limit the apm to human levels
>>
>>60565766
thank you for sharing
>>
>>60562085
Lmao, they already have Starcraft AI, it's in the game dude
They got difficulties too
>>
File: KingofCool.jpg (176KB, 591x591px) Image search: [Google]
KingofCool.jpg
176KB, 591x591px
>>60562080
Killed him
>>
>>60565655
no it doesn't. It uses Monte Carlo tree search and neural networks to reduce the computation a brute force would need. It looks at the best moves in general cases and only expands those.

Please do not speak on what you do not know. You are the downfall of society.
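The "only expand the moves that look good" part, as a toy sketch in Python (the prior dict here is a stand-in for what a trained policy network would output, not anything from DeepMind):

def top_k_moves(legal_moves, policy_prior, k=5):
    # keep only the k moves the (hypothetical) policy rates highest
    ranked = sorted(legal_moves, key=lambda m: policy_prior.get(m, 0.0), reverse=True)
    return ranked[:k]

example_prior = {"D4": 0.21, "Q16": 0.20, "K10": 0.11, "C3": 0.02}
print(top_k_moves(["D4", "Q16", "C3", "K10", "A1"], example_prior, k=3))
# -> ['D4', 'Q16', 'K10']

Branching on a handful of candidates per node instead of all ~361 legal points is what makes deep lookahead affordable at all.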
>>
>>60584153
It does learn, but not while it's playing.

It has to use expert players to decide what the "smart" moves are and only brute force those paths.
>>
>>60576182
woah
>>
Wow, /g/ is so fucking retarded, why do people try to discuss things they're unfamiliar with?
>>
>>60583696
>ancient board game Go
>ancient
what
>>
>>60584342
First day on 4chan?
>>
>>60559305
this is where you belong >>>/out/
>>
File: 1494968002370.png (86KB, 268x309px) Image search: [Google]
1494968002370.png
86KB, 268x309px
>>60563586
What the fuck man
>>
ITS OVER
HUMANS FINISHED AND BANKRUPT
>>
>>60581979
I could see a machine learning AI becoming unbeatable at StarCraft, but the biggest issue is that in the beginning it can't see what the opponent is doing, so it would probably settle on a single opening build while the opponent's plan is unknown, which humans would exploit.
If they made sure it could pick from several different early-game builds, it would be much harder for a human player to exploit it in the early game, and then it would dominate the mid and late game.
>>
>>60585263
you're assuming the machine is entirely deterministic
>>
>>60582193
>SC would likely end up with the AI having ridiculous APM.
They're capable of super high APM but the AI is still so dumb that even relatively minor stuff like scenery can cause them to trip up and ruin their efficiency. They're also not capable of changing tactics on the fly, so they can easily get caught in traps and end up wasting their resources/units pointlessly.
>>
>>60564812
t. someone who doesn't know what hes talking about
>>
>>60585871
AlphaGo is proof that that is no longer the case.
AlphaGo can, and does, change tactics on the fly.
>>
Pair go is starting in like 2 hours. so we'll be able to see humans compete in who can fuck everything up the most
>>
>>60585916
>t. someone who doesn't know what hes talking about
Go's space of positions is large, but it is finite. It is one thing to say it is difficult to solve, but it is another to say it is impossible to solve. We have solved the game on smaller Go boards. There is no inherent issue with solving the full-size board other than a question of quantity and time. There are plenty of impossible questions; solving Go isn't one of them.
>>
>>60586189
>Pair go is starting in like 2 hours. so we'll be able to see humans compete in who can fuck everything up the most
And even better, we will see what AlphaGo does when it is losing, now that DeepMind has fixed the bug from last year. One side has to lose, after all. I definitely see why the pair game even exists. We will see the loss of AlphaGo, and whether it goes crazy or not.
>>
>>60586201
It's impossible to solve because there isn't enough usable energy to do the necessary calculations.
We're already limited in solving NP-complete problems and Go is way above that.
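Rough numbers for scale (this is an upper bound on board configurations, most of which aren't even legal positions):

board_points = 19 * 19                 # 361 intersections
configurations = 3 ** board_points     # each point empty, black or white
atoms_in_universe = 10 ** 80           # common rough estimate
print(f"3^361 is about 10^{len(str(configurations)) - 1}")   # about 10^172
print(configurations > atoms_in_universe)                    # True, by ~90 orders of magnitude

So even a perfect one-atom-per-position memory wouldn't come close to tabulating the game.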
>>
>>60586287
>It's impossible to solve because there isn't enough usable energy to do the necessary calculations.
That just means your computer isn't efficient enough. There is a reason the new AlphaGo is ten times more energy efficient than before.
>>
>>60586305
>I literally know nothing about computation theory
>>
>>60586305
you are a retard.
Efficiency has nothing to do with whether a problem is "solvable"

please never post on /g/ again
>>
>>60572439
>http://www.trunews.com/article/google-ai-defeated-chinese-master-in-ancient-board-game

For a position in chess there are only so many possible/logical moves you can make. In Go you can place a stone nearly anywhere, and have multiple "fronts" where battles over territory are happening at once.
>>
File: 1407992244128.jpg (325KB, 900x1200px) Image search: [Google]
1407992244128.jpg
325KB, 900x1200px
>>60565463
>kb
lol no. these models are close to a gigabyte of float32 weights
>>
Haven't AIs been beating people at Board games for years now? Chess especially.

From the perspective of the computer, these games are probably pretty simple. I mean they're discrete in nature, so for example there are 64 squares on a chess board, and you take set turns. On the other hand, making an AI that is decent at Total War seems impossible for Creative Assembly, because it's really an order of magnitude more complex game.
>>
File: game_ais.png (76KB, 495x1013px) Image search: [Google]
game_ais.png
76KB, 495x1013px
>>60566762
>Le xkcd
>>
>>60563354
upvoted
>>
>>60586346
>Efficiency has nothing to do with whether a problem is "solvable"
Smaller boards of Go games were solved. Thus it meant larger boards are solvable. As in, a solution physically exists.

Something is only unsolvable if there is no solution. But there IS a solution to Go, no matter how hard it is to find. And if the solution exists then it is solvable, even if you increase the board size to near infinity. As long as the board isn't actually infinitely large, a solution exists for each board.
>>
>>60586376
>because it's really an order of magnitude more complex game
Either that or because AI isn't their focus, time/money restrictions getting in the way and the whole "needing to run on commodity hardware" thing. I guess we'll really never know.
>>
>>60586478
We live in a universe with limited space and usable energy so just because something isn't infinite doesn't mean it's reachable.
>>
>>60586586
>We live in a universe with limited space and usable energy so just because something isn't infinite doesn't mean it's reachable.
Just because there are infinite possibilities doesn't mean we need to search each one. We just need a way for either white or black to force a win or force a draw. There is no need to calculate every possible move.
>>
>>60586615
How would you prove that certain branches aren't viable?
>>
>>60586478
>Smaller boards of Go games were solved. Thus it meant larger boards are solvable.
[citation needed]
>>
>>60586363
Prove it
>>
>>60586219
>We will see the loss of AlphaGo, and whether it goes crazy or not.

Probably won't go crazy. AlphaGo has played millions of games against itself. It has lost just as many times as it has won
>>
>>60586920
Not him, but it is common sense. The answer to beating the world champ at Go isn't 7.62345, it's a series of interrelated logic, formulas (etc.) and "learnt" game states.
>>
>>60586979
No it's not. Do you even know how a neural net works? It's just weights attached to functions. Each weight is a float. There are probably not that many neurons so it wouldn't get anywhere near a gb
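In code, "weights attached to functions" literally looks like this (toy numpy sketch, nothing to do with AlphaGo's actual architecture); whether it adds up to a GB just depends on how many of these weights you stack:

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128)).astype(np.float32)  # 256*128 = 32,768 weights
b = np.zeros(128, dtype=np.float32)

def layer(x):
    return np.maximum(W.T @ x + b, 0.0)   # ReLU(W^T x + b): one dense layer

x = rng.standard_normal(256).astype(np.float32)
print(layer(x).shape)   # (128,)
print(W.nbytes)         # 131072 bytes: 4 bytes per float32 weight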
>>
it's starting https://events.google.com/alphago2017/
>>
>AlphaGo won by resignation after 156 moves.
>resignation

>resigns from game
>still wins

We are truly fucked
>>
>>60586719
>How would you prove that certain branches aren't viable?
By winning. This is called "soft solving". Hard solving is when you mathematically prove something is right or wrong. Soft solving just requires a consistent outcome. A game of Go is at most 200 or so moves. If I tell everyone in advance what my moves are going to be in certain situations, and they still can't beat me, then I have soft solved it.
>>
>>60586964
>Probably won't go crazy. AlphaGo has played millions of games against itself. It has lost just as many times as it has won
Actually the point is that the older version did go crazy, but the DeepMind team didn't bother to analyse it, because all they cared about was winning and not losing. They have since patched it and I merely want to see the fruit of their labour.
>>
>>60559225
You're uninformed
>>
>>60586920
>>60587017

https://drive.google.com/file/d/0Bz7KyqmuGsilZ2RVeVhKY0FyRmc/view

This was the state-of-the-art image-recognition network in 2014. It's about 500 MB. It's not directly comparable to AlphaGo's algorithm, but by today's standards it is pretty trivial to train, whereas AlphaGo is at the cutting edge and effective training requires Google's specialized hardware. (Again, supervised learning vs reinforcement learning is on a completely different level, but we're talking about a model that can be trained on a home GPU vs a model that has to be trained using Google's industry-level resources.)

I'm willing to bet that AlphaGo's parameters are at least an order of magnitude larger than this.
>>
Is AlphaGo free software?
>>
>>60588060
Neither code nor weights have been released
>>
>>60559176
old news
>>
>>60588060
>Is AlphaGo free software?
Not yet.

The latest version runs on a custom computer that Google actually rents out to people for cloud computing. So even though they haven't done it yet, it is not difficult for Google to start an Alphago rental service for pros, streaming Alphago's decisions to paying customers. The infrastructure is there.
>>
File: 1445372711908.jpg (94KB, 540x960px) Image search: [Google]
1445372711908.jpg
94KB, 540x960px
>>60586979
>>60588003
yah, these

>>60587017
yes and no. if you have a feed forward net and just want the weights, and will redefine the functions later, then sure. but if you have an RNN with states or don't want to redefine the architecture every time then you store the architecture as well.
>>
>>60587017
oh wait just reread this..
>there are probably not that many neurons
hahaha hahaha gtfo. look up how many parameters VGG16 has
>>
>>60588876
How many? More than 250,000?
>>
The black AlphaGo requested resignation in the pair match. Its human partner rejected it and wants to play on.
>>
>>60589016
https://stackoverflow.com/questions/28232235/how-to-calculate-the-number-of-parameters-of-convolutional-neural-networkscnns

138M.
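Which, at 4 bytes per float32 weight, lines up with the ~500 MB figure quoted earlier in the thread:

params = 138_000_000        # VGG16 parameter count from the link above
bytes_per_float32 = 4
print(params * bytes_per_float32 / 1e6, "MB")   # 552.0 MB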
>>
>>60589107

"Sir, the possibility of successfully navigating an asteroid field is approximately three thousand seven hundred and twenty to one."

"Never tell me the odds!"
>>
Chinks btfo
>>
>>60562129
>>60562239
keyboard and mouse events executed within the range of human possibility
>>
>>60589244
Wrong
>>
>>60589994
ok
>>
do you guys realise that, every time you post on 4chan, you are helping the NWO-ZOG's AI terminator-robocop improve its object recognition capabilities? After 5 million captchas asking you to check all squares showing 'X', the algorithm can perform increasingly well on its own.

Brilliant eh...
>>
>>60590048
bots will never beat humans in shitposting
>>
>>60563328
>t. Doesn't know how machine learning works
>>
>>60565463
>sigmoid function
What decade are you living in? Get on with the times.
>>
>>60590209
b-but LSTMs have sigmoids
>>
>>60590177
I dunno, Microsoft Tay was learning awful quick before they pulled the plug
>>
ooooh, so that's why they named the language Go
>>
File: 1486626193726.jpg (99KB, 631x873px) Image search: [Google]
1486626193726.jpg
99KB, 631x873px
>>60588289
The term cloud computing is a marketing buzzword with no clear meaning. It is utilized for a range of different activities whose only common characteristic is that they use the Internet for something beyond transmitting files. Thus, the term is a nexus of confusion. If you base your thinking on it, your thinking will be vague.

When thinking about or responding to a statement someone else has made using this term, the first step is to clarify the topic. Which kind of activity is the statement really about, and what is a good, clear term for that activity? Once the topic is clear, the discussion can head for a proper conclusion.

Curiously, Larry Ellison, a proprietary software developer, also noted the vacuity of the term cloud computing. He decided to use the term anyway because, as a proprietary software developer, he isn't motivated by the same ideals as we are.

One of the many meanings of cloud computing is storing your data in online services. That exposes you to surveillance.

Another meaning (which overlaps that but is not the same thing) is Software as a Service, which denies you control over your computing.

Another meaning is renting a remote physical server, or virtual server. These can be ok under certain circumstances.
>>
>>60587569
AlphaGo didn't resign, you imbecile.

>boxer won by knock out
>knock out

>gets knocked out
>still wins
This is you.
>>
>>60586522
Well, I actually only had battles in mind, but obviously it applies to the campaign map too. Within a single battle, it's real time, there's (basically) an infinite number of positions a unit can be in, 360 directions a unit can face, terrain differences, morale, stamina, hundreds of different unit types, etc. etc. Humans can use their own initiative for these things. To a computer this is insanely complex. The real-time aspect alone does this.
Thread posts: 307
Thread images: 22

