/philosophy of science/
What is the next big thing in science and why is it AGI?
What the hell is AGI?
And the next big thing in science is the shift from linearity to non-linearity
>>430538
Artificial General Intelligence
It will transcend human cognition, rendering mathematics obsolete.
>>430553
oh, so a fancy word for full AI
I sure hope so
>artificial intelligence
>>>r/sciencefiction
>>>/lesswrong/
>>>/michiokaku.org/
>>430553
Back to LessWrong with you, faggot.
>It will transcend human cognition rendering mathematics obsolete
Did you not learn anything from the fallacy of induction?
I tip my fedora every day at the prospect.
It's a pity that I soil my comic-book themed XL t-shirts while doing so, but I can't seem to stop. And you?
>>430553
Holy shit is this dumbass for real?
>>431659
No shit it's not. It never was and never will be.
It's fucking hilarious to watch philosophers trying to understand Godel and apply him where he doesn't belong, like this schmuck for example
http://users.ox.ac.uk/~jrlucas/Godel/mmg.html
>>431686
>implying AI needs to wallow in the mire of philosophy
>implying AI is not capable of thinking on a higher order of logic subsymbolically
stay pleb lel
>>431693
r e r e a d
e
r e a d
e
a
d
>>431693
dumbfuck
u
m
b
fuck
u
c
k
>muh robot god XDDDDDD
Fuck off scifitards
>>431693
>implying AI needs to wallow in the mire of philosophy
>implying AI is not capable of thinking on a higher order of logic subsymbolically
What hard AI? I can't see any. It's almost like you're pulling sci-fi prophecies out your ass.
>>431686
The entire point of that paper is to argue that it does 'belong'; you can't just dismiss it out of hand without raising some substantive objections.
M E T A P H Y S I C S
E T A P H Y S I C S
T A P H Y S I C S
A P H Y S I C S
P H Y S I C S
H Y S I C S
Y S I C S
S I C S
I C S
C S
S
>>431693
>>implying AI needs to wallow in the mire of philosophy
Wait, so we should just blindly bang away at a computer and just hope for the best without reflecting on our practice?
>>>implying AI is not capable of thinking on a higher order of logic subsymbolically
What does this even mean? If AIs have to be computers (i.e. some sort of Turing machine) then they ARE just sophisticated symbol-manipulators, meaning that they could never do anything 'subsymbolically', unless you mean something completely different by this.
>>430529
>What is the next big thing in science and why is it AGI?
>>430553
>Artificial General Intelligence
>>430529
it is metaphysics built on quantum logic
>>434680
Quantum logic is only really useful for modelling quantum experiments. Metaphysics won't have any use for it unless you fancy denying the metaphysical law of non-contradiction.
>>434602
>what is neural network
>>435026
I know the Theorems, and I'll be the first to admit that I'm sceptical of Godelian arguments against brains being computers, but to simply dismiss the argument out of hand just shows an ignorance of Lucas' argument, which is not obviously invalid or unsound.
>>435184
Not that anon but how is that even relevant to what he said?
>>435552
Neural networks and deep learning are subsymbolic approaches that operate one level beneath symbols. This makes the trained coefficients hard to debug. Really the only way we can tell whether one works is by looking at the input and output layers; the rest is hidden.
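The hidden-layer point can be sketched in a few lines. A toy forward pass, assuming NumPy, with random weights standing in for trained ones: only the input and output vectors are things we interpret, while the hidden activations are just opaque coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network: 3 inputs -> 4 hidden units -> 2 outputs.
# Real weights would come from training; random ones are enough
# to show the structure.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 2))

def forward(x):
    hidden = np.tanh(x @ W1)  # subsymbolic: no single unit "means" anything
    return hidden @ W2        # only this layer gets interpreted by us

x = np.array([1.0, 0.5, -0.3])  # input layer: readable
y = forward(x)                  # output layer: readable
print(y.shape)                  # the 4 hidden activations stay hidden
```

Debugging amounts to probing the input/output behaviour, since the individual coefficients in `W1` and `W2` carry no symbolic meaning on their own.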
>>435585
But then if neural networks are 'subsymbolic' then how could they possibly be described by Turing machines?
>>435620
Symbols in CogSci and GOFAI (good old-fashioned AI) refer to LISP-like language symbols. Turing-machine symbols are just a finite alphabet of characters.
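The distinction can be illustrated with a toy sketch (hypothetical names, tuples standing in for LISP s-expressions): a GOFAI symbol is a structured token you can inspect and pattern-match on, while a Turing machine's symbols are just characters from a finite alphabet with no built-in structure.

```python
# GOFAI-style symbol: a structured, inspectable token (LISP-like,
# sketched here as nested tuples).
rule = ("implies", ("bird", "x"), ("can_fly", "x"))

def head(expr):
    # Pattern-matching on structure is what makes this "symbolic".
    return expr[0] if isinstance(expr, tuple) else expr

# Turing-machine "symbols": a finite alphabet; the tape is just a
# string over that alphabet, with no inherent meaning per character.
alphabet = {"0", "1", "_"}
tape = "0110_"

print(head(rule))                        # a meaningful token: 'implies'
print(all(c in alphabet for c in tape))  # True: just characters
```

On this reading, a network's weights live at the Turing level (encodable as characters) without being GOFAI symbols, which is roughly what "subsymbolic" is getting at.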
>>435585
It's handwaving that doesn't work
>>434660
That's my undergraduate textbook. I'm a scruffy so I disagree with R&N's high-level statistical approach.
>>435678
Tell that to Andrew Ng