I believe I can write a simple program which exhibits a very basic ability to be conscious and aware of itself, its processes, and the input which it receives. I believe that if basic AI is integrated into my program's algorithm, self-conscious and self-aware AI will emerge.
Am I crazy? I won't even say how I will do that because I'm afraid that it's a completely loony idea.
I can write it too, watch.
If this is sentient, it must choose to edit itself on 4chan's servers and post a "Q"; if it doesn't, then it has chosen not to.
The outcome of my post will determine its self-conscious choice.
Ok, I will post it here, and let you decide.
I reasoned as follows, and made the following assumptions:
1) Naturalism should be assumed. We make an assumption that the real world is as it is. There's nothing but the physical world, and there's no substance dualism - "mind" doesn't exist as something separate from the physical world. It's merely a part of the physical world.
2) By creating algorithms which are isomorphic to the way some processes behave in the real world, these processes can be simulated.
3) Lambda calculus is among the best tools for modeling these algorithms, as it's Turing complete and has great expressive power.
4) Consciousness is nothing more than the ability to reference and process the mechanisms which are related to the scope of awareness (perception of sense data, of one's cognitive processes, etc.)
5) Once the assumption 4) has been made, it is clear that a very basic form of consciousness has the following form in the Lambda notation:
λx.[...](λy.[...]( ... ))
Where [...] are arbitrary operations.
The λx is what references the processes related to awareness (defined by λy). [...] are arbitrary operations in some typed Lambda calculus.
The second [...] are arbitrary operations which transform the input y. The first [...] are operations related to conscious reference of one's own processes, which are defined by λy. This is correct by definition.
This is merely a basic description of consciousness. I believe that the basic schema - its definition - boils down to this Lambda expression. It can be more complex, but the basic schema is as I described.
Also, the second lambda (λy) must express the entirety of processes related to awareness/processing of some data.
If λy describes some complex processing of some data in a given typed Lambda calculus, the program is still conscious at any instant when the processing is active, as long as it follows the schema defined by 5.
It's just that it's conscious, though not in the way we're used to understanding the term. It still has the basic properties of being conscious, and it can potentially be made to self-modify by being aware of its own definitions, expressions, and processes, and by having modification-related algorithms implemented in the λx lambda expression.
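For concreteness, here is a toy reading of the schema in 5) as runnable Python rather than raw Lambda notation. The names `awareness` and `reflective_layer` are my own illustrative labels, not part of the schema: the inner function plays the role of λy (processing input), and the outer one plays the role of λx (holding a reference to that process so it can report on, or modify, it).

```python
# Toy reading of λx.[...](λy.[...](...)): "awareness" plays λy,
# "reflective_layer" plays λx. Illustrative names only.

def awareness(y):
    # the inner lambda: some arbitrary processing of input data
    return y * 2

def reflective_layer(process):
    # the outer lambda: it receives the awareness process itself,
    # so it can both apply it and reference it (report on it,
    # or swap in a modified version)
    def step(data):
        result = process(data)
        report = {"process": process.__name__,
                  "input": data,
                  "output": result}
        return result, report
    return step

conscious_step = reflective_layer(awareness)
value, introspection = conscious_step(21)
print(value)          # 42
print(introspection)  # records which process ran, on what, with what result
```

The point is only that the outer layer operates on the process itself, not just on the data passing through it.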
Also, forgot to add:
As long as the entire Lambda schema provided in 5) is executed over a continuous time range, the process is self-aware and conscious over that continuous time range - but, at any given moment, only within the scope provided to its algorithm at that moment.
Of course, execution in continuous time is impossible, but the same applies to any discrete-time execution - even one with irregular execution intervals.
One final thing before I go to sleep:
From my thesis, it can be inferred that consciousness and intelligence are two different things. True AI can be highly intelligent but not conscious. Similarly, a rather basic algorithm can be conscious by all standards, yet unintelligent.
>4) Consciousness is nothing more than the ability to reference and process the mechanisms which are related to the scope of awareness (perception of sense data, of one's cognitive processes, etc.)
This claim is completely baseless.
It's an assumption based on the physicalist interpretation of consciousness.
I won't cover any other possible philosophies of consciousness since they aren't based in the real world. That is, if you believe that consciousness is some magic entity which is somehow not physical, I can't help - you're free to reject my assumptions and my reasoning.
However, if you have a better idea of what consciousness is, and you can define it in terms of the physical world, feel free to use the first three assumptions, and some typed Lambda calculus to define it better.
>It's an assumption based on the physicalist interpretation of consciousness.
It is a physicalist description of *something*, but there's no reason to assume it is sufficient or even necessary for consciousness.
Your claim 4 is highly suspect. Read up on the dualist zombie: basically the idea that you could make a machine that behaves as if it were conscious without actually having any sort of subjective experience.
Nobody really knows if that is possible; it may be that any device that can act conscious must be conscious. Nobody knows.
4) is a definition of consciousness, the same way IQ is a definition of the extent of intelligence. You can neither prove nor disprove it.
However, 4) hasn't been put into practice yet, unlike IQ tests. Only when definition 4), along with the reasoning I provided, has been put into practice extensively enough will it be seen whether my reasoning is right. I decided on 4) by making an educated guess, as well as by internally analyzing why I think I'm conscious.
Also, it's impossible to prove that a given set of conditions is sufficient for consciousness. Only by making a priori assumptions and then applying our reasoning will any advancements in designing conscious machines be made.
have you looked into modern machine learning?
Basically the cutting edge of AI is on the same wavelength as you are, but they have algorithms to create the mathematical functions required, as opposed to sitting down and trying to figure out the proper functions by hand.
>Only when definition 4), along with the reasoning I provided, has been put into practice extensively enough will it be seen whether my reasoning is right
No, you can never see whether or not your system is actually conscious. You can't measure qualia.
Think of it this way:
We're only able to tell whether we're conscious or not by referencing our own mechanisms of thought, awareness, and so on.
λx'.[...](λx.[...](λy.[...]( ... )))
A new outer lambda (λx') is added, which describes an algorithm for referencing consciousness itself.
We're only able to be conscious by directly processing our own mechanisms of thought, awareness, and so on:
λx.[...](λy.[...]( ... ))
A so-called philosophical zombie would have no main x-lambda to reference its own processes of awareness, thought, and so on. We could test whether such a zombie has consciousness by studying its brain and checking whether the physical processes which lead to awareness and thought are ever referenced within that brain. (This is a hypothetical response to a hypothetical situation.)
If we agree on this lambda consciousness scheme, we can then determine whether someone or something is conscious by checking whether the structure of their physical processes matches the structure of the consciousness lambda scheme.
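If the claim is that consciousness can be checked structurally, one way to sketch that is to encode lambda terms as plain data and test whether a term has the λx.[...](λy.[...]( ... )) shape: an outer abstraction whose body applies something to an inner abstraction. The tuple encoding below is my own, purely illustrative.

```python
# Lambda terms encoded as tuples: ("lam", var, body), ("app", fn, arg),
# ("var", name). The check: an outer abstraction whose body applies
# something to an inner abstraction - the λx.[...](λy.[...](...)) shape.

def is_schema(term):
    if term[0] != "lam":
        return False
    body = term[2]
    return body[0] == "app" and body[2][0] == "lam"

# λx. x (λy. y) - has the schema's shape
conscious_shape = ("lam", "x",
                   ("app", ("var", "x"),
                           ("lam", "y", ("var", "y"))))
# λx. x - no inner process is referenced at all
zombie_shape = ("lam", "x", ("var", "x"))

print(is_schema(conscious_shape))  # True
print(is_schema(zombie_shape))     # False
```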
We have to make an assumption to start getting somewhere, don't we? Or, we could go on forever about how it's impossible to study consciousness. It's possible, but one has to make some assumptions before starting.
I say go for it, but you should look into modern day machine learning.
There are several algorithms they use, such as backpropagation, to train neural networks.
Each network consists of layers of nodes. Each node has a function associated with it. Data is passed from node to node. They then tune these networks with algorithms to get the optimal combination of functions for a given task.
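As a minimal sketch of that tune-the-functions idea: a single sigmoid node trained by gradient descent to compute OR, using nothing beyond the standard library. Full backpropagation chains this same update rule backwards through multiple layers; this is just the one-node case.

```python
# One sigmoid "node" tuned by gradient descent to compute OR.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]  # weights, deliberately started at zero
b = 0.0         # bias
lr = 0.5        # learning rate

for _ in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = out - target  # gradient of the loss w.r.t. the weighted sum
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)  # [0, 1, 1, 1] - the node now computes OR
```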
I'm aware of the use of ANNs in AI.
There won't be any proper advancements in that field (ANNs) unless ANNs are made more similar to the human brain.
ANNs have to have parts, defined by complex combinations of weights over some region of an ANN.
These parts could be connected to each other, and each be responsible for different stuff.
Also, what could be even better is this:
Make lots of ANNs; feed their output into logic gates. Create desired algorithms with logic gates. There you have it - the power of pattern recognition of ANNs, combined with the power of lightning fast logic of computers.
I simulated evolution in Minecraft using F#. The creatures all had traits that either gave them an increased chance of surviving or a decreased chance of surviving. As time passes these creatures will eventually achieve self-awareness as self-awareness is the ultimate path to survival.
Instead of treating an ANN as a single whole, which is inefficient, make lots of ANNs, each responsible for a given task. Then, connect their outputs together with logic gates/program code/whatever.
It sounds so obvious but it isn't being done enough.
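A toy sketch of what that wiring could look like. The two "detectors" below are hard-coded threshold functions standing in for trained ANNs (purely illustrative); the point is only that their boolean outputs can then be combined with ordinary program logic.

```python
# Two stand-ins for small trained ANNs (hard-coded thresholds here),
# with their outputs combined by ordinary boolean logic - the
# "logic gate" layer.

def edge_detector(pixels):
    # pretend-ANN: flags a sharp brightness change somewhere
    return max(pixels) - min(pixels) > 0.5

def brightness_detector(pixels):
    # pretend-ANN: flags an overall bright region
    return sum(pixels) / len(pixels) > 0.5

def alarm(pixels):
    # the gate layer: plain AND over the two network outputs
    return edge_detector(pixels) and brightness_detector(pixels)

print(alarm([0.9, 0.9, 0.1, 0.9]))  # True: bright overall AND has an edge
print(alarm([0.9, 0.9, 0.9, 0.9]))  # False: bright but uniform
```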
Also, if consciousness is to be implemented in ANN-based AIs, the program only has to be conscious of the outputs of the ANNs and the further processes. It doesn't have to be conscious of its ANNs themselves. After all, humans aren't conscious of their own neural networks, are they?
Given your #1 assumption there, then yes, obviously, a computer can be exactly as conscious as a human. In fact, that's what your assumption states, directly. You could have just written your #1 and stopped there.
Or you could have reworded the whole thing like this:
> I believe that there is no difference between a conscious mind and a computer simulation, therefore, I believe that it's possible for a computer to simulate a mind.
Get the NSA on the phone and let them know we've solved the AI problem.
I'm not mean!
Also I think neural networks aren't efficient enough to make a strong AI alone.
They're only decent when it comes to identifying images and sounds, and even then they're incredibly wasteful.
Check this out: I trained a net to guess the next character from the current character.
Then I gave it a random character, had it guess the next one, and then passed its own guess back in as the input. By doing this I could generate text. I used bible verses, and the results were pretty cool. Notice how not only did it learn that characters must be arranged into words, it even learned that text must be arranged into bible verses which start with number:number.
10:10 And the sons of Jacob.
36:11 And the sons of the wicked with me, And the sons of men, Thou hast acted me, And the wicked the seven good kine are speaking unto the house of my brother, and saith, `I have seen a wife of the bag of the age I call the day.
3:16 And Jehovah saith unto the house of Jehovah, Who hath begotten Lot;
19:10 And he saith, `God hath been consumed for the sake of the land, and the servants of the
heavens been his mother, and saith, `Lo, the sons of men.
21:16 And Jehovah saith unto the sons of the land of Egypt, and he saith, `I have seen it -- a woman whom His wife, and he saith unto Jacob, `Lo, I am declare to him, `Be not a foolish man whom He hath been seen the heavens do all the field of Machpelah, which she hath been found grace in thine eyes, and seeth the sons of men, And the perfect and six.
26:16 And he saith, `I [am] God hath showed us, O Jehovah, And the sons of Heth, and have been cut off, And
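The guess-then-feed-it-back loop described above can be sketched with a much simpler model than an LSTM - a character bigram table built from a tiny stand-in text (not the actual Bible training set):

```python
# Same sampling loop as described above, but with a character bigram
# table instead of an LSTM. The training text is a tiny stand-in.
import random
from collections import defaultdict

text = "and the sons of the land and the sons of men and the land of egypt "

follows = defaultdict(list)          # which characters follow which
for a, b in zip(text, text[1:]):
    follows[a].append(b)

random.seed(0)
out = ["a"]
for _ in range(60):
    guess = random.choice(follows[out[-1]])  # guess the next character...
    out.append(guess)                        # ...then feed the guess back in
print("".join(out))
```

A bigram model like this produces word-salad at best; the LSTM's longer memory is what makes the verse structure in the samples above possible.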
ooooo I gotcha.
Doesn't a bigram only look at the previous token? LSTM networks are trained to be able to look arbitrarily far back when determining the next token.
A bigram is P(n | n-1), but an LSTM network models P(n | n-1, n-2, n-3, n-4, ...).
LSTM networks are not the only way to implement this, but in practice I have not seen better performing methods.
Oh, my bad - to activate it I only pass in the last letter; however, the nodes in the network have internal state and "remember" previous inputs. The number of steps they remember for is a parameter they are trained with.
The cool thing about it is that the distance they look backwards is a function of the input. The internal logic may be something like: "if the current input is an h, check whether the last input was a t; if so, increase the probability of a vowel coming next." By having hundreds of nodes in a layer, you can look for many of these patterns, and by then having multiple layers, the next layer can check for patterns in the patterns, allowing coherent sentences to be formed.
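That if-the-current-input-is-an-h intuition can be illustrated with plain counts, no network required: compare the distribution of characters following "h" in general with the distribution following the specific context "th". The toy text is my own stand-in.

```python
# Counting what follows "h" alone vs. what follows the context "th",
# in a toy text - longer context, sharper prediction.
from collections import Counter

text = "the cat shut the hot hatch then that chat thudded"

after_h = Counter(b for a, b in zip(text, text[1:]) if a == "h")
after_th = Counter(text[i + 2] for i in range(len(text) - 2)
                   if text[i:i + 2] == "th")

print(after_h)   # everything that ever follows "h"
print(after_th)  # mostly "e" - far more predictable after "th"
```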
What I pasted was made with 3 layers. It consistently spells things correctly, and usually any 3 or 4 words in a row make sense. Occasionally it creates a long string that is grammatically correct, such as this one:
I am being unto me, and I have been high to the earth, And the field which [is] in the house of the wicked doth judge the man who hath not said to him, `They have been my heart
It sounds all old timey cuz I used a 1800s translation of the bible as the training set.
"What it is like." For example, that color you see when you look at an apple. With regards to physics, when you describe the light bouncing off the apple, there is no "redness" in the description. A wavelength, sure, but no property of "redness." But you still somehow experience that "redness."
One camp thinks qualia is/will be fully explainable in terms of modern physics, the other camp thinks it is impossible in principle to ever explain qualia in terms of modern physics, and that physicalist explanations only seem to succeed because they assume/sneak in the very elements they attempt to explain.
Watching AI lectures from MIT made the whole field less mystifying. Don't get me wrong, the algorithms and ideas that are presented and used are genius, but the whole thing seems to just boil down to computers running sets of rules we give them really fast, with some RNG thrown in for things like genetic algorithms. As for consciousness, the best place to start is with various phil of mind books.
I really do think we can get AI in our lifetime, but I'm not sure if we can build one that has a soul (or at least experiences qualia), if those are real. I suspect we may just create a dualist zombie.
no, but I think a lot of people think that. I'll check out his stuff though, seems cool.
Basically I got convinced by this argument https://en.wikipedia.org/wiki/Knowledge_argument and never went back, although I do think a lot of dualism vs physicalism is just semantics.
If you want a conscious machine you'll need a way to reliably emulate all parts of the brain in a virtual environment, provided it's even possible to write something that mimics a biological organism that way.
Nowadays we struggle to emulate PS2 games, so derive your conclusions from that.
As it stands there is no way to know if that will really work. It may be that a simulated brain is still a dualist zombie. It may also be that an AI made using a different technique really will be conscious. We just don't have any way to know what causes qualia to exist. It could be that everything has qualia, who knows. An AI just needs intelligence though, not consciousness or qualia.
Unless physicalism is true, in which case if it acts like a conscious being it probably is a conscious being.
I think he was saying that would be a good way to make a machine that experiences qualia, not that it is the only way to make an intelligent machine. Those aren't quite the same thing.
>If you want a conscious machine you'll need
As in, not optional. Not one out of many ways of doing it. This is the one, singular way of making a conscious machine, as confidently proclaimed by a know-nothing on the internet who doesn't have the foggiest idea what the requirements of consciousness actually are.
I don't know if simulating the brain in a virtual environment is the ONLY way to obtain consciousness, I just think it's your best bet since a brain with electrical signals coursing through it is currently the only thing in nature that we can for sure say experiences being conscious.
This whole debate is stupid since we know way too little about how the brain works in the first place.
It's too early for these sorts of discussions.
>I don't know if simulating the brain in a virtual environment is the ONLY way to obtain consciousness
I can really see a future strong AI start off as a set of shell functions and utilities in a GNU/Linux operating system.
> 26.5 SUMMARY
>We have presented some of the main philosophical issues in AI. These were divided into questions concerning its technical feasibility (weak AI), and questions concerning its relevance and explanatory power with respect to the mind (strong AI). We concluded, although by no means conclusively, that the arguments against weak AI are needlessly pessimistic and have often mischaracterized the content of AI theories. Arguments against strong AI are inconclusive; although they fail to prove its impossibility, it is equally difficult to prove its correctness. Fortunately, few mainstream AI researchers, if any, believe that anything significant hinges on the outcome of the debate given the field's present stage of development. Even Searle himself recommends that his arguments not stand in the way of continued research on AI as traditionally conceived.
Wow, thanks for the shitposting, what a waste of time.