I believe I can write a simple program which...

You are currently reading a thread in /sci/ - Science & Math

Thread replies: 80
Thread images: 7
File: 1414605957755.jpg (17 KB, 640x359)
I believe I can write a simple program which exhibits a very basic ability to be conscious and aware of itself, its processes, and the input which it receives. I believe that if basic AI is integrated into my program's algorithm, self-conscious and self-aware AI will emerge.

Am I crazy? I won't even say how I will do that because I'm afraid that it's a completely loony idea.
>>
File: bait-in-eyes.jpg (20 KB, 477x347)
>>7791452
>>
>>7791452
you're a headcase mate
>>
>>7791452
What popsci video did you see this time on YouTube?
>>
>>7791452
hahaya the halting problem and whatnot haha ur dum gl consciousness is non-algorithmic hahaha abaha
>>
>believe
>believe
>Am I crazy?
>afraid that it's a completely loony idea
>>
>>7791452
Show us the YouTube video that got you the idea
>>
File: rule2-global.png (2 KB, 306x102)
>>7791501
underage pls go
>>
>>7791452
I can write it too, watch.
If this is a sentence, it must choose to edit itself on 4chan's servers and post a "Q"; if it doesn't, then it has chosen not to.

The outcome of my post will determine its self-conscious choice.
>>
>>7791494
>>7791519
Why are you assuming that I got this idea from a YouTube video?
>>
do it faggot right now in any coding language and post results to github
>>
Ok, I will post it here, and let you decide.

I reasoned as follows, and made the following assumptions:

1) Naturalism should be assumed. We make an assumption that the real world is as it is. There's nothing but the physical world, and there's no substance dualism - "mind" doesn't exist as something separate from the physical world. It's merely a part of the physical world.
2) By creating algorithms which are isomorphic to the way some processes behave in the real world, these processes can be simulated.
3) Lambda calculus is among the best tools for modeling these algorithms, as it's Turing complete and has great expressive power.
4) Consciousness is nothing more than the ability to reference and process the mechanisms which are related to the scope of awareness (perception of sense data, of one's cognitive processes, etc.)
5) Once the assumption 4) has been made, it is clear that a very basic form of consciousness has the following form in the Lambda notation:
λx.[...](λy.[...]( ... ))
Where [...] are arbitrary operations.
The λx is what references the processes related to awareness (defined by λy). [...] are arbitrary operations in some typed Lambda calculus.
More precisely:
The second [...] are arbitrary operations which transform the input y. The first [...] are operations related to conscious reference of one's own processes, which are defined by λy. This is correct by definition.

This is merely a basic description of consciousness. I believe that the basic schema - its definition - boils down to this Lambda expression. It can be more complex, but the basic schema is as I described.
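For concreteness, here is how the schema in 5) could be sketched in Python, with plain lambdas standing in for the typed calculus. All the names and operations are illustrative placeholders I made up, not a real implementation of consciousness:

```python
# lambda-y: arbitrary processing of the input (trivial feature extraction here)
perceive = lambda y: {"raw": y, "length": len(y)}

# lambda-x: operations that reference the result of the awareness process
reflect = lambda x: {"observed": x, "note": "processed input of length %d" % x["length"]}

# composition corresponding to the schema lambda-x.[...](lambda-y.[...]( ... ))
conscious_step = lambda data: reflect(perceive(data))

print(conscious_step("hello"))
```

The point is only the shape: the outer function operates on, and refers to, the result of the inner one.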
>>
>>7791682
Also, the second lambda (λy) must express the entirety of processes related to awareness/processing of some data.

If λy describes some complex processing of some data in a given typed Lambda calculus, the program is still conscious at any instant when the processing is active, as long as it follows the schema defined by 5.

It's just that it's conscious, but not in the way we're used to understanding the term. It still has the basic properties of being conscious, and can potentially be made to self-modify by being aware of its own definitions, expressions, and processes, and by having modification-related algorithms implemented in the λx lambda expression.
>>
>>7791696
Also, forgot to add:
As long as the entirety of the Lambda schema provided in 5) is executed over a continuous time range, the process is self-aware and conscious over that continuous time range. But, at any given time, only within the scope provided to its algorithm at that time.

Of course, execution in continuous time is impossible, but the same thing applies to any discrete-time execution - even if the discrete time in question has irregular execution intervals.
>>
One final thing before I go to sleep:
From my thesis, it can be inferred that consciousness and intelligence are two different things. True AI can be highly intelligent but not conscious. Similarly, a rather basic algorithm can be conscious by all standards, yet unintelligent.
>>
File: keepupthegoodwork.jpg (21 KB, 400x400)
>>7791729
>>7791714
>>7791696
>>7791682
I only read the first few sentences but it's long and nicely written so I'm gonna agree with you
>>
>>7791682
>4) Consciousness is nothing more but the ability to reference and process the mechanisms which are related to the scope of awareness (perception of sense data, of one's cognitive processes, etc.)
This claim is completely baseless.
>>
>>7791789
It's an assumption based on the physicalist interpretation of consciousness.
I won't cover any other possible philosophies of consciousness, since they aren't based in the real world. That is, if you believe that consciousness is some magic entity which is somehow not physical, I can't help you - you're free to reject my assumptions and my reasoning.

However, if you have a better idea of what consciousness is, and you can define it in terms of the physical world, feel free to use the first three assumptions, and some typed Lambda calculus to define it better.
>>
>>7791822
>It's an assumption based on the physicalist interpretation of consciousness.
It is a physicalist description of *something*, but there's no reason to assume it is sufficient or even necessary for consciousness.
>>
>>7791452
what is your idea?
>>
>>7791682
your claim 4 is highly suspect. Read up on the dualist zombie: basically the idea that you could make a machine that behaves as if conscious without actually having any sort of subjective experience.

http://plato.stanford.edu/entries/zombies/

Nobody really knows if that is possible; it may be that any device that can act conscious must be conscious. Nobody knows.
>>
>>7791875
4) is a definition of consciousness, the same way IQ is a definition of the extent of intelligence. You can't prove nor disprove it.

However, 4) hasn't been put into practice yet, unlike IQ tests. Only when definition 4), along with the reasoning I provided, has been put into practice extensively enough will it be seen whether my reasoning is right. I decided on 4) by making an educated guess, as well as by internally analyzing why I think I'm conscious.

Also, it's impossible to prove that a given set of conditions is sufficient for consciousness. Only by making a priori assumptions and then applying our reasoning shall any advancements in designing conscious machines be made.
>>
>>7791902
have you looked into modern machine learning?

Basically the cutting edge of AI is on the same wavelength as you are, but they have algorithms to create the mathematical functions required, as opposed to sitting down and trying to figure out the proper functions by hand.
>>
>>7791902
I don't think anyone defines IQ as the extent of intelligence, except for /pol/tards and mensatards.
>>
>>7791902
>Only when the definition 4), along with the reasoning I provided will be put to practice extensively enough will it be seen whether my reasoning is right
No, you can never see whether or not your system is actually conscious. You can't measure qualia.
>>
>>7791930
>You can't measure qualia.
You forgot to add "hurr durr" at the end.
>>
>>7791891
Think of it this way:

We're only able to tell whether we're conscious or not by referencing our own mechanisms of thought, awareness, and so on.
Consciousness schema:
λx.[...](λx.[...](λy.[...]( ... )))
A new λx is added, which describes an algorithm for referencing consciousness.

We're only able to be conscious by directly processing our own mechanisms of thought, awareness, and so on:
λx.[...](λy.[...]( ... ))

A so-called philosophical zombie would have no main x-lambda to reference his own processes of awareness, thought, and so on. We can prove whether that zombie has consciousness by studying his brain and checking whether his brain makes reference to the physical processes which lead to awareness and thought. (This is a hypothetical response to a hypothetical situation.)

If we agree on this lambda consciousness schema, we can then prove whether someone or something is conscious by seeing whether the structure of their physical processes is similar to the structure of the consciousness lambda schema.


We have to make an assumption to start getting somewhere, don't we? Or, we could go on forever about how it's impossible to study consciousness. It's possible, but one has to make some assumptions before starting.
>>
File: color blind test.jpg (74 KB, 549x551)
>>7791930
>You can't measure qualia.
>>
>>7791945

I say go for it, but you should look into modern day machine learning.

There are several algorithms they use, such as backpropagation, to train neural networks.

Each network consists of layers of nodes. Each node has a function associated with it. Data is passed from node to node. They then tune these networks with algorithms to get the optimal combination of functions for a given task.
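A minimal sketch of that forward pass, with layers of nodes each applying a function (tanh here) to a weighted sum of its inputs. The weights are random rather than trained; in practice an algorithm like backpropagation tunes them:

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    # each node: weighted sum of inputs, passed through an activation function
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# a toy 2-input -> 3-hidden -> 1-output network with untrained random weights
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.0] * 3
w2 = [[random.uniform(-1, 1) for _ in range(3)]]
b2 = [0.0]

def forward(x):
    # data flows layer to layer; training would adjust w1/w2 to fit a task
    return layer(layer(x, w1, b1), w2, b2)[0]

print(forward([0.5, -0.2]))
```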
>>
>>7791964
unless you can prove dualist zombies are impossible that proves nothing, as a dualist zombie would also be able to tell the red from the green.
>>
>>7791932
Oop sorry. Hurr durr.

>>7791964
That's a measure of objective detection or differentiation, not qualia, hurr durr.
>>
You can't program magic
>>
>>7791967
I'm aware of the use of ANNs in AI.
There won't be any proper advancements in that field (ANNs) unless ANNs are made more similar to the human brain.
ANNs have to have parts, defined by complex combinations of weights over some part of an ANN.
These parts could be connected to each other, and each be responsible for different stuff.

Also, what could be even better is this:
Make lots of ANNs; feed their output into logic gates. Create desired algorithms with logic gates. There you have it - the power of pattern recognition of ANNs, combined with the power of lightning fast logic of computers.
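A toy sketch of that idea, with two stub functions standing in for trained recognizer networks and ordinary boolean logic as the "gate" layer. All the names and scores here are hypothetical placeholders:

```python
def edge_detector_score(image):
    # stand-in for one trained ANN; returns a fuzzy confidence score
    return 0.9 if "edge" in image else 0.1

def face_detector_score(image):
    # stand-in for another ANN trained on a different task
    return 0.8 if "face" in image else 0.2

def is_portrait(image, threshold=0.5):
    # the "logic gate" layer: exact, fast boolean logic over fuzzy ANN outputs
    return edge_detector_score(image) > threshold and face_detector_score(image) > threshold

print(is_portrait("edge face photo"))  # True
print(is_portrait("edge only"))        # False
```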
>>
I simulated evolution in Minecraft using F#. The creatures all had traits that either gave them an increased or a decreased chance of surviving. As time passes, these creatures will eventually achieve self-awareness, as self-awareness is the ultimate path to survival.
>>
>>7791994
samefag;
Instead of treating an ANN as a single whole, which is inefficient, make lots of ANNs which are responsible for each given task. Then, connect their outputs together with logic gates/program code/whatever.

It sounds so obvious but it isn't being done enough.

Also, if consciousness is to be implemented into ANN-based AIs, the program only has to be conscious of the outputs of the ANNs and further processes. It doesn't have to be conscious of its ANNs. After all, humans aren't conscious of their own neural networks, are they?
>>
File: AI.png (262 KB, 697x534)
>>7791452
>>
>>7791994
>>7791999
Well go ahead and do it if it's so great. At the moment you're really only stringing some entry-level ideas together acting like it's profound insight.
>>
Are the dumbest people more conscious than the smartest machines?
>>
>>7791682
Given your #1 assumption there, then yes, obviously, a computer can be exactly as conscious as a human. In fact, that's what your assumption states, directly. You could have just written your #1 and stopped there.

Or you could have reworded the whole thing like this:
> I believe that there is no difference between a conscious mind and a computer simulation, therefore, I believe that it's possible for a computer to simulate a mind.

Get the NSA on the phone and let them know we've solved the AI problem.
>>
>>7792002

jeez that book is fucking expensive
>>
File: 715LP1P-GDL.jpg (223 KB, 647x1000)
>>7792092
>>
>>7792097
that one's a bit watered down
>>
>>7791682
What a big bunch of convoluted blabla
Yes, consciousness can be conceived as a set of functions acting on another set of functions; we get it, you big dumdum.
>>
>>7792160
I'm not him, but you don't need to be mean. I personally think ANNs are the way to go, but at least he proposed some sort of method. (How he will pick the proper functions eludes me.)
>>
>>7792164
I'm not mean!
Also, I think neural networks aren't efficient enough to make a strong AI alone.
They are only decent when it comes to identifying images and sounds, and even then they are incredibly wasteful.
>>
>>7792170


Check this out, I trained a net to guess the next character off of the current character.

Then I gave it a random character, and had it guess the next one, and then passed its own guess as the input. By doing this I could generate text. I used bible verses, but the results were pretty cool. Notice how not only did it learn that characters must be arranged in words, but it even learned that text must be arranged in bible verses which start with number:number.
10:10 And the sons of Jacob.
36:11 And the sons of the wicked with me, And the sons of men, Thou hast acted me, And the wicked the seven good kine are speaking unto the house of my brother, and saith, `I have seen a wife of the bag of the age I call the day.
3:16 And Jehovah saith unto the house of Jehovah, Who hath begotten Lot;
19:10 And he saith, `God hath been consumed for the sake of the land, and the servants of the
heavens been his mother, and saith, `Lo, the sons of men.
21:16 And Jehovah saith unto the sons of the land of Egypt, and he saith, `I have seen it -- a woman whom His wife, and he saith unto Jacob, `Lo, I am declare to him, `Be not a foolish man whom He hath been seen the heavens do all the field of Machpelah, which she hath been found grace in thine eyes, and seeth the sons of men, And the perfect and six.
26:16 And he saith, `I [am] God hath showed us, O Jehovah, And the sons of Heth, and have been cut off, And
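The feed-its-own-guess-back-in loop can be sketched like this, with a simple character-count model standing in for the trained network (the real network is far more capable; this only shows the shape of the sampling loop):

```python
import random

random.seed(42)

def train_char_model(text):
    # count which characters follow each character; a stand-in for training
    counts = {}
    for a, b in zip(text, text[1:]):
        counts.setdefault(a, []).append(b)
    return counts

def generate(model, seed_char, length):
    out = seed_char
    for _ in range(length):
        nxt = random.choice(model.get(out[-1], [' ']))
        out += nxt  # feed the model's own guess back in as the next input
    return out

model = train_char_model("and the sons of men and the sons of the land")
print(generate(model, 'a', 30))
```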
>>
>>7792175
That's what neural network fags actually believe.
>>
>>7792192

If you know how to make a better text generator, then I am all ears.
>>
>>7792319
the internet
>>
>>7792337
What?
>>
>>7792175
Nigga. You just made a bigram.
>>
>>7792631
ooooo I gotcha.

Doesn't a bigram only look at the previous token? LSTM networks are trained to be able to look arbitrarily far back when determining the next token.

A bigram is P(n | n-1), but an LSTM network models P(n | n-1, n-2, n-3, n-4, ...).

LSTM networks are not the only way to implement this, but in practice I have not seen better performing methods.
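The difference can be shown with plain counting over a toy corpus (made up here): a bigram table conditions on one previous token only, while adding even a single extra token of context already distinguishes histories the bigram collapses together.

```python
from collections import Counter, defaultdict

text = "the cat sat on the mat the cat ran"
tokens = text.split()

bigram = defaultdict(Counter)   # counts for P(n | n-1)
trigram = defaultdict(Counter)  # counts for P(n | n-1, n-2): one step toward longer context
for i in range(len(tokens) - 1):
    bigram[tokens[i]][tokens[i + 1]] += 1
for i in range(len(tokens) - 2):
    trigram[(tokens[i], tokens[i + 1])][tokens[i + 2]] += 1

# the bigram only sees "the"; the trigram distinguishes "on the" from "mat the"
print(bigram["the"].most_common())
print(trigram[("on", "the")].most_common())
```

An LSTM goes further by learning how much history matters instead of fixing the context length in advance.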
>>
>>7793478
Ok. The way you described it basically made it sound like you were literally just taking an element of a sequence and guessing the next.
>>
>>7791452
Fucking contribute and write that bastard. If you succeed you're a genius, if you fail, you'll have all the time to fix the faults.
>>
>>7793537
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
>>
>>7793537

Oh, my bad. To activate it I only pass in the last letter; however, the nodes in the network have internal state and "remember" previous inputs. The number of steps they remember for is a parameter that they are trained on.

The cool thing about it is that the distance they look backwards is a function of the input. So the internal logic may be something like: "if the current input is an h, check whether the last input was a t; if so, increase the probability of a vowel coming next." By having hundreds of nodes in a layer, you can look for many of these patterns. And by then having multiple layers, your next layer can check for patterns in the patterns, allowing coherent sentences to be formed.

What I pasted was made with 3 layers. It is consistently spelling stuff correctly, and usually any 3 or 4 words in a row makes sense. Occasionally it can create a long string that is grammatically correct, such as this one:

I am being unto me, and I have been high to the earth, And the field which [is] in the house of the wicked doth judge the man who hath not said to him, `They have been my heart


It sounds all old timey cuz I used an 1800s translation of the bible as the training set.
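The "internal state" idea can be illustrated with a toy recurrent node. This is not a real LSTM cell, just a sketch showing that output depends on remembered inputs, with a made-up retain factor playing the role of a gate:

```python
class MemoryNode:
    def __init__(self):
        self.state = 0.0

    def step(self, x, retain=0.5):
        # blend the new input with the remembered state;
        # 'retain' controls how long past inputs keep influencing the output
        self.state = retain * self.state + (1 - retain) * x
        return self.state

node = MemoryNode()
outputs = [node.step(x) for x in [1.0, 0.0, 0.0]]
print(outputs)  # [0.5, 0.25, 0.125] - the initial 1.0 decays gradually
```

A real LSTM learns its gating values from data instead of using a fixed constant, which is what lets the lookback distance depend on the input.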
>>
>>7791452

OP, do you know how to code? You should set up a github for this. Python is a good language for doing AI work.
>>
What is qualia
>>
>>7791452
Then do it.
>>
>>7794008

"What it is like." For example, that color you see when you look at an apple. With regards to physics, when you describe the light bouncing off the apple, there is no "redness" in the description. A wavelength, sure, but no property of "redness." But you still somehow experience that "redness."

One camp thinks qualia is/will be fully explainable in terms of modern physics, the other camp thinks it is impossible in principle to ever explain qualia in terms of modern physics, and that physicalist explanations only seem to succeed because they assume/sneak in the very elements they attempt to explain.
>>
Watching AI lectures from MIT made the whole field less mystifying. Don't get me wrong, the algorithms and ideas that are presented and used are genius, but the whole thing seems to boil down to computers running sets of rules we give them really fast, with some RNG thrown in for things like genetic algorithms. As for consciousness, the best place to start is with various phil of mind books.
>>
>>7794271
yes, computers are gona compute lol.
>>
>>7794271
I really do think we can get AI in our lifetime, but I'm not sure if we can build one that has a soul (or at least experiences qualia), if those are real. I suspect we may just create a dualist zombie.

http://plato.stanford.edu/entries/zombies/
>>
>>7794323
Is that you, Chalmers?
>>
>>7794326
no, but I think a lot of people think that. I'll check out his stuff though, seems cool.

Basically I got convinced by this argument https://en.wikipedia.org/wiki/Knowledge_argument and never went back, although I do think a lot of dualism vs physicalism is just semantics.
>>
If you want a conscious machine you'll need a way to reliably emulate all parts of the brain in a virtual environment, provided it's even possible to write something that mimics a biological organism that way.

Nowadays we struggle to emulate PS2 games, so derive your conclusions from that.
>>
>>7794419
>If you want a flying machine you'll need a way to reliably emulate all parts of a bird

Wow, you people are Sarah Palin levels of stupid.
>>
>>7794419

As it stands there is no way to know if that will really work. It may be that a simulated brain is still a dualist zombie. It may also be that an AI made using a different technique really will be conscious. We just don't have any way to know what causes qualia to exist. It could be that everything has qualia, who knows. An AI just needs intelligence though, not consciousness or qualia.

Unless physicalism is true, in which case if it acts like a conscious being it probably is a conscious being.
>>
>>7794428

I think he was saying that would be a good way to make a machine that experiences qualia, not that it is the only way to make an intelligent machine. Those aren't quite the same thing.
>>
>>7794436
>If you want a conscious machine you'll need
>need
>NEED

As in, not optional. Not one out of many ways of doing it. This is the one, singular way of making a conscious machine, as confidently proclaimed by a know-nothing on the internet who doesn't have the foggiest idea what the requirements of consciousness actually are.
>>
>>7794444
I wish we could both (or either one of us, really) be alive when scientists of the future show that I'm 100% correct here.
>>
>>7794444

Oh, I guess he did say need. I think his idea would be a good way to try to make a machine with qualia. Unfortunately there still is no way to know if you succeed or not. We can't even know if other people have qualia.

>>7794449
maybe all three of us will be.
>>
>>7794449
Why don't you just show that you're correct now, using evidence and reason?

(it's because you're making all this shit up)
>>
>>7791682
Word salad: the post

Looks like someone watched a few too many youtube videos
>>
>>7794454
I don't know if simulating the brain in a virtual environment is the ONLY way to obtain consciousness, I just think it's your best bet since a brain with electrical signals coursing through it is currently the only thing in nature that we can for sure say experiences being conscious.

This whole debate is stupid since we know way too little about how the brain works in the first place.

It's too early for these sorts of discussions.
>>
>>7794469

>It's too early for these sorts of discussions.

Agreed, these are fun but honestly can wait till after we have something that can consistently pass the Turing test.
>>
>>7792092
Just fucking google and you'll find the pdf, retard.
>>
>>7794486
thanks!
>>
>>7794469
>I don't know if simulating the brain in a virtual environment is the ONLY way to obtain consciousness
I can really see a future strong AI start off as a set of shell functions and utilities in a GNU/Linux operating system.
>>
>>7792002
>>7794486
> 26.5 SUMMARY
>We have presented some of the main philosophical issues in AI. These were divided into questions concerning its technical feasibility (weak AI), and questions concerning its relevance and explanatory power with respect to the mind (strong AI). We concluded, although by no means conclusively, that the arguments against weak AI are needlessly pessimistic and have often mischaracterized the content of AI theories. Arguments against strong AI are inconclusive; although they fail to prove its impossibility, it is equally difficult to prove its correctness. Fortunately, few mainstream AI researchers, if any, believe that anything significant hinges on the outcome of the debate given the field's present stage of development. Even Searle himself recommends that his arguments not stand in the way of continued research on AI as traditionally conceived.

Wow, thanks for the shitposting, what a waste of time.
Thread DB ID: 435957


