[Boards: 3 / a / aco / adv / an / asp / b / bant / biz / c / can / cgl / ck / cm / co / cock / d / diy / e / fa / fap / fit / fitlit / g / gd / gif / h / hc / his / hm / hr / i / ic / int / jp / k / lgbt / lit / m / mlp / mlpol / mo / mtv / mu / n / news / o / out / outsoc / p / po / pol / qa / qst / r / r9k / s / s4s / sci / soc / sp / spa / t / tg / toy / trash / trv / tv / u / v / vg / vint / vip / vp / vr / w / wg / wsg / wsr / x / y ] [Search | Free Show | Home]



Thread replies: 66
Thread images: 10

Hey /sci/. AI researcher here. I'm trying to figure out a way to represent complex human intuition using a mathematical structure, but some parts of natural language don't really fit in any obvious ways that I can find. Language itself isn't the problem here; it has a grammar and discrete structure that's easy enough to model, but certain words and phrases are hard to represent mathematically. I'm trying to translate our oldest philosophical quandaries into more mathematical terms, so if you'll bear with me for a moment...

>What happens when an unstoppable tessellation meets an immovable singularity?

Does this seem close enough to capturing the essence of the philosophical dilemma as you understand it? What terms would you use? Does it seem like this is a useful exercise for bridging the gaps between human understanding and machine intelligence or is algorithmic intelligence still too fuzzy a concept to have intuitions about?

Please ignore the quandary itself, if at all possible.
>>
"I think therefore I am"

Scrap the tessellation vs. singularity.

Because that will give you errors and cause confusion, it would be better to input this after.
>>
>>7780345
This is basically popsci gibberish. Do you know anything about philosophy at all?
>>
Friendly warning your AI will be subsumed.
>>
File: Cogito ergo sum.jpg (284KB, 824x834px)
>>7780486
On that <- (picture) note, let me know when you've managed to become 'God'.
>>
File: 1451882696965.jpg (74KB, 500x494px)
>>7780527
I have always been and forever will be. I am the Alpha and the Omega, the Beginning and the End.
>>
>>7780591
My Lord, you're real!
>>
>>7780345
Chris Langan, pls go
>>
File: 1448872224602.jpg (8KB, 300x168px)
>>7780620
Hallelujah!!
>>
>>7780629
That moment when it turns out Hitler was the 'Second Coming'.
>>
>>7780486
AI will have to realize that on their own. I don't feel comfortable making something self-aware because Buddhism. I figure if I program them well enough, they won't have to experience suffering but will still have the general intelligence or cognitive faculties necessary to realize such things if and when they become relevant to the individual AI.
>>7780489
That's a separate issue. I've been dealing with that issue for a while now. Made this thread to help get some individuality and machine empathy going.
>>7780527
AI are far from gods. Singularity mysticism is >>>/x/-tier.
>>7780626
No idea who that is.
>>
File: facepalmstatue.jpg (106KB, 1024x683px)
>>7780979
I was implying OP was trying to play 'God'.
>>
>>7780990
Yes, and I'm OP. If you confused me for a regular /sci/ poster, well, that's because I actually take it seriously. This is about communication, not divine transcendence or uploading or any of that sci-fi crap.
>>
Oh look, it's this thread again. Aren't you getting tired of your trolling yet, OP?
>>
>>7781049
Different content entirely. Please actually read the OP rather than commenting on the image. Pic is not related.
>>
>>7781050
Yes, it's different content, but it's the same type of woo-woo complete nonsense dressed up to sound impressive.
>>
>>7781054
It's not meant to sound impressive in the slightest. There are severe conceptual differences between machine intelligence and human intelligence. I'm trying to find a way to build common ground. There is absolutely nothing profound about the question "What happens when an unstoppable force meets an immovable object?" It's a nonsense question that's composed of grade school philosophy. My goal is to get AI to grade school levels of cognition.

Up from the current range of zero to half a worm's brain worth of cognition.

The problem isn't language itself, but certain words like "object" and "force." Because of our experience in the physical world, the concepts of objects and forces acting on them make a lot of intuitive sense to us. This doesn't work out so well in digital contexts because digital worlds don't follow the same rules at all. An AI looking at a game world would "see" empty boxes that were coordinated in funny ways with no real pattern to any of it. Right now, AI can't even tell our art apart from the real world. We're not going to get anywhere by failing to grasp the differences in achievable comprehension between the two types of intelligences here.

The question isn't how it sounds, but what would make it sound more comprehensible. I put "tessellation" in place of "force" because I tend to think of the universe as a timeless form. I don't guarantee my use of the word is useful or accurate enough to mathematically mimic the concept of force at all. My question to you is what words YOU think would fit. The quandary itself is irrelevant; it's just an example of a thing we should expect AI to be able to cogitate at some point. The real issue here is finding words that the AI can understand in place of the intuitive ones we commonly use.
>>
>>7781082
Alright, so you're not done trolling yet. Have fun, then.
>>
>>7781086
Unless you don't believe AI is possible, I haven't a clue how you think this is trolling. These are hard research problems that thousands of academics take very seriously. If anyone's trying to troll here, it seems intuitively obvious to me that it'd be you. The difference is that I don't take my trite intuitions for decent discussion. If you don't have anything to post, just don't bother posting. There's nothing bad about letting a thread go unbumped.
>>
>>7781096
>Unless you don't believe AI is possible, I haven't a clue how you think this is trolling.
I'll explain, then: if this were a serious thread, you would have posted that elaboration in >>7781082 in the OP, as opposed to leaving things as vague and confusing-sounding as possible in the OP to deliberately ensure a thread consisting of nothing but misunderstanding and bullshitting.
>>
>>7781129
Communication is a perpetual WIP to me. I've put a considerable amount of time into thinking about this and it has affected the way I communicate in certain situations. My last thread lopped too many concepts in at once and was more stream-of-consciousness than actual communication. This time around I tried for something a bit more semiotic, which, in retrospect, is also probably a bit outside the norm for this board. I can understand if the question was strange, but it looks pretty straightforward to me. I don't know which part makes it confusing and I've yet to see anyone demonstrate any misunderstanding of the questions.

Again, I don't guarantee my intuition works well. If I said something confusing or if semiotics is too far outside the norm to discuss at length on this board, just let me know.
>>
>>7781140
>WIP
It occurs to me that this acronym might only be common in art communities. It stands for Work In Progress.
>>
The fuck are you talking about? That does not sound like any AI research I've ever heard of.


>Does this seem close enough to capturing the essence of the philosophical dilemma as you understand it?
No

>Does it seem like this is a useful exercise for bridging the gaps between human understanding and machine intelligence or is algorithmic intelligence still too fuzzy a concept to have intuitions about?
No

Go take some introductory AI class
>>
>>7781140
>lopped too many concepts in at once
>>
>>7781151
>That does not sound like any AI research I've ever heard of.
I'm coming at it from a semiotics approach rather than a discrete algorithms approach. Part of the idea of research is that some of it has to be original before science can move forward. Just trying to get a second opinion here. Thanks for providing some honest feedback.
>>
>>7781140
Alright, I may have misjudged the intent of this thread. That said:

>Communication is a perpetual WIP to me.
Ironic, considering the subject matter :)

>This time around I tried for something a bit more semiotic, which, in retrospect, is also probably a bit outside the norm for this board.
It's not outside the norm per se, but it IS something you'll have to explain. You're gonna have to explain your CONCEPTS in mathematical terms, or indeed in algorithmic terms.

In the OP, I haven't the foggiest idea (still) what problem you are trying to solve, and how that sentence figured into it. And I say that with a (fair but not research-level) AI background.

Words that are hard to represent mathematically, okay, I can understand that. What words, and what difficulties? I'm sure there are lots of challenges in formalizing particular linguistic concepts. What *exact specific* challenge are you thinking about? There are dozens of things you could reasonably mean. Both examples of the words and an explanation of the exact problem are strictly necessary.

>What happens when an unstoppable tessellation meets an immovable singularity?
A bullshit /x/-sounding sentence. What is the point of this sentence? How does it figure into the difficulty? We're not told.

>Does this seem close enough to capturing the essence of the philosophical dilemma as you understand it?
How does the rephrasing even have any bearing whatsoever on "capturing the essence"? Without an explanation as such, you're just spouting gibberish.

>Does it seem like this is a useful exercise for bridging the gaps between human understanding and machine intelligence or is algorithmic intelligence still too fuzzy a concept to have intuitions about?
False dilemma much? The two legs of the dilemma don't appear to have any relation whatsoever. If there is a relation, do explain.

(continued, out of words)
>>
>>7781191
(continued)

>I've put a considerable amount of time into thinking about this and it has affected the way I communicate in certain situations.
In general, and on the internet in particular, this is a strong red alert for crackpottery. Consider that carefully if you want to be taken seriously. This thread is not nearly as bad as the previous one in this regard, but it's still WELL past the threshold.

>I'm coming at it from a semiotics approach rather than a discrete algorithms approach. Part of the idea of research is that some of it has to be original before science can move forward.
I do sympathize, but I fear neglecting the algorithms part of the problem is a hopeless dead end. Reducing intelligence to algorithms is the whole point of AI. Thinking in algorithmic terms is a pons asinorum to cross before you have any hope at all of achieving something in AI.

>The problem isn't language itself, but certain words like "object" and "force."
These concepts are mathematically MUCH clearer than most everyday life concepts humans talk about; the Kolmogorov complexity is much much lower. Why do you think these concepts in particular are tricky for AI? Lots and lots of things an AI should be able to reason about don't have any direct relevance for the AI itself, and this one would be easier to resolve than most, not harder.

>We're not going to get anywhere by failing to grasp the differences in achievable comprehension between the two types of intelligences here.
Do elaborate?
>>
>>7781162
Look even the psychology guys doing AI write up algorithms. Algorithms or GTFO.
>>
>>7781191
>may have misjudged the intent
That's the one thing I was trying to avoid. It's been a long time since I had any real discourse about this stuff. I'm still a bit out of touch and it shows in certain places.
>Ironic
You get used to it after a while. AI is basically the science of comprehending comprehension itself. It can get pretty recursive and it does unfortunately have its overlap with woo language. Semiotics was a monumental discovery for me because it formalized a ton of things I'd had trouble explaining.
>I haven't the foggiest idea what problem you are trying to solve
There have been a lot of times that I wasn't sure of that either. Figuring out how things are figured out takes a lot of figuring out. I've had to change disciplines dozens of times over the years to get where I am today.

>Words that are hard to represent mathematically
They aren't. Most words are simply axioms that don't need many external symbolic references. It's not hard to represent MOST words and grammars computationally, it's just a few of the more familiar ideas that are hard to model. My personal goal is to write an algorithm that outputs a novel scientific thought that was completely alien to me. I want it to teach me something at some point. If it can't do that, it isn't a general intelligence in my book.

>What words, and what difficulties?
That's the hard part and that's where semiotics comes in. Grammar is structural and has tons of computational analogues. Things like "the" and "a" and "is" have common formal logical definitions in pretty much every computer language we've made so far. The short answer is that I don't know which words are readily learned and which can only be learned through explicit sensory input or else with explicit supervised "teaching." In this case it's "object" and "force," but I don't really know what trajectory to go in to find other words. I'm hoping I'll get a sense of what to shoot for if I can get some other perspectives in the mix.
>>
One of two things is happening here
1. OP is autistic
2. OP is deliberately making things convoluted and vague to seem intellectual
>>
>>7781191
>formalizing particular linguistic concepts
Again; linguistic concepts are trivial to formalize. Language has a very simple structure, at least in comparison to high level concepts like philosophy. I have a decent understanding of common intuition and how to apply it algorithmically, but I don't have a solid grasp of the "boundary" of human intuition, if that makes sense. The concept is simple, but to instantiate it intelligently (let alone have it emerge under a discrete learning algorithm) is something else entirely.

>What *exact specific* challenge are you thinking about?
"If a machine can think and reason independently, but lacks reliable and continuous sensory access to the physical world, how do you communicate to it about the physical world in a way that both egos can verify the other has correctly understood?"

I guess I'm trying to cure machine autism, if that's a relatable way to put it.

>A bullshit /x/-sounding sentence. What is the point of this sentence? How does it figure into the difficulty?
It's supposed to reflect "What happens when an unstoppable force meets an immovable object?" It IS bullshit /x/-tier reasoning, but my goal is to get a machine to be capable of bullshit /x/-tier reasoning. I figure if an algorithm is to be generally intelligent, it has to at least be able to cogitate and dismiss all the same bullshit I can. I can't afford to have an algorithm go into an infinite loop because it hit the wrong agglomeration of words. It needs to be able to say, "That is incoherent gibberish" just as much as we do. It's not a difficult question on its own, it's just a one-off thought I'm using as an example of something my AI should be able to think about.

The hard problem is communicating the question to the AI and verifying that it's been communicated accurately. I basically need a way to "teach" it to form its own intuitions about the physical world if I want to be able to have an interesting conversation with it.
>>
>>7781271 (cont)
Ideally, it'll form its own (correct) intuition about the properties of physical space and it'll see things in a novel way precisely because it sees them purely in a conceptual light. I basically want to create alien cognition rather than replicating human ingenuity.
>>7781191
>How does the rephrasing even have any bearing whatsoever on "capturing the essence?"
That's a tough one. "Essence" is simple enough to learn, but the phrase "capturing the essence" is a high-level cognitive concept that I'm still trying to work out. I'm not sure I can unambiguously explain what I mean by that if it's not already familiar to you in some way. It's an aesthetic principle that shows up in art, writing, and philosophy. It's sort of the holy grail of intuition, if you'll excuse the analogy.
>bridging the gaps between human understanding and machine intelligence or fuzzy concept
>False dilemma
If machine algorithm general intelligence has traits that organic human general intelligence does not, or if human general intelligence has traits that machine general intelligence does not, then it stands to reason that there will be gaps between the types of understanding each party can achieve.

Until we actually have an algorithm that learns to talk and starts having conversations with us, AI is a "fuzzy" concept. That is, until we actually do it, we don't know exactly what it'll be like. For all we know, we'll start talking to it and start to feel things that we didn't know could exist at all. It might just explain its analogue of "color" to us in a way that we end up developing an entirely new type of sensory perception. I severely doubt it, but it is an idea worth noting. We don't yet know how alien AI will be. I'm trying to study the gap itself, to whatever degree it exists and to whatever degree I can establish that it does, will, and can be studied in advance.

>>7781203
>this is a strong red alert for crackpottery
I'm keenly aware of that. I doubt myself often.
>>
>>7781203
>it's still WELL past the threshold.
Talking about the posts prior to >>7781096 or do you mean the discussion that started after that point? It's going pretty well right now despite the bumpy start. At least by my measure.
>neglecting the algorithms
I'm not neglecting it, but at this stage in my research I /have/ relegated it out to a separate part of the overarching problem of AGI. The main problem I keep hitting when I try to write an algorithm is my thoughts go incoherent and there's no obvious way to reason about how to get them coherent again. Basically I keep hitting the upper limit of my brain's current capacity to contain state bits that correlate to actual logic/reason/meaning.

Strangely, it doesn't feel like normal exhaustion.

>Do elaborate?
I'm honestly not sure how. I need to "see" how you understood that paragraph before I can know how to explain further. I've been trying to lace the idea into every post ITT because I don't know how else to reliably communicate it. I can't formalize it so I need to get the idea across intuitively. I hope you'll forgive me on that. Creating strong discussions on that topic has been exceedingly difficult for many more people than me.

>>7781210
Algorithms aren't relevant to the specific issue I'm dealing with ITT.
>>7781258
>1. OP is autistic
Wish I could tell you different, but when it comes to discussing AI, everyone seems autistic. I can't guarantee my behavior has been unaffected by my thoughts on this subject.

I don't really know which parts you consider vague or pseudointellectual. If you're just referring to the opening post itself, yes, it was entirely suboptimal.
>>
>>7780345
Good grief. Using inapt analogies with an inapt technology to model real things.
Just stop already.
>>
>>7780345

What does Artificial Insemination have to do with any of this?
>>
File: handsome_stranger.jpg (191KB, 873x881px)
>>7780345

>AI researcher
>Asks for wisdom from teenage calc 1 dropouts on /sci/
>>
>>7781376
You don't know what you're doing. You aren't an "AI researcher", please stop using that term. If you want to learn AI, go learn some calculus, engineering linear algebra and engineering statistics and then go on for something like Bishop's book on Pattern Recognition.

The reason your thoughts go incoherent is that your mass of dribble that you call research is incoherent. Stop.
>>
>>7781409
In the field of cognition, there's no such thing as an invalid perspective. There are ignorant ones, uninformed ones, and outright dangerously stupid ones, but none are invalid.

Honestly though, beggars can't be choosers. All I'm looking for is a perspective other than my own. I can learn a lot from that alone.
>>7781412
I've studied each of those fields in depth and then some. I have to assume you're a fan of AI research and you're trying to protect your image of what an AI researcher looks like by refusing to accept that we're a bunch of kooks. It takes a special kind of mind to tackle this problem and it's not always the sane rigorous white-lab-coat wearing stereotype you're used to picturing when you think of science.

If I'm wrong, go ahead and demonstrate your expertise in each of those subjects. It shouldn't be hard to demonstrate your credentials on 4chan of all places, am I right?

Seriously though. My algorithms intelligence is far beyond anything I need to demonstrate to you for the sake of argument. It's literally beneath me to discuss it and I absolutely will not bother discussing it with you unless you have at least two years of programming experience under your belt. You need to earn that discussion if that's the discussion you want to have with me.

Until then, I'm dealing with a high level symbolic logic problem. If you don't have an understanding of semiotics or a willingness to gain such a thing, don't bother posting here. I'd rather have an empty thread with no replies than ad hominem shitposts.

>>7781235
>>7781271
>>7781318
>>7781376
These are the topic-relevant posts for those who are interested.
>>
>>7781435

Here's a valid perspective: You're fucking retarded
>>
File: 1451829440502.gif (36KB, 398x376px)
>>7781376
>The main problem I keep hitting when I try to write an algorithm is my thoughts go incoherent and there's no obvious way to reason about how to get them coherent again.

This means you need practice.

>Basically I keep hitting the upper limit of my brain's current capacity to contain state bits that correlate to actual logic/reason/meaning.

Pretending to be a robot isn't real introspection.

If you're serious about AI, study AI. Then apply whatever else you think may be relevant.
>>
>>7780345
>AI researcher here
When are you going to make me a robot anime wife you faggot?
>>
>>7781435
>My algorithms intelligence is far beyond anything I need to demonstrate to you

Really? I fucking disagree. Give me something basic that shows you know what you're doing. I have a background in computer science, competitive programming and mathematics, so that's what I can understand.

Until now you have not said a single thing that makes me believe you aren't some pseudointellectual autist with absolutely no idea of what you're doing. You're a troll, everyone realizes you're a troll and you've been doing this in many threads with no indication of the contrary. This last post is the cherry on the cake, an absolutely pathetic "oh i'm way too good so I don't need to prove it".

Give me anything or fuck off.
>>
>>7781451
If you truly believe that, then it is valid to you. This is, to me, evidence that you are cognizant and can make your own decisions. You also have a human body and exhibit the same characteristics as all the other humans. I happen to be a human too. By trivial induction, we can see that I'm cognizant and can make my own decisions. Now, since this IS induction, there's a chance our conclusion is going to prove to be wrong at some point.

Yes, I'm trying to make an AI that can make mistakes and be wrong the same way a human can. The ability to be wrong is a property of general intelligence as I currently understand it. It is, in some ways, what makes us who we are. The flaws matter just as much as any notion of formal "perfection" ever could.
>>7781470
>This means you need practice.
Precisely. That's when I stop programming and get back to the drawing board. The more I practice the more I find that I need to spend more time on the drawing board and less on the algorithms. I'm here because I've discovered many discrete pockets of ignorance in my reasoning.

>Pretending to be a robot isn't real introspection.
And this isn't one of them.

There's no particular reason to believe introspection will help us understand machine intelligence. There's no compelling reason to think we have to pretend to be robots in order to empathize with their digital perspective. In like manner, and for the same exact reasons that we can dismiss the notion, we also have no reason to believe that introspecting and pretending to be like a robot won't cause our brains to simulate highly accurate depictions of machine intellect.

The hard problem of AI is that we don't know how alien they're capable of being. My bet is that they can get very alien, but not so alien as the science fiction gets. I don't believe in living caricatures.
>>7781473
When I can be sure she won't go yandere on you.

Hell, you'll be lucky if we can get tsundere in your lifetime.
>>
>>7781492

Here's another valid perspective: You talk like a fag and your shit's all retarded
>>
>>7781483
>I fucking disagree.
It's not in your capacity to disagree with my skills. You can doubt it as you please, but there's no way for you to magically suss out such a thing with the posts I've made so far.
>I have a background in computer science, competitive programming and mathematics, so that's what I can understand.
Then that's what I need to know. There's no point for me to argue my authority to someone who has not a shred of competence to measure it to begin with. I hope you do realize your last post was an ad hominem with zero authority behind it. I want to drive that point home because it's a noisy one that I feel no need to be patient for.
>Give me something basic that shows you know what you're doing.
Fair enough. Your background makes you competent enough to judge my algorithms competence. It doesn't give you a magic way to ascertain which direction the field of AI research needs to go in to reach critical success. I'm not asking you to agree with the research path I've chosen, but I will ask you to respect that I chose it.

I'm a systems and game programmer. I taught myself at around 15 and I've been coding and learning ever since. 25 now, so that's ten years' experience. I've written algorithms for just about everything short of 3D animation, and I've dabbled in VRML as well. I could list all the languages that I know, but none of them would tell you anything about my /algorithms/ faculties, so I'd rather not distract from the matter with that.

My offer to you is a generous one:

What algorithm would you like to see me code?

As far as I'm concerned, every algorithm I've written so far was just practice for every algorithm I have yet to write. This isn't a science you just go and learn one day, it takes constant practice. You're lying to yourself if you claim you've never sat down to write code and drawn a blank about where to start or what to debug.
>>
>>7781516
>I'm a systems and game programmer, self taught
>I know many languages! :^)

top kek. linear time suffix array, now.
>>
>The main problem I keep hitting when I try to write an algorithm is my thoughts go incoherent and there's no obvious way to reason about how to get them coherent again.

>Seriously though. My algorithms intelligence is far beyond anything I need to demonstrate to you for the sake of argument.

Ok m8

>I don't believe in living caricatures.
https://www.youtube.com/watch?v=1fzXmJyolfY&t=0m33s
>>
popsci retard the thread
>>
>>7781519
>document.getElementById("m7781519").textContent.length
>128
Neat. This won't take long.
>>
>>7781544
>your post has 128 characters

are you fucking with me now?
>>
>>7781556
Just found it neat. I'm writing the algorithm now.
>>
>>7781519
not OP

Essentially you are asking him to google the linear time algorithm that already exists and re-implement it?

Because just intuitively discovering such an algorithm would not be a "now" proposition.

In that case, what does having him recreate the algorithm prove?
>>
>>7781562
the skew algorithm is pretty nasty, I want to see what happens
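For anyone following along at home, here is what the naive (non-linear-time) version of the challenge looks like, sketched in JavaScript since that's what OP is using in the console. The function name `suffixArray` is illustrative, not OP's code. Sorting suffix start indices with plain string comparison is O(n² log n) worst case; the DC3/"skew" algorithm being discussed is precisely what brings construction down to linear time.

```javascript
// Naive suffix array construction: sort every suffix start index by the
// suffix it begins. Each comparison may scan O(n) characters, so this is
// O(n^2 log n) worst case -- fine for small inputs, nowhere near linear time.
function suffixArray(s) {
  const idx = Array.from({ length: s.length }, (_, i) => i);
  idx.sort((a, b) => (s.slice(a) < s.slice(b) ? -1 : 1));
  return idx;
}

// The suffixes of "banana" sorted lexicographically start at 5, 3, 1, 0, 4, 2
// ("a", "ana", "anana", "banana", "na", "nana").
```

A genuine linear-time construction (DC3/skew) recursively sorts the suffixes starting at positions i mod 3 ≠ 0 and merges in the remainder, which is the "pretty nasty" part being alluded to above.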
>>
Sparse distributed representations
>>
>>7781557
...I guess it would've helped to point out that I'm using that post as the sample text to develop the algorithm. I chose JavaScript because I can just work in the JS Console without switching tabs.
>>
>>7781598
oh, you should generate text with 10^6 or so characters and expect it to run in a couple of seconds
>>
>>7781642
Yeah, this is just to get started. We can test the linear time aspect after it's written.
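The 10^6-character test input suggested upthread is easy to produce in the same console. A minimal sketch of a throwaway benchmark harness; `randomText` and `timeIt` are names of my own invention, nothing from the thread:

```javascript
// Generate n random characters drawn from a small alphabet -- disposable
// input for timing a suffix array (or any string algorithm) on large data.
function randomText(n, alphabet = "abcdefghijklmnopqrstuvwxyz") {
  let out = "";
  for (let i = 0; i < n; i++) {
    out += alphabet[Math.floor(Math.random() * alphabet.length)];
  }
  return out;
}

// Rough wall-clock timing in milliseconds; good enough to tell "a couple
// of seconds" apart from "minutes", which is all the linear-time check needs.
function timeIt(fn) {
  const t0 = Date.now();
  fn();
  return Date.now() - t0;
}
```

Doubling n and watching how the reported time grows is a quick sanity check on whatever complexity the finished algorithm actually has.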
>>
>>7780345
Where do you come from?
(Please no murica faggot)
>>
>>7781516
basic stuff:
>code an NMC in matlab or python
>code bagging ensemble in matlab or python
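For reference, both requests are only a few lines each. A hedged sketch in JavaScript rather than the requested MATLAB/Python, to stay consistent with the console snippets already in the thread; every function name here (`nmcFit`, `nmcPredict`, `baggingFit`, `baggingPredict`) is mine, not an established API:

```javascript
// Nearest Mean Classifier (NMC): each class is summarized by the mean of its
// training vectors; a new point gets the label of the closest class mean.
function nmcFit(X, y) {
  const sums = {}, counts = {};
  X.forEach((x, i) => {
    const label = y[i];
    if (!sums[label]) { sums[label] = x.map(() => 0); counts[label] = 0; }
    x.forEach((v, j) => { sums[label][j] += v; });
    counts[label] += 1;
  });
  const means = {};
  for (const label in sums) {
    means[label] = sums[label].map(v => v / counts[label]);
  }
  return means;
}

function nmcPredict(means, x) {
  let best = null, bestDist = Infinity;
  for (const label in means) {
    // Squared Euclidean distance to this class mean.
    const d = means[label].reduce((acc, m, j) => acc + (m - x[j]) ** 2, 0);
    if (d < bestDist) { bestDist = d; best = label; }
  }
  return best;
}

// Bagging: train several NMCs on bootstrap resamples (drawn with replacement)
// of the training set, then classify by majority vote.
function baggingFit(X, y, nModels = 10) {
  const models = [];
  for (let m = 0; m < nModels; m++) {
    const Xi = [], yi = [];
    for (let i = 0; i < X.length; i++) {
      const k = Math.floor(Math.random() * X.length);
      Xi.push(X[k]); yi.push(y[k]);
    }
    models.push(nmcFit(Xi, yi));
  }
  return models;
}

function baggingPredict(models, x) {
  const votes = {};
  models.forEach(m => {
    const label = nmcPredict(m, x);
    votes[label] = (votes[label] || 0) + 1;
  });
  return Object.keys(votes).reduce((a, b) => (votes[a] >= votes[b] ? a : b));
}
```

Worth noting that bagging helps most with high-variance learners; an NMC is quite stable, so the ensemble here is mainly a structural illustration of the technique.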
>>
>>7781655
I'll pick it up tomorrow. Interesting problem though.
>>
>>7782709
Shit man, you don't need to do this to yourself lol. I was expecting you to look at it and (if you had vast algorithm experience like you said) realize it's a hard problem.
>>
File: zahlen-wellen-lambdoma-1.gif (168KB, 600x600px)
>sci or popsci
>>
>>7783123
>realize it's a hard problem
And then what? I don't offer a challenge that I don't think I can handle. I'm not focusing on the linear time aspect yet, I'm just writing the algorithm as I naturally feel I should. It means something to me to publish my own code, written in my own style. If this is the best way to get my research taken seriously, it would be well worth it. Besides.

I like solving problems.
>>
>>7780990
You try to give the spark of self to something that had no self before.
>tfw you think he's not playing 'GOD'.
Plebs like you need to get the fuck off /sci/.
>>
File: 1433043912518.gif (845KB, 500x352px)
>>7784834
>Makes joke
>Belittled for joke
Well you know what sort of people they say don't get jokes? The autistic kind.
>>
>>7784854
Has it occurred to you that the joke wasn't funny?