
What are your opinions on the "hard problem" of consciousness



File: maxresdefault (1).jpg (110KB, 1920x1080px)
What are your opinions on the "hard problem" of consciousness as outlined by David Chalmers? Does it exist, or is it a bullshit attempt to bring dualism into the 21st century?

Note that what's being discussed here is any and all forms of subjectivity, not human consciousness vs. animal consciousness or anything like that.

Posted this on /sci/ earlier and didn't get any really novel answers, so I'm trying here.
>>
>>8835465
ugly
opinion dismissed
>>
>>8835465
it's probably a genuine problem

and if the p-zombies thought experiment works in the way that it needs to in order to show that the hard problem is in fact a problem, epiphenomenalism is true

that is, if it's conceivable that there could exist people just like us but lacking qualitative experience, these people would still talk about qualitative experience--or they wouldn't be like us. Talking about qualitative experience isn't the only outward expression of qualitative experience--there's laughing, crying, etc.--but since p-zombies must match all of our behavior, they must, among other things, talk about qualitative experience.

from this a disjunction follows: since beings without qualitative experiences can talk about qualitative experiences, the putative connection between talk of qualitative experiences and the experiences themselves isn't there. So either qualitative experiences don't exist, or else they exist but have no connection to our talk of them, and thus to the way we act. And qualitative experiences clearly do exist--p-zombies aren't us--so we should conclude that qualitative experiences don't impact the way we act or the things we say; if p-zombies make sense, epiphenomenalism is true.
if they don't, then there is no hard problem.

the truth of epiphenomenalism, in practical terms, means that the hard problem is going to be very hard to solve; we have a phenomenon that has no necessary connection to anything we can see--so how can we learn about it?
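
the shape of the argument, in case it helps (a propositional sketch only; the labels Z and E are mine, not Chalmers's):

```lean
-- Z : "p-zombies are possible"    E : "epiphenomenalism is true"
-- The post argues the conditional Z → E; classically, the disjunction
-- ¬Z ∨ E ("no p-zombies, or epiphenomenalism") then follows.
example (Z E : Prop) (h : Z → E) : ¬Z ∨ E :=
  (Classical.em Z).elim (fun hz => Or.inr (h hz)) (fun hnz => Or.inl hnz)
```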
>>
>>8835472
Refreshing honesty
It's been too long since I've posted on this board
>>
>>8835465
Lolita by Vladimir Nabokov
>>
>>8835502
How is epiphenomenalism problematic?
>>
>>8835546
it isn't itself problematic; it just isn't a solution to the hard problem of consciousness--it makes the problem harder to solve.

please try to read the post again
>>
>>8835472
>>8835504
Braindead /pol/cucks detected
>>
>>8835565
uggo detected
>>
>>8835465
Substance dualism is in all likelihood false, but I don't see the problem with property dualism. The mental is supervenient on the physical but not ontologically reducible to it. It rejects physicalism but is not as nuts as positing two substances
>>
>>8835502
The hard problem of consciousness is that we do not know how to make video cameras/computers 'know that they know': a dramatic sense of self, an understanding of the free ability to search their memories to produce new thoughts and tangents, to look at scenes and produce thoughts and related ones and videos of imagination. It is not known how to create something that is aware, that sees--what is it that ultimately does the seeing, and the knowing? The acknowledger, the aware acknowledger.
>>
>>8836124
you failed to respond to anything I said

also, the hard problem isn't "how do you create consciousness?" it's "why is there consciousness and how is consciousness possible?"

please try to write more clearly
>>
>>8836265

Consciousness is possible due to about a billion synchronized parlor tricks a second
>>
>>8835502
Thank you for taking us around that logical circle. I especially liked how you put no effort into being clear.

What was I supposed to get out of your post?
>>
>>8835502
I'm not convinced I am not a p-zombie. Maybe I'm just fooling myself into thinking I have experience through my narration about it.
>>
>>8836403
I'm sorry you had trouble following, the post was directed at the OP, who I assumed knows a little bit about the hard problem of consciousness

I assumed this because the prompt wasn't 'explain the hard problem of consciousness to me'; accordingly, I thought it would be good to delve right into some of the problem's implications

Google is your friend; Dave Chalmers puts plenty of papers online which you can access for free. Reading about p-zombies would be a good start.
>>
Tbqh, quantum mechanics will probably prove pan-psychism true, because I bet the fundamental building blocks of matter will turn out to be pure information, hence there will be no distinction between matter in any configuration.
>>
>>8836403
He did, but what he's saying in no way demonstrates epiphenomenalism.

It's much more plausible, honestly, that true p-zombies (those that are exactly the same as otherwise conscious beings) are simply impossible. Pseudo p-zombies (those which can ape specific, existing behaviors and demonstrated responses) might be possible, but their total ability to respond would likely have to be fundamentally limited in ways that a truly conscious person's mind would not be.
>>
>>8836431
the issue is that if we're p-zombies, there are no p-zombies; p-zombies are like us in every way *but* for the fact they're unconscious
this is why the conclusion is disjunctive; either there are no p-zombies (perhaps because there are no humans par excellence that the definition of p-zombies requires) or epiphenomenalism is true

anyway, that's a perfectly legitimate point of view
>>
>>8836462
please, again, the conclusion is a disjunction: either p-zombies are impossible or epiphenomenalism is true

I tend to think that p-zombies are possible, and so I tend to think that epiphenomenalism is true, but I haven't given any argument for this belief except the question-begging "it's obvious that we have qualitative experiences"--this is because the conclusion is a disjunction
>>
Chalmers's understanding of consciousness is fundamentally mistaken. Read the first couple pages of this paper: http://individual.utoronto.ca/benj/ae.pdf
>>
>>8835465
Chalmers needs to watch Westworld. Dr. Ford has the consciousness problem figured out.
>>
>>8836478
Chalmers's understanding of consciousness is not fundamentally mistaken. Read the first few pages of this paper:
http://consc.net/papers/contents.pdf

do you see how this works? in order to use a paper in an argument, you have to understand it well enough to use its arguments

If we're just going to appeal to authority, a professor at University of Toronto isn't going to trump a professor at NYU
>>
>>8836494
Professors at University of Toronto are the best, actually, as we've recently discovered.
>>
>>8836504
http://www.philosophicalgourmet.com/overall.asp
>>
File: TheGreatFather.jpg (29KB, 1000x562px)
>>8836504
The Great Father approves this message.
>>
>>8836494
There's a reason why I said you only have to read the first couple pages. What he describes is correct intuitively, and you don't need to know all the jargon to realize that.
>>
>>8836537
you also only need to read the first few pages to understand Chalmers's reply, and what he says is also correct intuitively
>>
>>8835616
>The mental is supervenient on the physical but not ontologically reducible to it.
How does this even make any sense?
>>
>>8836600
supervenience is a kind of reducibility, so it doesn't

I think what he meant is that the mental corresponds in some predictable way to the physical, but isn't reducible to the physical; this seems intuitive to me, and it seems like the sort of thing dualism requires
>>
>>8836613
But that's just substance dualism.
>>
>>8836629
yeah, I think what our buddy would want to say here is that the distinction between mental things and physical things is illusory because mental "things" aren't things; rather, they're, as the term 'mental states' implies, states.

And these mental states correspond in some predictable way to physical states, but since this correspondence isn't necessary--since we could imagine these physical states occurring without these mental states (p-zombies)--mental states aren't physical states.
>>
>>8836643
I'm not sure what a "state" is supposed to be in this context and what makes it different from a "thing". You're the first person ITT to use the word.
>>
>>8835616
Maybe you're just saying that there are things that in some sense rely on other things for their existence, but aren't reducible to those things. I don't think things like that exist, but I could be wrong.
>>
>>8836666
I don't really know, satan, I'm trying to present the views of another poster as charitably as I can

I think the idea is this: when you look at a soft cat, there is no entity called 'softness' that you can see; softness exists only insofar as things are soft--this makes it a property.
'cat', in contrast, is a "thing": there are entities called cats.

The idea is that mental states are like 'softness'. There is no mental substance; 'sadness' exists only insofar as things are sad.

This seems coherent to me; I'm curious to see if you agree
>>
>>8835465
it is

wait for it...

A HARD PROBLEM

and those who dismiss or reduce it to anything else do not understand it
>>
>>8835502
You can program a computer to say "I feel sad" under a given set of circumstances, and you can track the brain activity that leads to a person saying the same thing, so I don't think that would constitute an obstacle to the p-zombie experiment.
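
a minimal sketch of what I mean, in Python (the trigger and the canned reply are invented for illustration):

```python
# A program that reports an emotion under a fixed condition. There is
# nothing "behind" the report: input is matched, output is emitted.
def report_feeling(stimulus: str) -> str:
    if stimulus == "loss":  # purely syntactic trigger, chosen arbitrarily
        return "I feel sad"
    return "I feel fine"

print(report_feeling("loss"))  # prints: I feel sad
```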
>>
>>8836849
right, then epiphenomenalism is true
>>
>>8836643
>since we could imagine these physical states occurring without these mental states (p-zombies)

That you could conceive of that happening doesn't mean that it's possible.
>>
>>8836849
>You can program a computer to say "I feel sad" under a given set of circumstances, and you can track the brain activity that leads to a person saying the same thing, so I don't think that would constitute an obstacle to the p-zombie experiment.

The difference is that a computer doesn't understand semantics, it only understands syntax.

And the concept "I feel sad" cannot simply be reduced to syntax.

Read John Searle.
>>
>>8836886
yes it does; it means that it's logically possible
>>
>>8836888
What's special about a human brain as compared to a computer?
>>
>>8836898
Well first of all, a human brain actually understands semantics (i.e. meaning), and a computer doesn't. A computer simply follows a logical chain of syntactic commands.

Now, there is probably a reasonable explanation of why humans can understand semantics, but it's not easily reducible to the simple material in the brain.
>>
>>8836888
Well, the p-zombie experiment as I understand it is isolating the "computer-like" (i.e. non-qualitative) properties of the brain. I don't know shit about actual computing, I'm just drawing the comparison to information-processing machines w/o awareness.
>>
>>8836912
So does Searle think brains are magic, or just that they work differently? And if so, why couldn't you, in principle, make a computer that's like a brain?
>>
>>8836920
No, Searle just thinks that the abstraction the brain is capable of is way too complex to be simulated by a computer.

You can read this to get the gist of it:

https://en.wikipedia.org/wiki/Chinese_room
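
For a concrete picture, here's a toy version of the room's rule book as a lookup table (the entries are invented for illustration; the point is that applying them requires no understanding of Chinese):

```python
# Toy Chinese room: input symbols are mapped to output symbols via a
# fixed rule book; the operator matches shapes and understands nothing.
RULE_BOOK = {
    "你好吗?": "我很好。",      # illustrative entries only
    "你懂中文吗?": "当然懂。",
}

def operator(symbols: str) -> str:
    # pure symbol lookup: syntax without semantics
    return RULE_BOOK.get(symbols, "请再说一遍。")

print(operator("你好吗?"))  # the "room" answers with no comprehension
```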
>>
>>8836891
Just in your conceptual framework, yes. In reality, not necessarily so, and I'd wager that if such a thing is not possible, it will not be possible in any kind of existence whatsoever, even some hypothetical universe with alternate laws. Your logic will have no effect on whether or not this is actually the case if your conception of what is actually possible is mistaken.
>>
>>8836923
>No, Searle just thinks that the abstraction the brain is capable of is way too complex to be simulated by a computer.

Why? What should fundamentally prevent us from creating a computer architecture that is capable of such complexity?

The Chinese room illustrates little; it mostly just poses an additional homunculus-like figure acting as the CPU, one who doesn't necessarily understand what's going on for the same reason that a person hand-executing a program wouldn't have to understand the structure or purpose of the program.
>>
>>8836929
>it mostly just poses an additional homunculus like figure acting as the CPU

No, it shows that while a computer can simulate being intelligent, it isn't intelligent in reality, it's merely operating according to commands.
>>
How about you faggots post a video, or an article or a fucking explanation.
>>
>>8836934
>No, it shows that while a computer can simulate being intelligent, it isn't intelligent in reality,

It does not; it only shows that the sentience of a small part of the structure can be irrelevant to the operation of the whole, not that there is necessarily no sentience in the whole.

There might be severe issues with the nature of the intelligence produced (as the guy could decide to just suddenly not follow instructions or go away at any point), and this may impact the consciousness irrevocably, but the existence of the guy is simply a mechanical difference. The problem is somewhat insincere, though, since it ignores the logistics of everything and how massive the 'room' would have to be in order to be functional.
>>
>>8836934
Is any deterministic algorithm counted as "operating according to commands"?
>>
>>8836962
Either way, the problem of consciousness cannot be reduced easily to material reasoning. And that's the whole point of the "hard problem" to begin with, and whichever position you take is simply going to be an assumption.
>>
>>8836964
Fundamentally yes.
>>
>>8836977
So randomness and/or free will is necessary for intelligence?
>>
>>8836984
I don't think it's "necessary", but since we lack infinite knowledge, it's hard to quantify the exact emergence of self-consciousness in a physical system like humans.

So it's not like the brain itself has some mysterious quality by definition, it's just that it requires knowledge on an absolutely gigantic scale simply to understand one brain and its relation to its expression.
>>
>>8836962
As with most thought experiments, the logistics are irrelevant. Saying it's not possible just dodges the question and gives no meaningful answer.

The conclusion is that the computer has no sentience because, clearly, a room full of books has no sentience and the result is independent of the operator's sentience. A problem here is that the room was intelligently designed, so the computer isn't completely devoid of sentience, since a sentient being had to build the room and decide what answers would be written in the books. It has as much consciousness as a character in a novel: one that is not real but relies on the author's consciousness.
>>
>>8837015
>Saying it's not possible just dodges the question and gives no meaningful answer.

That's not what I was implying, though. Rather, it's that the thought experiment gives off the impression that some guy walking around a normal room slotting cards in various ways is somehow supposed to offer the equivalent of a human brain, which is understandably a weird thing to stomach. It just makes more sense when you realize by just how much the guy would be dwarfed by all the material he'd have to handle.

> a room full of books has no sentience

Of course, that's part of my point.

>the result is independent of the operator's sentience
This is also true, but the subtle point is that if this does produce consciousness, it would be contingent on the operator's actions, regardless of whether he was alive or was a machine with no thought process independent of the actions he carried out (in the same way an individual neuron could be said not to think).

>since a sentient being had build the room and decide what answers would be written in the books

This is far less relevant to the question of whether a computer could be conscious than you think. See, this is the problem, and what I find so insidious about the whole 'experiment': it leads you to think of a particular kind of simplistic computer model that just looks up predefined rules directly corresponding to sentences, in a way anyone could be trained to carry out in a short time frame. This in no way precludes the possibility of far more sophisticated, integrated mental networks that could still be implemented in terms of the Chinese cards, but in ways that involve ridiculously complex computations with cyclic, self-referencing cybernetic elements that wouldn't play out anywhere near real time.
>>
>>8836923
>defending the most refuted intuition pump of the 20th century
>>
>>8836494
>If we're just going to appeal to authority, a professor at University of Toronto isn't going to trump a professor at NYU
>implying some cuck professor at NYU who makes their students read less white males could trump /OURGUY/ JORDAN PETERSON
>>
>>8835465
https://en.wikipedia.org/wiki/Consciousness_Explained
There's no "hard problem" of consciousness.
>>
>>8836613
>supervenience is a kind of reducibility, so it doesn't

the argument he's making is precisely that it isn't. this language game directly concerns what he sees to be the ontological status of qualia. and anyway it doesn't take much imagination to assert that effects are not always "reducible" to their causes.
>>
>>8836891

just so we're clear, this anon went from questioning materialistic accounts, to substance dualism, and now finally to semiological idealism. for the trash bin.
>>
>>8837861
>Dennett
>>
>>8837903
no, that isn't the argument that he's trying to make; you don't get your own language game for a post; he misused a technical term

read the rest of my post--it's in accord with what you just said

>>8837907
there are lots of anons in this thread, you seem to be having trouble following
>>
>autists arguing over a problem they don't understand
>hurr i cant define my terms
>hurr word games=arguments
>muh free will metaphysics
>what do you mean that the issue is irrelevant and nonsensical???? How would I get citations???
never change philosophycucks
>>
>>8838822
almost no debate has happened here; this thread is filled with people making no attempt to understand one another and so talking past each other

and this "hurr word games=arguments" attitude is at least partly responsible; if you don't know what an established word means, please Google it--if an anon misused it, great, point that out

please change, /lit/
>>
mechanisms, interconnected in such a way, made of a spectrum of hard to soft, of faster to slower, bigger to smaller, that continuously stream data in such a way that multiple components can register multiple instances of data at once, and much of the data is stored in memory, and all new information is back-checked and compared against stored memory; thus the viewer/s of data, memory, the sensory, thought realm, is all visualized, and it is seen that the viewer has control, to look at a memory and see things in the memory that may compare to others, to take two aspects of two different memories, to put them together and make new thought, muh feedback loops, holograms, mirrors, different characteristics of matter, light, electricity, liquid, gas, solid,

an interesting thing is: are insects conscious? though, well, it would make sense; in order for something to be big, it must first be small, nature couldn't poof out a large mind first, it had to start small, which just goes to further make interesting the problem: that something so complex, and apparently technical, sophisticated, delicate, powerful, can be organized so small, in an insect's head, at the size of atoms and molecules, and how mere molecules on a small small scale can be organized, tightly, compactly, into a consciousness machine
>>
>>8836964
>>8836977
Just an interesting piece of input: there's a relatively simple theorem in formal language theory (aka automata theory) that says that any nondeterministic finite automaton is equivalent to some deterministic finite automaton (the proof is the "subset construction").

That being said, the relevant notion of "deterministic" is perhaps slightly different from what you guys are talking about (assuming that by non-deterministic you mean completely random/unpredictable).
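
A minimal sketch of that construction in Python, under the textbook definitions (the example NFA at the end is invented for illustration):

```python
from itertools import chain

def nfa_to_dfa(nfa, start, accepts, alphabet):
    """Subset construction: each DFA state is a frozenset of NFA states."""
    start_set = frozenset([start])
    seen, todo, dfa = {start_set}, [start_set], {}
    while todo:
        current = todo.pop()
        for sym in alphabet:
            # union of all NFA moves from the states in `current` on `sym`
            nxt = frozenset(chain.from_iterable(
                nfa.get((s, sym), ()) for s in current))
            dfa[(current, sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    dfa_accepts = {S for S in seen if S & accepts}
    return dfa, start_set, dfa_accepts

# Toy NFA over {0, 1} accepting strings that end in "1":
nfa = {("q0", "0"): {"q0"}, ("q0", "1"): {"q0", "q1"}}
dfa, dfa_start, dfa_acc = nfa_to_dfa(nfa, "q0", {"q1"}, "01")
```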
>>
>>8835465
DUDE WEED LMAO
>>
>>8836923
Not that I find Searle's argument convincing, but even if I did, that's not quite an accurate interpretation of Searle's argument in the first place. Searle wants to make an argument against the possibility of a computational basis for mental content (i.e. an argument against functionalist accounts of semantic content) in principle. Demonstrating some sort of practical limitation will not, therefore, be sufficient. Thus he must categorically disprove the possibility of this in general. If his argument simply demonstrated that computers aren't "complex enough", that would achieve nothing, because in this case we'd just be dealing with a kind of practical limitation. As we know, even very simple models of computation (namely any model equivalent in power to the type-0 grammars of the Chomsky hierarchy) are formally equivalent to even the most powerful logical systems in that they can produce an answer to any decidable logical problem (i.e. a problem for which we can definitively provide an "answer", or more broadly, an output, in a finite number of steps).

Thus, Searle is precisely attempting to show that the problem of semantic content has NOTHING to do with complexity. The issue isn't that computers aren't complex enough (it can be proven that they're "complex" enough to handle any decidable logical problem); the issue is instead that "semantics" or "meaning" has nothing to do with information processing in the first place.
>>
>>8838910

Right now, closing my eyes, I see a basketball, and now I see a fish, a goldfish, now a catfish.

How is what I am seeing in my head produced? How small are the basketball and fish, physically, in my head? Is there just a screen domain of that size, with its resolution and pixels? Is inner vision always only seen in 2D (ish)? I have seen a basketball; the light hit the basketball, went into my eye, and that image was stored in memory, similar to how a photo is taken...?

When I think of basketball, simply the cursor of my attention thinks of the word basketball, or feels the bumps, or sees the color orange, or sees people playing basketball; then I want to close my eyes and see how well I can see a basketball on the inside. So I have memories of times I have seen and thought of a basketball, different close-ups, different details, bounce, sound, and in my brain is the equivalent of the light signal, stored, that reflected off the basketball. So when I want to see a basketball in my head, I have to evoke that stored light signal to be released (the stored light signal attached, filed, under potentially more general concepts: round, orange; these general concepts--if I think of round, I can think of any type of ball; if I think of orange, I can think of many types of orange--but that basketball memory should be there).
>>
>>8839382
If a robot was programmed such that it was conscious, that it has visions, that it had memories, even programmed so that it feels bad when its outer casing is damaged, and it had multiple trains of thought, or even one: "I know I am speaking these words right now, I have a lexicon of many words, and I have the choice right now to choose these; look at how dynamic I am, look at how free and robust I am; I have millions of images stored in my memory; I know that I have the ability, at random, or with seeking, or in relation to anything from my environment, or any words I choose, to choose to bring to attention any image in my collection; what mechanisms are so complex and well put together as to allow me to choose these words with thought, but little thought, just my such suchness, just my knowledge that I am me and I am on; what is this "I" that 'sees' inner information; how do we avoid the infinite regress: a lens receiving a light onto a screen reflecting into a lens down a wire to light up the back of a screen, which signals into a wire a light which sends to a lens, to a screen, to a wire, to a light, to a lens, to a screen; somewhere in between all this, and of all this, is me, this I, this awareness of awareness; so is it as simple, conceptually, as playing a video and watching it with one of my inner eyes, and having (one of) my other inner eyes look at my lexicon of words, and memory of images (and videos), and try to see which words are applicable... i-is this consciousness? I can experience a multitude at once (especially thanks to the incredible quickness of light), so the me that is ultimate, I, the source of this, is at a little bit of a lag; I can't possibly speak as fast as the light in my head can move... But, to have potentially multiple voices: we think maybe starting with these multiple facets might make it easier to understand; instead of one inner mind, one screen, one neuron, one memory bank, let's imagine if we just shoved a bunch of stuff in there, to see if that would make it easier: multiple screens, multiple memory banks, multiple eyes, multiple wires (2 brains are better than 1)... I think it's all about the right geometry of connections; it is a very difficult puzzle: like if you just had every car part lying around, not put together, there are relatively very few organizations of the total materials that produce a functioning car; likewise, how many types of molecules and material and concept and computer stuff are there, how many different fundamental conceptual types of consciousness production might there be?
>>
>>8839547
If a robot was programmed that it was conscious...

shown images of apples ("this is an apple"), given all scientific information about apples, even mechanics to simulate smell and texture for the robot brain, and this done for everything... so like a Watson or something (but we are trying to consider and achieve and ponder consciousness)

and it was given all the information about our understanding of life and inert matter, and consciousness, awareness, the idea of thinking, and we told the robot, it can produce thoughts, it can move its outer body, and inner 'focus' and it controls its experience relatively like this, and it was shown and taught about humans, humans have identities, selves, 'centers' of acknowledgement, producers of thought and word, purpose, intent,

I suppose it must have its own value system, but that's obvious: otherwise, looking around, a random pixel of innocuous dust is as important as this person's life, or the fact that there is a high-speed train heading toward me:

it must have an intimate/active relation with an act of categorizing its experience...perhaps.

If a robot/AI was taught all this, and had enough breadth of material to allow continuity, smooth transition of states, and interrelation of many inner categorizing components (there is one central 'sphere', imagination, thought, where thinking takes place... maybe... I mean, obviously this is the I, the you, everything you have ever seen, when thinking of a math problem in your head, or daydreaming, or thinking of what to say to the cashier, or seeing what you want to eat, or imagining where you might be in 2 years; even though maybe it is said that all of these different memories and components are taken and overlaid from different sectors of the brain, there is still the essential I, the me writing these words, that has been present for all of those various types of thinking): so the robot may have many types; well, I was going to say this can be much more complex than a person, but yes, in ways it can, but there is also the similar: a person can remember and experience and make sense of all sorts of various graphs and charts and programs and functions: we imagine a robot AI may be able to have stored as components, in its programming, all sorts of functions and charts, and symbolic relations, and auto-computations; perhaps the AI would have to be able to witness some of these... though I don't know, understand math, understand the computations that go into its ability to think...
>>
>>8840097

Anyway, if a robot was programmed with a semi-continual stream of sense data that was being stored in one of its memories while it was perceiving this sense data, while also judging it, using its prior knowledge, and also seeking new knowledge by comparing it to its old, by seeing if there is any new information and seeing if it can make sense of the new information; if it was aware of all words, and phrases, and external events of the world, human circumstance; if it understood what logic, thinking, questioning, wondering, reason, purpose is; if it could move its body of its own accord; if it knew that it could stop and not say anything for 5 years if it 'felt' like it, didn't want to, or it could say any of the words it knew in any order, or attempt to make up new words, or attempt to make up new sounds--then might it not be conscious? Though this might be entirely what at least one suggestion was against: that it's much more than language, or gears tossing language at one another, that makes "that which 'awareness' is"
>>
>>8838801

caught you
>>
>>8836446
Lol, so you're bad at reading as well? I understood everything in your post. The clarity of your sentence-by-sentence writing is fucking atrocious, though. You have no idea how to link your predicates and conclusions. I hope you are very new to writing out your thoughts
>>
>>8837927
Why?
>>
>>8840634
this is fine, but your calling my post a logical circle makes me question your claim to understand it

you might think it's poorly written, or you might think it's wrong, but it isn't a circular argument; if you think it is, do your best to explain why

also, one links premises and conclusions

again, I'm sorry to hear that you had trouble following along, I know that can be very frustrating. I certainly could have been more clear; with that said, several other anons understood my post perfectly well and gave responses that indicated that they understood what I was trying to convey
>>
As a fiction reader passing by, I don't understand ANYTHING in this thread.

:D
>>
>>8840744
don't worry, neither do half of the posters