What questions would you have an incredibly advanced and powerful being ask the party to determine if they were sentient or not?
Presumably some ancient being that was shitting around in some dungeon that the party stumbles upon. Oldie has never seen humanoids before and is trying to figure out what the fuck he's dealing with.
The entity from beyond the stars glides down from the pillar of light and touches down gently before you.
He speaks with a booming voice that shakes you to your very core.
Entity: Describe what Marsellus Wallace looks like!
Entity: Does he look like a bitch?!
If I had a suspicion that it was a sapient genius squirrel, hell yeah I would. I'd talk the shit out of that squirrel. And clearly this thing at least suspects sentience in the party or he wouldn't be asking in the first place.
You are walking through the city when you come across a child begging for food. You have food in your possession. The child is unable to take the food from you, and you are not compelled to act in any way. You might give the food to the child, walk away, or perform any other action as you choose.
What considerations would you make before acting?
Sentient creatures will object to having their various orifices invaded. Or they will enjoy it.
Either way shows some degree of sentience.
A TV doesn't care if you rape it (trust me).
What is your place in the world?
<"place", not meaning physical location, is an abstract concept
<"world" can indicate the creature's understanding of an area greater than what it has personally experienced
<any philosophical language can indicate that this species has spent a considerable amount of time thinking, rather than digging in the dirt or chasing other beasts
>Why would he have a conversation with something that might not be sentient?
Look at your own question. It "might not" be sentient which implies that you have reason to believe it might be.
>Would you converse with a squirrel?
No, because I have no doubt about their inability to hold a conversation.
The implication is that we have a powerful being who isn't sure that the PCs aren't something like Mynah birds.
Look, somebody actually knows the definition of sentience.
Sentience is not rigorously defined in any cognitive science I'm aware of, but the general usage is "awareness" of the world around you; perhaps the ability to model the world and place yourself as a model within it, rather than robotically responding to stimuli.
Sapience is the quality of being "intelligent" in the sense that humans are and other animals mostly aren't. Of course, it might just be a matter of degree rather than specific qualities. Plenty of animals use tools and even use something like language. Meta-cognition is used as the benchmark by some.
Nigga got scared cause he knew some swole nigga bout to roll up on him.
The fuck, man you ackst me a question, course I fuckin answered it, my momma didn' raise me to be rude. Niggas up here be trippin, shit.
It could have a test similar to this.
A surface is displayed with four colors on it. Four different tones are played, each tone corresponds to a color which brightens as the tone is played.
This is repeated once.
The third time, the colors are split into sets of tally bars: |, ||, |||, ||||.
This is repeated once.
At this point, the display is reset to the colors and the tone is played again, but the colors do not light; it is up to the organism to indicate, in some way, which color corresponds to the tone.
This is repeated once.
Then the colors are replaced by the set of bars, and again the tone is played twice.
If the organism can match up the tones, colors, and bars then it is highly probable that it is sentient because it can recognize patterns, form associations, solve incomplete patterns, and understand the basic concept of numbers.
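Not that an eldritch GM prop needs code, but here's a minimal Python sketch of how that scoring could work, assuming the organism's answers come back as simple picks; the function names (demonstrate, score_subject) and the specific tones are made up for illustration.

```python
import random

# Hypothetical sketch of the tone/color/bar test described above.
COLORS = ["red", "green", "blue", "yellow"]
TONES = [220, 330, 440, 550]   # Hz; one tone per color
BARS = [1, 2, 3, 4]            # tally bars shown in place of the colors

def demonstrate():
    """Phases 1-2: play each tone while its color (and later its bar count) lights up."""
    pairing = list(zip(TONES, COLORS, BARS))
    random.shuffle(pairing)    # the association itself is arbitrary
    return pairing

def score_subject(pairing, answer_color, answer_bars):
    """Phases 3-4: play each tone with nothing lit and ask the organism to pick
    the matching color, then the matching bar count. The two callables stand in
    for however the creature actually indicates its answer."""
    correct = 0
    for tone, color, bars in pairing:
        correct += (answer_color(tone) == color)
        correct += (answer_bars(tone) == bars)
    return correct / (2 * len(pairing))   # fraction of associations recovered

if __name__ == "__main__":
    pairing = demonstrate()
    color_memory = {t: c for t, c, _ in pairing}
    bar_memory = {t: b for t, _, b in pairing}
    # A subject that actually formed the associations scores 1.0; a beast
    # guessing at random hovers around 0.25.
    print(score_subject(pairing, color_memory.get, bar_memory.get))
```

The point isn't the code; it's that the whole test reduces to recovering an arbitrary mapping the subject was only ever shown twice.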
Through the power of the human hivemind, someone already made an infographic about this topic.
Their puzzles don't need sentience to solve; you do know they had a bot work on the debugging and puzzle solving, right?
Scripted pseudo-intellectualization of retarded-tier sophistry doesn't determine sentience either.
Honestly the only reason a sentience/sapience test would be difficult is if your higher being is of an incredibly different structure. You or I could easily, easily devise tests for creatures that are made of flesh and move around and interact with things mostly like us.
But what if your higher being communicates through pheromones? Or they're a crystalline being that only perceives the world by bouncing tones vibrated out of its body, and thinks you're all stupid animals because you can't respond to its tones with pitch-perfect clarity.
If it's basically just a million-year-old super wizard elf or some shit, he either quickly figures out your party is sapient or he's a pretentious jackass who believes in the idea of philosophical zombies.
Or shit, if the species is technologically advanced enough for rapid interstellar travel, then it's highly probable that their technology is basically space magic as far as humans are concerned, and they would probably be able to determine our sentience through means we either can't use ourselves or don't even understand.
Or hell, they might be of such near-transcendent intelligence that to them we aren't really any more sentient than an ant or lower animal would seem to us. To them we might seem like automata following a slightly more complex program than every other automata around us.
I don't know, but I would definitely ask it if it could prove it was sentient back.
Don't feel bad, it was a stupid fucking dog.
It totally would have let you stuff a gas pump down its throat.
But nothing will let you stuff something up its ass without first consulting it, that's just a knee-jerk reaction to getting butt touches.
I know what everyone's getting at, but context always helps.
And therein lies the rub. We've got animals who are smarter than Goddamn ants; crows are fucking brilliant compared to other animals, and clearly have some sort of system of communication because if you piss off one, he can spread the word about which human is a huge jackhole. But you know. There's people who'll go "Yeah and? That's just a more complex trick" and then above humanity you'll have beings advanced enough to go
"Just because it has written language doesn't mean it's sapient like you or I. Thats just a dumb stumbling of evolution. Scrawling with a stick. Oh, how many dumb animals have discovered nuclear fusion? Pfffff. Come back when they can communicate verbally on multiple dimensions."
But autonomic reaction doesn't at all prove sentience. Lower animals all function that way; like machines, they have programmed responses to specific stimuli and unerringly carry out that reaction. Flies and other insects flying into lights are a good example of this: they don't do it because they like light or want to be near it, but because they are programmed to orient themselves towards the light.
To prove something is sentient, you've got to prove it can do more than just follow preprogrammed instincts. Pattern recognition and long-term memory, for example, or the ability to solve puzzles that demand some level of planning.
OK, this is easy, it's writing.
Light or sound? How technical are you being when you say "dimensional?"
Poking people with a stick? Electricity? a series of dots/lines?
And then you get into bullshit neuroethics arguments about how, if you think about it, our personalities, minds, etc, are just more complex preprogrammed instructions conflicting and interacting.
You can always have a more advanced being argue "they're just following instructions, they're really just a fleshy machine"
>solve puzzles that demand some level of planning.
I preferred the original thread.
How do you say 'We come in peace' when the very words are an act of war?
Well, I guess the humans the thing meets will be clothed/armored, with different clothes/armor, that's a start.
If I ever saw a group of squirrels wearing clothes and using tools, I would consider trying to talk to them, or at least try to interact, since they'd seem smart enough to be sentient.
Look up Koko the gorilla
Humans continually deny that she is capable of communication, even though she knows ~1000 signs in ASL, has opinions, and can attempt deception, like blaming her kitten for breaking some ceramic fixture.
Some humans still claim she's just "acting" and not sentient.
The same thing could be applied to humans. In the case of the Chinese room, the human brain is the entire room itself, with the human being one of its components. The components of the brain communicate with each other and then send out a response via electrical signals to another part of the body.
However the single part of the brain is not sentient, the entire thing is.
By comparing it to a machine, the language processing is not the part that is sentient, it's the machine itself. The entire room.
>By comparing it to a machine, the language processing is not the part that is sentient, it's the machine itself. The entire room.
Searle's argument is famously about the differences between the way a computer program and a brain utilise symbols, so I'm confused how this follows.
>communicate at least on the human level
This is a very bad way of putting it. What is 'communicating on a human level'? While the Chinese room may be adept at replying to, say, requests to choose a takeaway, questions that ask the Chinese room about its future or about specific individuals not contained in its libraries will yield nonsense results.
Searle's argument claimed that a machine cannot be sentient because the human does not understand the characters themselves; he only matches them.
However, this is exactly what the human brain does. It matches characters to words, which is then matched to memories, then it calculates a response.
In this case, the character recognition part of the brain is the human in the locked room. That part is not the part that is sentient, but the entire room is.
The machine is exactly the same. The human in the locked room is only a part of the machine, it is not the machine itself. It is incorrect to state that just because the language processing cannot "understand" what it is reading, that the entire thing is not sentient. Otherwise you could make the same argument that humans are not sentient, we are just a collection of neurons firing at each other.
>However, this is exactly what the human brain does. It matches characters to words, which is then matched to memories, then it calculates a response.
Check back on Searle's second axiom. He explains it better than I could. I've attached his explanation here because I'm used to seeing people argue against versions of the argument poorly distilled from the introduction of the Wikipedia article.
Yes, the Chinese room idea. Because a room big enough to contain everything is clearly less intelligent than someone that understands Chinese.
This would be a lot easier to discuss if you all could keep sapience and sentience straight. I'm going to assume OP means sapience, but under the wording of his problem literally any question that they'd react to is a test of sentience.
Anyway, the easiest way to do it? Ask them to solve a simple puzzle which should have an easy solution, but make a very clear error in it that makes it unsolvable. If they provide the "solution" or indicate it's unsolvable, they haven't proven themselves sapient, although they haven't proven themselves nonsapient either. Further tests are required. However, if they notice the error and try to bring it to your attention, or take steps to correct it to render the puzzle solvable, then they have.
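One concrete way to build that "clearly broken" puzzle, if you want a sketch: take an 8-puzzle and ruin its parity. The even-inversion solvability rule for a 3x3 sliding puzzle is standard; everything else below is just illustrative scaffolding.

```python
def inversions(tiles):
    """Count out-of-order pairs, ignoring the blank (0)."""
    seq = [t for t in tiles if t != 0]
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq))
                 if seq[i] > seq[j])

def solvable_3x3(tiles):
    # A 3x3 sliding puzzle is solvable iff its inversion count is even.
    return inversions(tiles) % 2 == 0

honest = [1, 2, 3, 4, 5, 6, 7, 8, 0]   # trivially solvable (it's the goal state)
broken = [2, 1, 3, 4, 5, 6, 7, 8, 0]   # one swapped pair: parity is now odd, so
                                        # no sequence of legal moves can solve it

assert solvable_3x3(honest)
assert not solvable_3x3(broken)
```

A beast does nothing, a mimic spits back a memorized "solution", and a sapient subject eventually notices the two swapped tiles and either complains or swaps them back.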
A Chinese person is outside of the door, and puts a question under the door. The non-Chinese speaker takes the question, and asks a Chinese person in the room how to respond. The non-Chinese speaker then gives the response to the person outside the door. Can the person outside the door conclude that the non-Chinese speaker in the room speaks Chinese?
How about this: A Chinese person passes a question under the door in Chinese. The non-Chinese speaker gives the question to a second Chinese person who is in the room with them. The second Chinese person writes down the correct response in a book and gives it to the non-Chinese speaker, who uses the book to give the person outside the door the correct response. Can the Chinese speaker conclude that the non-Chinese speaker inside the door actually speaks Chinese?
Now the last part: The Chinese person inside the room writes a book with all possible questions and responses and gives it to the non-Chinese speaker and then leaves. A different Chinese speaker gives the non-Chinese speaker a question, and the non-Chinese speaker uses the book to give a response. This is identical to the original Chinese room. Can the Chinese speaker conclude that the non-Chinese speaker in the room speaks Chinese?
No. But the entire system that the Chinese person is interacting with does speak Chinese. There is some knowledge of Chinese in that room. Whether that is in the first example where the knowledge is stored in the second Chinese speaker's brain and given directly, the second example where the knowledge is in the second Chinese speaker's brain and transferred into a book as needed, or the third example where all of the knowledge is transferred into the book beforehand. Something in that room knows Chinese.
If we are replacing Chinese with intelligence, then whoever is outside the door might not be able to conclude that the person/machine inside the room is intelligent, but the entire system certainly is.
His definition of software is the most retarded thing I've ever heard.
>The next axiom is just a reminder of...
OOP completely negates this; you can attach 'mental contents' (attributes) to any object.
If you wrote down every single interaction within a fictional human brain, down to the fundamental matter, to the point of being a full simulation of the brain written down on paper, would that brain be conscious?
This is an actual thing in my setting, so I'm wondering if it'd be a representation of the thing or if it could be argued to be a thing of its own.
No, it needs a device which can read, write, and execute instructions to be considered intelligent.
Which, if we define "The room" to mean the system within the room rather than referring to merely the physical walls which describe an area, it is; an intelligent human is part of the system, therefore the system is intelligent.
Of course not. It's ink on paper. A blob of ink in the shape of the word "neuron" is not going to interact with other blobs of ink the way a neuron would interact with other neurons.
Function is a fundamental aspect of an object. A representation which does not reflect function is incomplete, while a perfect representation is simply the object itself.
Now we've reached the hazy grey area of "how much intelligence does it take to be Intelligent."
Prevailing reasoning holds that a computer is not intelligent because it cannot truly modify instructions; rather, it can only be instructed to modify its instructions in the manner in which it is instructed. The machine is a slave to input and cannot generate output in a vacuum. A human mind, meanwhile, does not need to receive instructions in order to modify instructions, nor does it strictly require being instructed in how to modify them in order to do so; a coin-counting machine told "Set.Var1=Var1+Var2", where Var1 is the total number of coins, will fail if Var1 was not declared with a value prior to operation and will return an inaccurate value if it is not reset between tasks, while a human counting coins would establish on its own that the total should be zero at the start of the task. This degree of abstraction is absent from current computers, and they are therefore not considered intelligent.
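To make that concrete, here's a rough Python translation of the coin-counter (the Set.Var1 pseudocode is the poster's; the class and names below are mine, purely for illustration):

```python
class CoinCounter:
    """Does exactly what it's told: Set.Var1 = Var1 + Var2, nothing more."""

    def __init__(self):
        self.total = 0          # declared once, at power-on

    def count_batch(self, coins):
        self.total += coins     # never questions whether the total should restart
        return self.total

machine = CoinCounter()
print(machine.count_batch(10_000))  # Monday's job: 10000
print(machine.count_batch(7))       # Tuesday's job: 10007 -- Monday's total is
                                    # silently carried into a brand-new task
```

A human handed the same Tuesday job assumes, unprompted, that the count starts at zero; the machine needs that assumption written in as yet another instruction.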
On the other hand, one may argue that the human mind does in fact require and receive such instructions, and simply receives them from the pre-made lists stored within itself. Thus, a human is not truly more intelligent than a computer, merely more comprehensive. The topic is still debated.
It's a word we invented, to describe a concept we invented, to define something we observe about ourselves. You might as well say we don't have height. Such as height is, we have it. Sentience is the same: such as it is, it is something we have. That is all it is. If we didn't have it, it wouldn't be.
I meant more like a series of notes determining that quark #sr346d/cd.ew/2993 was moving at a velocity of X to Y co-ordinates in space and time, but okay. Am I right in assuming the deal is that because that which is written on the paper cannot react to anything by itself, but must instead be manually written down as doing so, it is not conscious?
Basically. The being who records the interaction is the one who acts. The paper is just paper, and does what paper does, which is mostly sit there, molder, break up in water, and burn well.
Wouldn't it be, at the very least, not following its pre-programmed rules if it were doing something other than the task it was made for?
Using a human in this, or an AI - make them follow a task that leads to eternal imprisonment - they would have to foresee that with limited clues, act against their instinct, and act 'outside the box'.
It would be way more interesting if the PCs were also testing that being for sapience.
You wouldn't? Dude, it's a talking animal. All my life I've wanted to have a conversation with a squirrel, a dog, cat, birds, snakes, seals, trees, etc.
That's Laplace's demon. Except you can't determine position and momentum with perfect, or even good enough, accuracy for such particles.
Motherfucking Heisenberg uncertainty principle.
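For reference, the relation being invoked is just the textbook uncertainty inequality, nothing specific to this thread:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

Laplace's demon needs exact position and momentum for every particle, and that product has a hard floor.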
The amendment is because, while I'm not sure I can speak for intelligence itself, intelligently speaking a language at the very least requires memory, so your preprogrammed instructions are not PERMANENT instructions. But frankly, the origin of a thing is irrelevant to the nature of the thing, all else being equal.
And well, ambition is not some core function to sapience. Why on earth SHOULD it desire to act outside the box?
Present them with a finely detailed depiction of futanari (or perhaps even materialize one, if that's within the being's power). Clearly explain what it is. Then ask if a man having sexual intercourse with it would be gay or not.
The word you guys are looking for is sapient, not sentient.
And seriously, it got vaguely referenced earlier, but read Blindsight. Excellent sci-fi novel, and it deals with this exact topic in some pretty profound ways.
Does a thing have to be conscious or self-aware to be considered intelligent?
Or just be able to solve complex problems?
Is there a solid definition of consciousness yet?
>while a human counting coins would establish on its own that the total should be zero at the start of the task
But humans are taught the concept of "zero" or "void" through experience and mathematics, or am I overthinking this? I mean, our brains are not born with the concept of zero; we learn it, and only then are we capable of abstracting the zero-coins state.
Yeah, but that's still learning something; without that knowledge we couldn't perform the task, just like the computer. So the difference is not whether we have that knowledge or not, but whether we have the capacity to make something out of a new scenario (in this case, a void set).
Easy, you just do some fucking math with it and they'll figure it out. It's just the additive identity.
You can use objects to teach the symbols for integers, then use integers to teach addition, and then you can show what 0 is. And then you can try to explain multiplication; 0's behavior under multiplication shows itself.
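Spelled out, the two properties being leaned on are just the standard identities, nothing exotic:

```latex
a + 0 = a \qquad a \times 0 = 0
```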
>Gentlemen, we've made a breakthrough! We showed the alien a mirror, and it made a series of noises we believe translate to "Ugh god, I look so fat, why did I choose to wear THIS jumper, it's such an awful colour."
>What questions would you have an incredibly advanced and powerful being ask the party to determine if they were sentient or not?
honestly, it would likely be a simple pattern-recognition test.
because the trick here is that you are only testing for sentience, not their level of advancement. So it needs to be a test that would stump a beast but that a sentient being, even one living in the stone age, could easily solve.
hence a pattern-recognition test might be a viable option, since it would require the subject of the test to be capable of abstract thought in order to recognize a pattern.
It's literally a series of dots and lines.
Do I care about the child? Would giving the child food benefit me in any way? If yes, I give the child food. If no, then I don't.
Read the thread you fucking retard. It's already been brought up.
My point was that you don't need to remind the human to start from zero at the beginning of each task. A machine lacking that instruction will count 10,000 coins on Monday, then start counting from 10,001 on Tuesday.
If there is I haven't heard of it. Read/Listen to Destination: Void, they explore some of what the author thought would be necessary for a consciousness. I'd say one thing a consciousness definitely needs is some kind of language which can describe abstract concepts; it also needs to perceive the passage of time, it needs to feel both pain and pleasure or some pair of sensations analogous to those, and it needs to be able to analyze and recognize its own thought processes (introspection).
>it needs to be able to analyze and recognize its own thought processes (introspection)
So humans aren't conscious? We can't really tell what goes on in our own minds at any meaningful level. We just produce rationalizations where necessary.
I knew it would come to this.
>It was a pretty fucking amazing book
It's ok. It's just that Watts found ONE idea that makes him feel all tingly and just can't get enough of it, while you have had enough of it by the middle of Blindsight. And then the fucker releases more of the same in Echopraxia.
Also - explaining all of his characters as in-universe autists of various flavors doesn't excuse them from being complete autists.
> surprised more of /tg/ hasn't read it
On the chan I hail from, it's overforced to the point of cancer.
Also pls no more Chinese Room. The problem itself is quite pointless.
I heard his next book wasn't as good as the first one.
And I do think the characters being so fucked up was pretty well explained. I mean, shit, the MC's entire job was to translate the crazy shit all of the scientists say into something that normal people could understand.
"Sentient" literally just means "can experience things, aware" but everyone knows what you meant so I won't split hairs.
Anyway, I don't think I'd really need them to ask any. Just by observing the party using swords to kill things, they've already demonstrated an understanding of basic tool use, and they are clearly wearing artificial coverings and using implements requiring quite a high degree of tool use and craftsmanship.
The question isn't "Are they intelligent," but "How intelligent are they, and are they likely to be good company?"
>I heard his next book wasn't as good as the first one.
>And I do think the characters being so fucked up was pretty well explained
It was. Doesn't make them any less tiresome though.
> the MC's entire job was
SIRI IS THE SINGLE BIGGEST FUCKING AUTIST OF THEM ALL. I'm not talking about the Theseus crew being weird AF, I'm talking about all (trans)human interaction being written as if by someone who has very little experience with human interaction, with weird or normal humans alike.
I guess I felt like that was intentional though. I mean, the entire book starts by explaining that Siri is pretty fucked up in the head, and his ability to interact with others is incredibly fucked up. Sort of the irony of the fact that the man whose job it was to explain the reasoning of the crazies was even more mentally and socially fucked up.
>Sort of the irony
More of a plot hole. Considering [SPOILER] [SPOILER] - there was no point in having him in the crew. Or having a crew at all.
>I guess I felt like that was intentional though
to me it felt like the author purposefully making sure that his plot doesn't allow any sane characters so that he doesn't have to write any.
Overall the book felt like someone took the alien chapters of The Gods Themselves (completely undiluted by any human presence) and tried to make an autism-friendly version of Solaris with that modern-hard-scifi flavor. Kinda interesting, but not that memorable. Utterly killed in some circles by its meme status.
Let me preface this by saying I have autism and marginal conscious awareness of myself. I painstakingly figure out what I am feeling by monitoring my body, which doesn't always work--I am still struggling to gauge exactly how hungry I am at any given moment, for example. Over the course of a couple decades I believe I developed a decent grasp of the severe emotional states my body is exhibiting and how to control them. I struggle to explain this because it seems like English just naturally applies more agency to my actions than I am capable of feeling. For a very long time I struggled with what consciousness was, whether people were in fact conscious, and whether the constrained mental landscape I had qualified for that. My quality of life jumped when I just stopped giving a fuck and began to fashion a semi-normal life out of what I had.
I said all that because it no longer matters to me whether what I'm talking to is "conscious", and really the whole thing just seems overblown now. I talk to my bird all the time, and he understands certain keywords now--"cheerio" and "sleep" are two he definitely grasps. I don't care what sort of internal life he has; he acts content, albeit often bored, and that is sufficient. I don't care if that one jerkoff at work is conscious.
If it acts like there is an internal thought process then I treat it right; getting your panties in a bunch over its self-referential skills is retarded.
No one said the conclusions we draw have to be correct, just that we are capable of making them.
Also we are perfectly capable of recognizing the emotions and reasoning that led us to a certain decision. We might not be able to do it for every action but most people would have no trouble recognizing that, for example, being called a pussy faggot makes them angry and that's why they punched that Billy in the face.
I never said I thought that it had to be perfectly accurate, just that it has to occur at all. I would say that to be conscious you must also at some level recognize that you are conscious, there has to be some basic concept of self and feelings, in short introspection. The mind must recognize its own existence.
I still haven't seen a convincing way for this superbeing to communicate with us, and not simultaneously realise we're sentient.
You fuckers are going to keep arguing about this Chinese room shit until you agree to disagree on whether or not there is such a thing as free will, or whether we simply slavishly follow our programming under the illusion of free will.
>I still haven't seen a convincing way for this superbeing to communicate with us, and not simultaneously realise we're sentient
The way it was in Blindsight and Solaris, where the being in question only questionably fits into our ideas of sentience.