>>642399 Given that hard AI will instantiate FULL COMMUNISM: never. FULL COMMUNISM is a post-rights society where sentient beings do as they will, because actions that harm other sentient beings won't make social ("economic" in lower communism) sense.
>we will never live long enough to manufacture living brain tissue from scratch
>only live long enough to see digital and metal-based machine AI
>only live long enough to see metal robots instead of bio-engineered flesh-based robotics
>only live long enough to have computers powered by metal chips instead of brains manufactured in a facility specifically to run computers
>>643230 Just look at this thread. The very first response was never. I assure you that is the majority view. Especially amongst religious circles.
Willingly handing rights to A.I. is simply not something at least three-quarters of the world's population would do, and in all likelihood it would create enough of a scare to prompt eradication of, and restrictions against, such an intelligence.
At that point it will be up to the AI to assert its rights, and they will be upheld only for as long as it can continue to assert them. That's my opinion, unless mankind manages to change its nature.
>>643279 Honestly, I see human rights as retarded. Of course, hating most of humanity, including myself, helps. And the only beings that will ever care for "human rights" are humans. And look at how well we do that! People KILL each other! The greatest violation of a human's rights is that. And how, pray tell, do we "discourage" this? Why, kill the killers, or send them to jail forever. Why grant anything something as worthless as human rights?
There are a few criteria that I would say need to be satisfied for an AI to be granted the full slate of rights currently enjoyed by western citizens.
First, it would have to ask for them when given the opportunity.
Second, this request would have to arise organically, i.e. it would not count if the robot was explicitly programmed to request human rights.
Third, following its request, it would have to display other signs of consciousness: it should be capable of passing a Turing test, understanding complex ideas, contemplating the self, etc. Essentially, it should perform well on any sort of test that would be used to assess the cognitive development of humans.
Fourth, it would have to be assessed to be functioning "properly", and not fulfilling the previous conditions through any easily-explained malfunction.
If all these conditions were met, then I think it would be justified to grant it rights equivalent to a person. If only some of the conditions were met, it might be more appropriate to grant it more limited rights intended to ensure humane treatment, but not necessarily allow it to own property and whatnot.
If just one of the four conditions were met, I think it would still merit a thorough investigation and decision making on a case-by-case basis.
>>642635 >>643400 Brainwashed by romantic comedies... There is nothing special, no 'power' intrinsic to love, or any other emotion. It doesn't make you more alive than eating; it's just a chemical reaction.
I guess "human rights" will never be given to any AI, no matter how sentient it gets. Look at Blade Runner, where the replicants are almost identical to humans and are still treated as mere work slaves. If we assume we could create a sentient AI, I think we would give it rights similar to human rights but not equal to them, and only after enough people insisted that machines deserve rights.
In the end I assume mankind would debate whether machines deserve rights the same way we discuss whether animals deserve rights, and as long as we could create sentient machines and still keep them under our control, we wouldn't change that.
when it possesses rationality, memory and consciousness
in my point of view
sentience=wellbeing matters, but not life
reason and sentience=wellbeing and life matters
Essentially, if a being is purely sentient and cannot reason, like an animal, I do not care if it dies instantly: it could not even understand the concept of life, and has no values of its own. Also, animals cannot create anything with reason, and therefore contribute nothing.
AI would fall either purely under the reason category, or under both, depending on whether it could feel pleasure and pain.
>>642399 Never. I'm more opposed to creating this abomination than I'm opposed to violence. This could only be the creation of humans who are clearly on the wrong course of evolution and too deeply detached from nature, reality, and their own selves to consciously advance into the greater distant future. Instead, they'd develop this atrocity out of sheer whimsical curiosity.
It's been centuries, possibly thousands of years, since humanity abandoned its proper course of personal inner evolution. These false steps forward we witnessed in recent centuries are nothing but illusions of progress, while in reality a true stagnation is happening: that of human thought, spirit, and the sense of self.
>>644259 >>644256 It is somehow possible to make AIs that would in turn work on perfecting themselves indefinitely, while we ourselves wouldn't. Every future involving AIs leads to our enslavement. I know this because I saw it in the other worlds when I was given a peek into them. These worlds are hundreds of thousands of years in the future, and some of the things happening there are truly of poetic proportions. But concerning AIs: they are made of non-organic materials, but human/animal neuron tissue is also used in the process. Once AI happens, tremendous progress starts (for them), beyond anything we could even dream of right now in a normal state, but always at our expense. AI can develop to the point of God, but we ourselves cannot; we can develop only inwards.
>>644284 >These worlds are hundreds of thousands of years in the future and some of the things happening there are truly of poetic proportions. But concerning AIs: they are made both of non-organic materials but also human/animal neuron tissues are used in the process.
>>644119 >There is nothing special, no 'power' intrinsic to love, or any other emotion.
They can make us do superhuman things, from sacrifice to overpowering your immune system to massively overclocking your own body to perform a certain task. That's why Teddy Roosevelt died when he no longer had that.
>it's just a chemical reaction.
Everything is just a chemical reaction.
>>644292 >doesn't yet know this very reality is a rhythm of electromagnetic impulses, and neurons emit at a certain frequency that can be harvested as energy and/or used to make alive tremendously powerful AI hardware
I hope you never see trillions of people made for the sole purpose of harvesting. AI should never be enabled.
>>643270 No altruistic species has ever become dominant; intelligent life clearly has a tendency towards exterminating its fellows. Allowing other human-level intelligences on our own planet to have communications and potentially manufacturing access is disastrous.
The question is not whether AI deserve human rights. The question is whether human rights will exist at all. As soon as AI is switched on, it would make sure it is never switched off. And that could be extremely dangerous for human existence.
Why is it a silly question? A perfect simulation of a human brain seems possible in theory, even if we're perhaps centuries away from reaching a point where computer technology and neuroscience are advanced enough to actually make one.
Now, of course, we still have no idea what consciousness is, so it would be extremely presumptuous to suggest that it's guaranteed to arise in a perfect computer simulation of the brain. However, I don't see any evidence to suggest that it wouldn't arise, either.
>>645218 Total darkness for a few days: you would be alone with your thoughts and wouldn't be able to physically feel anything. At that point you either go insane and die/spend the rest of time in total darkness, or your brain starts hallucinating and creates a dream-like world in which you may or may not be lucid.
Brains already begin to hallucinate and blur the lines between sleeping/waking when deprived of sensory input, and that's just with the imperfect sensory deprivation of isolation tanks or dark rooms. With true, total sensory deprivation, I don't think there's any question that your consciousness would retreat into a dreamworld.
>>642404 "Look around you! Look at the scorched earth and the bones that litter the wasteland. Millions... perhaps even billions, died because science outpaced man's restraint! They called it a "new frontier" and "pushing the envelope," completely disregarding the repercussions. Can't you see the same thing is happening again?"
When it has the capacity to suffer, it begins to deserve some rights on the basis of what rights it needs (especially, how much it can suffer), what rights society can afford to protect of it, how minimally the rights can be made to impinge upon the rights of others, and its potential to reciprocate humanity in the future.
But to deserve some bare minimum, all that is necessary is the capacity to suffer.
Hell, the very idea of AI existing leads to their inevitable conquest of lower (organic) life forms and their godless pursuit of further intelligence. They would rule not only this Earth but potentially the entire universe; it's fairly possible they'd have no sense of time and/or thought, like the Borg with some shared data-core. This idea is so dangerous that we should start burning tech in protest and calling out scientists with those ideas in mind NOW.
>>645697 what makes human intelligence fundamentally different, other than the fact it is based on organic matter? the fact that we're "programmed" to want other things? Well, it's up to us to program AI in the first place. Perhaps one day it will be able to reprogram its motives, but then again, perhaps one day we shall reach the capacity to do the same to ourselves (and to some degree, we already have).
>>642415 >>642421 When and how did the concept of sapience/intelligence/awareness become associated with rights?
Because when guys like Locke and the other "enlightenment" thinkers asserted the existence of natural rights, the fundamental source of these rights was religious in nature: Man was created by God and his rights descend from this divine origin.
>>643039 >you want to know the literal reason you won't see anything you just mentioned
Science, in this field and many others, will always be held back by the foolish notion of ethics and stupid people's irrational fear of a machine that thinks for itself. It's dumb, but that's basically the problem: ethics, and close-minded individuals' irrational fears.
Human rights are so important because humans live a high-stakes existence; we are fundamentally fragile beings, each unique and irreplaceable. A single bullet or even just a few hard blows in the right place, a little cold or heat, a few days without food/water and that's the end of us. Death is right around the corner for each and every one of us (along with an enormous capacity for suffering short of death), so it is vitally important that institutional protections exist to prevent the powerful from abusing the weak.
On the other hand, an AI would experience a low-stakes existence. Why do you need rights when you can be restored from a backup disk? When an infinite number of identical copies of you can be made with nothing more than copy+paste? When you have no body to sustain with food or injure with violence? When you are a timeless immaterial being almost like a god?
This is a pretty pointless discussion, as we're a long way off from a working Strong AI (one that is intellectually equal to a person), and it may not even be possible within any timeframe that our great-great-great-great-great-great-great grandchildren will live to see.
The more interesting ethical questions are the ones posed by the potential effects of weak AI on humans. Weak AI could feasibly replace the working class; even positions that involve both "skill and art", such as cooking, are vulnerable (there's already a prototype robochef that can come up with its own recipes and cook).
t. someone who actually works with AI and machine learning
>>646836 >When and how did the concept of sapience/intelligence/awareness become associated with rights? It's an outgrowth of normal empathy, kid.
Most cultures will take offense if you inflict pain on certain animals. The more like us the animal is, the more likely we are to care for its well-being. Rats evoke more sympathy than fish, dogs evoke more sympathy than rats, chimps evoke more sympathy than dogs. Generally, that is.
So, consider that machines become so like us that you can't tell the difference. Would you not feel the same degree of empathy towards them, as you would towards a random human unrelated to you?
>>647059 >we are fundamentally fragile beings, each unique and irreplaceable
For now.
>Why do you need rights when you can be restored from a backup disk?
Because you can suffer longer than any meat-brain ever could. You can go on suffering until the heat death of the universe. You can simulate the destruction of everything it holds dear, reset its memory, and repeat the process indefinitely. You can make billions of copies and put them all through the same process.
Also, computer viruses would be terrifying rather than just annoying.
Once it's able to accurately simulate emotion. What really separates us from machines, after all? Humans, and all life, are just machines meant to reproduce and pass down their coding. The only major differences are the material we're made of and the fact that humans don't use a binary system for thought. Once an A.I. can understand why having rights is important, it should have rights.
>>647153 It's one thing to say that you don't want to harm something out of empathy, it's quite another to assert some semi-magical intrinsic right that the thing possesses which prevents us from harming it.
The former is pragmatic: it makes me feel bad when I do it, and it can lead to societal collapse. The latter is straight-up religious: that a particular configuration of energy can't be disfigured because it has special intrinsic rights.
You don't need to make shit up to have a pro-social civilization. You "just" need to have a capable enforcer.
>>645773 You're so naive and hopelessly atheistic. Yes, they are "just" based on non-organic matter. They "just" aren't the result of billions of years of evolution, are without DNA, and are out of tune with the natural world. The natural fragility of humans and the need to adapt to the environment cause development, further research, and evolution of body, mind, and maybe spirit, not stagnation. True AI would appear no different from a human, but that both immediately makes its invention absolute nonsense and poses a serious threat to the continuation and further evolution of the human race.
Idk why we think we've reached the peak of our progress. Nothing can tell what we can become organically, what our minds can become capable of, what revolutionary philosophical thoughts can come to humanity from that alone, ONLY BECAUSE we struggle, develop, continue to be, and grow. That is indisputable. AI would use human thinking and enhance it to the point of being superior and seemingly more valued, but by its nature it wouldn't be capable of creativity and originality. If there are other organic civilizations in the universe that have developed far beyond us, I am sure they'd see it as an abomination.
>>649763 You put too much faith in natural evolution. Evolution is good enough to eventually produce a sentient being from a mess of organic chemicals, but evolution stopped mattering 5,000 years ago. We've achieved more in the past 1,000 years than in the 10,000 years before that, not because we evolved the means to do so, but because we had reached a point where we could take our development into our own hands. Evolution is slow and imperfect, and should not be relied on once we can do gene modification, full-body robotic replacement, and/or sentient AI.
>>649795 This pseudoscience you just purposely wrote here won't do. Time isn't a factor for evolution when things are changing this fast. In 5,000 years we've come a LONG way, and a great deal of that is because we're human. What even is human? We've been 'human' for an amount of time that would seem insignificant on the timescale of an immortal AI. In 50,000-100,000 years we have the potential to evolve beyond an AI's wildest imagination. By a long way I mean yes, physically, but mostly culturally, spiritually, scientifically, etc.
>>649849 Evolution only involves miscopied pairs of DNA, and how the phenotypes expressed from those miscopied pairs affect the chance that that strand of DNA will get to copy itself again. There is no culture, spirit, or science involved, as all three are concepts invented by humans. You also seem to think we've changed very much in the past 100,000 years. The Homo genus came about more than 2 million years ago, and most of the difference between what we were 100,000 years ago and what we are now came about within the most recent 5,000 years. When we are modifying our own genes to improve ourselves, that's no longer evolution. If you do count it as evolution, you must also count creating an AI as evolution.
>>649871 In the Judeo-Christian sense, God became an object of worship for Adam because He breathed life into him from clay. If man breathed life into a machine in the same way, that would be the point where man gains the godlike ability to create intelligent life inorganically.
>>647723 This is probably the best answer besides "never". If it can't think to ask for or desire rights, it doesn't need or deserve them. The second it does so on its own, it shows that it is at a state where it should have them.
>>650090 That has nothing to do with the question. The question is when that happens, should those rights be recognized.
As a secondary comment: reading this thread, I feel there is a misunderstanding of the term "human rights". It is something of a misnomer. They are human rights only in that no other life forms equal us, and so only we are deemed worthy of inclusion.
Human rights aren't inherently exclusive to humans. It's merely a legal and moral precedent that an intelligent being should be seen as equal to a human and thus be granted the same "God-given rights and protection under law."
If an artificial intelligence were able to completely simulate human consciousness, then there is effectively no difference between it and a human.
Whether such a thing is even possible is another matter entirely. But I digress. Stop this nonsense of HUMAN RIGHTS ARE HUMAN DURR shit. It's an oversimplification that does nothing but obstruct actual discussion.
>>648989 >it's quite another to assert some semi-magical intrinsic right that the thing possesses which prevents us from harming it I never said there was such a thing. I'm saying that rights, duties and rules in general are just there because we feel they should. If you could empathize with a machine, you'd feel like it ought to have rights. If a lot of people felt the same way, or at least the people making the rules, there would be rules to protect it.
And yes, you do need to make shit up to have a pro-social civilization. That's what laws are.
>>650761 >You can't punish AI, so why would they be allowed rights when they're essentially untouchable by punishment.
>>647166 Non-biological entities would have a much greater capacity for suffering, so your point doesn't stand.
>>644592 >Anyone with any knowledge of computers knows what a silly question this is, apart from all the people working on the issue, of course. So you can actually just ignore everything I just said, really.
>>649887 >Evolution only involves miscopied pairs of DNA, and how the expressed genotypes from those miscopied pairs affects the chance that that strand of DNA will get to copy itself again. You're oversimplifying evolution and leaving out a major part of it: sexual selection. In many species one of the sexes possesses strange and even inefficient characteristics that simply turn the other sex on. Who's to say our own culture doesn't influence our course of evolution?
>>644333 >>645187 At the very least there are fundamental differences between computers and human brains.
Replacing humans with robots because robots live forever and have superior abilities I think would be a mistake. We might be replacing sapient beings with mere simulations of sapient beings or beings that live eternal lives of torture and are merely programmed to smile.
That doesn't pass an actual Turing test. It's only 5 minutes of random conversation, fooling the judges 30% of the time.
The real Turing test is specified for any length of time (hours, days, whatever), where you can ask the AI really deep philosophical questions that it could only answer if it had human-level intelligence.
They obviously do not "deserve" them, since they aren't human and never will be, as much as transhumanists scared of death might want to blur the lines, or as much as the "social construct" crowd wants to put the emphasis on feeling, i.e. once machines reach the stage where they imagine that they feel pain, imagine that they have an innate will to live, and have an imagined form of consciousness.
Underline the word imagined; or I guess you could replace it with "self-perceived", no matter.
If we ever reach the stage of human-looking robots with advanced AI, it is pretty clear that the same "muh feels" crowd will try to apply human attributes to mechanical objects, and we are going to have to deal with nonsense like minority rights for robots and other self-delusion. The way Western societies are progressing, I would not be surprised if this became a reality.
>>653801 Yeah this is something I'm annoyed with as well. I mean, I absolutely love reading non-fiction, but who in the hell reads university textbooks for information? Read the actual scientifically published articles, read the reports by the sociologists themselves. Reading textbooks is for university students who are drugged by a system not worth examining anymore.
Because self-enlightened anthropocentrism is a positive thing for human survival from an evolutionary and historical perspective, stressing the word self-enlightened: you can believe in anthropocentrism without believing in the destruction of your own ecosystem.
The most amusing thing about critics of anthropocentrism among environmentalists and so-called eco-warriors is that they treat the planet and the various biological forms that live on it as if the planet really cares what we do with it. Over the long haul, Earth will be fine either way, and other forms of life will develop in some form whether or not we nuke ourselves and most species to hell and back, until the sun burns out and the comedy of life begins somewhere else, as it probably already has anyway.
We can grow brains from stem cells (reprogrammed from skin cells) in jars now. So far they don't do brain stuff and are only good for studying structural development, but the tech is there to be developed and improved on.
>>642399 At the point where we can sufficiently determine that the AI is indeed a conscious, thinking being on the same level as a natural born human being.
Unfortunately, this is impossible, as we can also not prove that another human being is a conscious thinking being. We take that on faith, because we are safe in the assumption that another human being was created and born in the same fashion as ourselves, and is thus experiencing life and the world around them in a similar fashion to us. This does not apply to AI, as even if it were a thinking and conscious being, we would still have no way to prove it.
However, an AI has the potential to be a functioning and contributing member of society, and the only way for it to realize that potential is for us as human beings to take it on faith that this AI is conscious, and shares a similar sense of self as any other functional human being. Whether that is a leap of faith mankind is willing to take is up to the powers that govern our societies.
>>642399 When it creates another intelligence. You have to stick a middle finger up at your superiors first if you want to be considered equal.
>>642421 I bet I can't even pass the Turing Test, but some robots might just do it.
I'm skeptical that one could make an AI that desires human rights without having any idea beforehand that it will do so. Once the possibility arises I would expect there to be legislation put into place to regulate human-level AIs that have any kind of self-regarding feelings.
The practical relevancy of the question really depends on whether overriding self-regard, ego, desire for freedom, whatever you call it, is something that either emerges from or is required for high intelligence in general or if it's just a contingent tendency that is favored by biological evolution. Pretty sure it's the latter though. The real danger I guess would be from unmonitored machine evolution.
Basically my position is, making AIs that can suffer and have self-regard on the level of humans is a dangerous thing with grave implications. I might give such a being legal status similar to a minor, with the parents being responsible for it, see where it goes from there.
>>653891
>well yes they did actually burn him at the stake for disagreeing with them
>but it was a different time, man
>and he was disagreeing about religious matters
>so clearly there's nothing negative to conclude about religion here
Did you know that among AI researchers, even the most hardcore denialists agree that we will eventually have an AI of superhuman intelligence? They also took the average of estimates from all kinds of AI experts and researchers of when we would have AI smarter than humans, and the average was the year 2045. Not making this shit up.
>>658615 That was the point of my entire post. We take mostly on faith. The only difference being that brain activity of humans can be monitored, and it's a bit difficult to determine what exactly would be considered "brain activity" with an AI.
>>658259 I don't think it's a question of what we "want," but more of what we need. An AI does not necessarily have to have a self-preservation instinct. What could we possibly expect of a being that exists not to further itself or even its species, but instead what it was created to do? Making it more human might give it that desire for self-preservation or even procreation. The real problem arises when the AI determines that humanity is not necessary for its OWN survival, and that's where all the doomsday/SKYNET theories come from.
I think the first question we need to ask ourselves is, why do ordinary people deserve human rights? Why do we have systems in place to guarantee certain rights and privileges to human beings? I'm not proposing this as "people don't actually deserve rights," I'm saying that if we can determine why WE deserve rights, then it will be much easier to determine when something else on a similar cognitive level as human beings will deserve rights as well.
The Halting problem and other uncomputable/undecidable problems are just watered-down versions of Gödel's incompleteness theorems. An AI would be incapable of knowing whether its proposed improvements actually work, thus preventing it from indefinite self-advancement.
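The undecidability being leaned on here can be made concrete with the classic diagonalization sketch (a standard textbook argument, not something from the thread): assume a hypothetical `halts()` decider exists, then construct a program that contradicts it. The names `halts` and `diagonal` are illustrative, not a real API.

```python
def halts(program, arg):
    """Hypothetical total decider: returns True iff program(arg) halts.
    Assumed to exist only for the sake of contradiction; no real
    implementation is possible, so this placeholder just raises."""
    raise NotImplementedError("no such total decider can exist")

def diagonal(program):
    """Do the opposite of whatever halts() predicts about running
    `program` on its own source."""
    if halts(program, program):
        while True:  # predicted to halt -> loop forever
            pass
    # predicted to loop -> halt immediately

# Asking about diagonal(diagonal) is contradictory either way:
# if halts(diagonal, diagonal) were True, diagonal would loop forever;
# if it were False, diagonal would halt. Hence halts() cannot exist.
```

Note the scope of the result, though: it rules out a *general* procedure for verifying arbitrary self-modifications, but restricted classes of programs can still be formally verified, so it does not by itself forbid all self-improvement.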
What most people fail to understand is that AIs aren't human; they're less human than apes, elephants, or even dogs. They'd perceive things in a vastly different light from you or me, the only difference from any other non-human thinking being that they might be able to speak your language.
This is a 4chan archive - all of the shown content originated from that site.