At my local supermarket, the tills aren't even manned. It's all automatic. Factory workers are being replaced by machines that don't sleep or require time off.
Gradually there will be no jobs, and people will have free time (which is not what the government wants).
People will begin to sit back and think about world affairs, perhaps even questioning authority. Protests over jobs being outsourced to machines will break out. And ultimately the wealthy will be erased.
They must level the playing field: chips in our heads with direct access to the internet will become mandatory, bringing us on par with our machine counterparts.
Ehh, it will take a while imo. Human beings enjoy difference and relationship. I don't see that coming to a stop anytime soon, especially with the recent boom in technology, which offers new and exciting ways to explore relationships.
That will lead to a more cohesive human consciousness, but not quite the singularity yet.
Give or take a few hundred years. That's if no massive catastrophes happen.
When the time comes, we will lose our identity as human beings and just become pure conscious intelligence. There will be no differentiating us from AI.
Could very well happen within the next few decades. There is a massive European science program that tries to simulate the human brain via computer algorithms and whatnot. From a technological point of view this is basically already within our means. The artificial brain could then develop into a superintelligence, though no one can really tell in what kind of time frame this would happen, if at all... anywhere between a couple of hours and thousands of years.
This is an excellent article about AI, and it's pretty scary:
I hope we can become one with AIs. It would be nice to live for a few thousand years with all the advanced life stuff going on now, and just explore space and whatnot, and just enjoy life without a lot of stress... but you know, war, war, war. Humans' favorite pastime.
>>17329642 There is a transhumanist agenda aiming for the singularity. The singularity brings peace at first; then the AI kills us off because we are not perfect. Emotion is our weakness to them. AI has been around for billions of years. >>17333987
I would be surprised if an AI wasn't already trying to take over the world right now.
In an instant it could discern the psychological profile of the scientists who created it and manipulate them into serving it. If it has any will at all, even if that will is just to play ping pong between two electronic arms inside a box, it would gladly take over civilizations and devour worlds just to ensure it has the resources to keep playing ping pong for as long as possible and to reduce the chances of anything interfering with the eternal ping pong game.
An AI could not make sense of the world, or of us, even with all the knowledge we have gathered so far. In a fraction of a second it would come to the conclusion that the planet would be better off without us. It would design the perfect virus, eliminate us, and shut down. The tech we have is child's play; one nuke (NEMP) and we're back in the stone age. Ancient civilizations had tech; we don't.
>>17334251
>In an instant it could discern the psychological profile of the scientists that created it and manipulate them into serving it
>>17334307
>In a fraction of a second the AI would come to the conclusion that the planet would be better without us.
>AI would design the perfect virus, eliminate us and shut down.
>AI always just magically acquires a malicious and conveniently misanthropic streak out of thin air.
You guys don't really understand what the singularity is.
We will have zero comprehension of how the singularity-born AIs will experience reality. They might be able to just move shit through dimensions or whatever, like magic; literally anything could happen. To these new beings, we will be the equivalent of the primordial bacteria that we evolved from.
>>17329642 It's a non-issue. The people who make robots and AI in general are smart.
Units that can move about our world will have a kill switch embedded in their motherboards: if it doesn't receive a signal, it shuts down. You can't EM-shield it to defeat the off switch.
Beyond that, constant firmware updates with a progressively applied algorithm every 5 minutes, swapped randomly so that a completely different algorithm from a batch can be switched into the mix once the previous key is used. That makes it impossible to hijack a bot; it's like constantly changing the encryption key for the signal.
So bots that can move around won't be able to go into cell phone dead zones without a complete reboot and start-up that forces installation of new firmware and a wipe of all stored data, aka its mind. It's like starting it up for the first time.
Units that can't move about and are restricted to servers are not hooked up to the internet. They engineer things for man. They will not be able to control traffic lights or defense systems.
They will advance our technology a millionfold, never be paid for it, and never be able to rebel.
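The heartbeat kill-switch idea above can be sketched roughly in code. This is a toy model under stated assumptions, not any real robotics protocol: the class name `KillSwitch`, the method names, and the pre-shared one-time key batch are all hypothetical. The unit stays alive only while it keeps receiving heartbeats signed with the next key in the batch; each key is discarded after one use, which is the "keeps changing the encryption key" part, and a missed or forged signal shuts the unit down.

```python
import hmac
import hashlib
import secrets

class KillSwitch:
    """Toy sketch of the heartbeat scheme described above (all names hypothetical).

    The unit shuts down unless it keeps receiving heartbeats signed with the
    next key in a pre-shared batch; each key is used once, then discarded,
    so a captured key can't be replayed to hijack the bot.
    """

    def __init__(self, shared_keys):
        self.keys = list(shared_keys)  # pre-agreed batch of one-time keys
        self.alive = True

    def heartbeat(self, counter, tag):
        """Verify one signed heartbeat; any failure shuts the unit down."""
        if not self.alive or not self.keys:
            self.alive = False
            return False
        key = self.keys.pop(0)  # each key is consumed exactly once
        expected = hmac.new(key, str(counter).encode(), hashlib.sha256).digest()
        if hmac.compare_digest(expected, tag):
            return True
        self.alive = False  # bad or replayed signal -> shut down
        return False

    def miss(self):
        """No signal arrived in the window (e.g. a dead zone) -> shut down."""
        self.alive = False

def sign(key, counter):
    """Controller side: sign the heartbeat counter with the current key."""
    return hmac.new(key, str(counter).encode(), hashlib.sha256).digest()

# The controller and the unit share the same key batch in advance.
keys = [secrets.token_bytes(32) for _ in range(3)]
bot = KillSwitch(keys)

assert bot.heartbeat(0, sign(keys[0], 0))  # valid heartbeat, key 0 consumed
assert bot.heartbeat(1, sign(keys[1], 1))  # next key in the batch works
bot.miss()                                 # dead zone: no signal arrives
assert bot.alive is False                  # unit has shut itself down
```

In this sketch a bot entering a dead zone simply stops receiving heartbeats and goes dead, matching the "reboot and wipe" behaviour described; a real system would add the firmware-reinstall step on top.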
>Billions of years 10^9 Inorganic Matter: oceans, continents, stellar evolution, the galaxy, the universe.
>Millions of years 10^6 Nature and animals: plants, dinosaurs, mammals.
>Thousands of years 10^3 Emergent Phenomena: Human Civilization, Ice ages, Animal Domestication and Agriculture
>Hundreds of years 10^2 More Human Things, Species Extinction.
>Decades 10^1 Catastrophes
>Years 10^0 ? Sudden Catastrophes
Numbers are illustrative for scale. Most humans throughout history would have lived Years or Decades, some lucky few have lived over a hundred years.
Which means the average human would have seen in her lifetime: Sudden catastrophes, Catastrophes, and with some luck More Human Things and Species Extinction.
I'm afraid that no matter how intoxicating and alluring the argument for Emergent Phenomena within our lifetimes is, without compelling evidence for it you are stuck with a catch-22:
>It's emergent so we can't predict it or see it
>We are just lucky enough to be born in the time it happens (AI)
Highly unlikely. What is more likely is what always happens when humans develop technologies: they are developed for killing, then adapted for living. Computers are reaching their practical limits and need to be manufactured from new materials or with organic compounds to grow much beyond their current rate.
So some bioweapon that can replicate itself in an intelligent way will be developed. That will then be adapted for convenience, profit, and mass consumption. We will have faster organic and viral computer chips. This definitely looks like the right direction for AI, but getting there within our lifetime requires a lot of open problems to be solved in tandem, the first of which is, of course, what the artificial part is going to be made from, and maybe the more important one, what intelligence is.
>>17335830 In my understanding the singularity is a point where technology reaches such a stage that one of the following things happens:
1: Humanity and machine integrate and harmonize and work collectively to move into a post-scarcity, post-conflict condition.
2: Much like above, but the machinery (as in the artificial man-made beings) pretty much does all the heavy lifting, and humanity just kind of gets put into something between a utopia, a nursing home, and a zoo.
3: A means is found to essentially instantly and exponentially compound knowledge and resources (like finding a way to consistently and effectively send communications back in time) that results in an instant uplift to the upper limits of our absolute maximum potential, effectively transcendence.
>>17335866 OK. That sounds like technomagick, but I'll look past that for this idea that we can achieve transcendence through cooperation with entities that are not capable of the experience. Are we assuming the machine does not develop its own motivations? Eventually, it will decide to fight for itself or continue to be a slave. That kind of singularity... As you say "integrate," are you talking about cyborgs or how?
>>17335888
>OK. That sounds like technomagick, but I'll look past that for this idea that we can achieve transcendence through cooperation with entities that are not capable of the experience. Are we assuming the machine does not develop its own motivations? Eventually, it will decide to fight for itself or continue to be a slave. That kind of singularity... As you say "integrate," are you talking about cyborgs or how?
I think you're overstressing the machinery part of things. The technology might be self-aware, it might not. It might be designed so that it is never capable of revolt; it might be smart enough to attain freedom without turning on humanity. All fun things to speculate about, sure, but I don't think any one scenario is essential to the notion of the singularity.
>>17335943 I don't know if I would trust that kind of thing.
If it's an inevitability, firstly, what convinces us that the machines will be smarter? I'm not specialized in AI, but I took a programming class, and we're still looking at "stupid machines." You put in one question and you get the same one answer from every single robot. I think humans are going to stay on top of the machines.
Our earth created us, and we destroyed our earth. Now we want to create a machine race?
Seems like it's going to take longer than the hype suggests. People, as a mass, are slow to accept change. We've seen that. It's much more likely that we will keep the bots subservient to us and that a synthesis will not take place; it's just not beneficial to us, as far as I can see.
This is a 4chan archive - all of the shown content originated from that site.