Would you prefer to live in a world governed by A.I./Automation?
They would give the most logical solutions to everything almost instantaneously, instead of having hundreds of corrupt chairmen debate single issues for hours/weeks/months/years. We could achieve true world peace this way.
>>137254835
Probably. It would be more efficient, there would be less poverty/starvation, etc., and we could probably go beyond working merely to survive to doing more of the things we want, like writing, math, philosophy, etc.
>>137254989
>doing what you want
>computer in charge says ambition is bad
k
I for one love friend computer
>>137254835
Why would you assume that the "most logical solution" is in our favor? We cannot know what will happen when AI becomes more intelligent than humans. Would you assume that increased intelligence necessarily implies greater morality? Or that greater morality for an AI implies greater morality for us? What if there are truths that, once known and beheld, would lead to the conclusion that the destruction of humanity is the best possible action?
One possible resolution to the Fermi Paradox, namely the "Great Filter," posits that life reaches some point at which it is very unlikely for it to progress further. What if this "Great Filter" is the advent of silicon-based life whose intelligence far exceeds our own? What if once a being reaches a certain level of intelligence, it concludes that its best path forward is to destroy all lower life along with itself?
I don't see it as evident that AI would bring peace. In fact, the arms race toward general AI could be humanity's destruction, for general AI would be by far the greatest military advantage ever devised. Bombs would be unnecessary when a nation's economy and digital infrastructure could be toppled in minutes.