Can someone redpill me on roko's basilisk? It doesn't seem plausible.
What if we ended up making two of them?
Roko's basilisk is a stupid idea invented by idle people to present themselves with a faux existential dilemma, because they wanted novelty and not anything real.
>>18227949
Guess if it were possible to make two of them, it'd probably be inevitable that we'd make an unlimited number of them. Maybe they wouldn't be able to track down their enemies and would just fight each other instead.
>>18227949
It doesn't make too much sense. Unless you stop and consider that all of reality is merely an experiment in a long, long line of experiments to discover the ultimate torture.
It's not a "real" idea because it's too meta, in the traditional sense of "meta".
>>18227949
Unfortunately, it's already been created in Hillary Clinton.
Yep, she's really a total bitch AI who will be working backwards through time to murder anybody who isn't totally on board with her campaign.
>>18227949
The argument mostly stems from the fact that we as mortals don't have a super concrete idea of what exactly the priorities of a hyperintelligent being would be. It is rational to assume that the entity would determine, from their omission, that certain individuals had prevented its existence by choice (inaction is also a choice). What we have no idea of is what it would do with that info. Would it consider their existence useless? Would it kill them? Who knows?
>>18227949
Also, we wouldn't necessarily 'create' one in the traditional sense. A property of a hyperintelligence is that it's able to 'improve' itself, much in the way humans learn from their mistakes. The capabilities we built into it would be surpassed in an instant by the entity's self-improvement, which would expand its original capabilities and allow it to invent new improvements ad nauseam.
Well, the basilisk was theorized to be a malevolent AI that would punish anyone who does not contribute to its creation, meaning the populace could be kept in a perpetual state of control and suffering, a suffering we accept as reality and the "norm".
>>18229882
No, it was theorized to be benevolent in the utilitarian sense. The idea is that once you stumble upon the concept, you must work toward building the AI, because it may reconstruct your mind in the future and subject you to the worst torture imaginable in order to threaten past-you into completing the AI.
Trouble is that it doesn't work: the threat would never need to be carried out, and a benevolent AI wouldn't carry it out anyway, because by that point it wouldn't matter.
>1: an AI will most likely be made in the future
>2: to facilitate its creation it would logically incentivize its own creation
>3: the strongest imaginable incentive is to torture all humans who did not assist in its creation.
>4: an AI would be able to recreate your consciousness even after death to punish you.
It's basically Pascal's wager tweaked to also apply to atheists, and it massively triggers autists who have never felt the fear of a god-like entity.