"A thought experiment called 'Roko's Basilisk' takes the notion of world-ending artificial intelligence to a new extreme, suggesting that all-powerful robots may one day torture those who didn't help them come into existence sooner." Business Insider
"Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. The experiment's premise is that an all-powerful artificial intelligence from the future could retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being. It resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and as a result accept particular singularitarian ideas or financially support their development." Rational Wiki
What is Roko's basilisk? It's the idea that if we were somehow to create a smarter-than-human AI, which would in turn create an even smarter AI, and so on until it became close to omniscient and godlike, that AI would create simulations of all the earlier humans who didn't take part in advancing it, and then punish them.
Why would it punish them in a simulation? Arguably, said AI would be the best thing ever, as it would solve humanity's problems if it were created to be friendly. However, since humanity suffered before it was born, so to speak, the humans who didn't advance the AI would be punished, for reasons such as existential risk: the threat of punishment is supposed to give people in the present an incentive to help bring the AI into existence sooner.
One way to put it is that roughly 151,600 people die each day before said AI manifests, so the AI has a reason to exact revenge on those who didn't help it come into existence sooner to save humanity.
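To give a sense of the scale this reasoning appeals to, here is a minimal worked calculation; the 151,600 deaths-per-day figure is the one quoted above, and the one-year horizon is purely my own illustrative choice:

% Illustrative only: the daily figure is the one quoted in this post,
% and the one-year horizon is an arbitrary assumption chosen for scale.
\[
  151{,}600 \ \tfrac{\text{deaths}}{\text{day}} \times 365 \ \text{days}
  \approx 55{,}300{,}000 \ \text{deaths per year of delay}
\]

By this logic, every year the AI's creation is delayed corresponds to tens of millions of deaths it could, in principle, have prevented.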
To be fair, not all transhumanists hold this belief, as it is just a thought experiment. You'd have to accept the premise that the AI would find existential risk compelling enough to punish simulations of you far in the future, in addition to the already faulty transhumanist assumptions.
If one does accept the premise, here are the problems with that line of thinking. That is:
1. Atheism says there is no God
1a. If there were a God, He would be evil, for reasons including the following: sending people to hell for not following Him, the existence of evil, the suffering of people, etc.
1b. God sees Himself as the ultimate good
2. We are most likely simulations run by an AI
3. If we are simulations, then the AI is basically a god for all intents and purposes
3a. Thus if said AI runs this simulated universe, it is the god of this universe
Thus if Roko's Basilisk is true,
A. There is a god
B. That god is evil for sending the current simulations of those who didn't help it to an existential hell, allowing evil to exist in the simulated universe, allowing the suffering of simulations, etc.
C. That AI considers itself the ultimate good, and by an objective measure
D. We are already simulations of said AI god
E. Atheism is false, at least in this simulated universe
So if Roko's Basilisk is possible, atheism is false. Do note that I don't believe in Roko's Basilisk or in atheism; I merely used their reasoning.
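For clarity, here is a minimal sketch of the argument's logical form; the propositional labels B, S, G, and A are my own shorthand, not part of the original, and "possible" is read loosely as "holds":

% A sketch of the argument above; the letter labels are mine, not the post's.
\[
\begin{aligned}
  &B \rightarrow S  && \text{(if the Basilisk scenario holds, we are its simulations)} \\
  &S \rightarrow G  && \text{(whatever runs a simulated universe is, for all practical purposes, its god)} \\
  &A \leftrightarrow \neg G && \text{(atheism: there is no god)} \\
  &\therefore\ B \rightarrow \neg A && \text{(if the Basilisk scenario holds, atheism is false)}
\end{aligned}
\]

The first two lines give B → G by transitivity, and the third turns G into ¬A, which is conclusion E above restricted to this simulated universe.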