



“I wish I had never learned about any of these ideas.”

Roko's basilisk is a thought experiment about the potential risks involved in developing artificial intelligence. Its conclusion is that an all-powerful artificial intelligence from the future might retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being. It is named after the member of the rationalist community LessWrong who first publicly described it, though he did not originate it or the underlying ideas.

Despite widespread incredulity, this argument is taken quite seriously by some people, primarily some denizens of LessWrong. The basilisk resembles a futurist version of Pascal's wager, in that it suggests people should weigh possible punishment versus reward and, as a result, accept particular singularitarian ideas or financially support their development. While neither LessWrong nor its founder Eliezer Yudkowsky advocates the basilisk as true, they do advocate almost all of the premises that add up to it. Roko's posited solution to this quandary is to buy a lottery ticket, because you'll win in some quantum branch.

“If there's one thing we can deduce about the motives of future superintelligences, it's that they simulate people who talk about Roko's Basilisk and condemn them to an eternity of forum posts about Roko's Basilisk.”

Roko's basilisk rests on a stack of several other propositions, none of them especially robust. The core claim is that a hypothetical, but inevitable, singular ultimate superintelligence may punish those who fail to help it or help create it. Why would it do this? Because, the theory goes, one of its objectives would be to prevent existential risk, and it could do that most effectively not merely by preventing existential risk in its present, but also by "reaching back" into its past to punish people who weren't MIRI-style effective altruists. Thus this is not necessarily a straightforward "serve the AI or you will go to hell": the AI and the person punished need have no causal interaction, and the punished individual may have died decades or centuries earlier.
