Pascal's mugging and AI safety

From Issawiki
 

Latest revision as of 22:16, 17 November 2020

Existential risk reduction via work on AI safety has occasionally been compared to Pascal's mugging. Critics argue that working on AI safety has only a very small probability of a very big payoff, which makes the expected-value case for it look suspicious.

The standard resolution seems to be:

  • Point out that there are different levels of what "very small probability" means. Some people think 1% is very small, whereas in Pascal's mugging we are dealing with astronomically small probabilities such as 1/3^^^3 (where 3^^^3 uses Knuth's up-arrow notation).
  • Argue that for probabilities like 1%, standard expected value calculations work fine.
  • Argue that the probability of reducing existential risk through AI safety work is more like a 1% chance than like an astronomically small chance.
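The contrast the list above draws can be made concrete with a toy expected-value calculation. This is only an illustrative sketch: the payoff, cost, and the stand-in probability for the Pascal's-mugging case are all assumed numbers, not figures from the article (3^^^3 is far too large to represent directly, so 1e-100 serves purely as a proxy for "astronomically small").

```python
def expected_value(probability: float, payoff: float) -> float:
    """Expected payoff of an action that succeeds with `probability`."""
    return probability * payoff

# Assumed, arbitrary units: value of averting the catastrophe, and the
# cost of the intervention.
payoff = 1e12
cost = 1e6

# Case 1: a "very small" probability in the colloquial sense (1%).
ev_one_percent = expected_value(0.01, payoff)

# Case 2: an astronomically small probability. 1e-100 is an illustrative
# stand-in for something like 1/3^^^3, which no float can hold.
ev_astronomical = expected_value(1e-100, payoff)

print(ev_one_percent > cost)   # 1e10 > 1e6: the 1% case clears the bar
print(ev_astronomical > cost)  # 1e-88 > 1e6 is False: EV reasoning collapses
```

The point of the resolution above is that AI safety work is claimed to live in the first regime, where ordinary expected-value calculations behave sensibly, not the second.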

Notably, Eliezer Yudkowsky has consistently argued against paying up in Pascal's mugging.

See, e.g., [1] [2].

References

  1. https://forum.effectivealtruism.org/posts/zjbxdJbTTmTvrWAX9/tiny-probabilities-of-vast-utilities-concluding-arguments#The__claimed__probabilities_aren_t_that_small
  2. https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/