List of arguments against working on AI safety

This is a list of arguments against working on AI safety. Personally, I think the only one that isn't totally weak is opportunity cost, and for that reason I plan to continue reading somewhat widely in search of better cause areas.
* [[Slow growth argument against AI safety]]: explosive growth (such as via [[recursive self-improvement]] or an [[em economy]]) is not possible, so there is no need to worry about the world changing rapidly once AGI arrives.
 
* [[AI will solve everything argument against AI safety]]
 
* [[Pascal's mugging and AI safety]]: AI safety work is sketchy because it hopes for a huge payoff that has a very tiny probability, and this kind of reasoning doesn't seem to work well, as demonstrated by the [[Pascal's mugging]] thought experiment (a toy version of the calculation is sketched after this list).

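To see why this reasoning pattern is suspect, here is a toy version of the expected-value calculation behind the mugging (the specific numbers are made up purely for illustration):

<math>\mathbb{E}[\text{pay the mugger}] = p \cdot V = 10^{-50} \cdot 10^{100} \text{ utils} = 10^{50} \text{ utils} > \mathbb{E}[\text{refuse}] = 0</math>

A naive expected-value maximizer hands over the wallet no matter how implausible the offer, because the claimed payoff <math>V</math> can always be made large enough to swamp any skepticism encoded in <math>p</math>; the worry is that justifying AI safety work by a tiny probability of an astronomical payoff has the same shape.
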
[[Category:AI safety]]