List of arguments against working on AI safety

From Issawiki
 
* [[Doomer argument against AI safety]]: we are so screwed that it's not even worth working on AI safety. A variant combines this with the [[safety by default argument against AI safety]]: across the various worldviews about AI safety, in the more optimistic ones things will very likely go right (or additional effort has no effect on existential probability), and in the more pessimistic ones things will almost surely fail, so on either view there is no point in working on it.
 
* [[Objective morality argument against AI safety]]: all sufficiently intelligent beings converge to some objective morality (either because [[moral realism]] is true, or due to [[acausal trade]] as discussed in "[[The Hour I First Believed]]"), so there is no need to worry about superintelligent AI going against human values (or in other words, if the AI goes against human values, it is because humans are wrong to have those values, so nothing is lost in a cosmic sense). This argument explicitly denies the [[orthogonality thesis]].
* [[Perpetual slow growth argument against AI safety]]: explosive growth (such as via [[recursive self-improvement]] or an [[em economy]]) is not possible, so there is no need to worry about the world changing rapidly once AGI arrives.
 
* [[AI will solve everything argument against AI safety]]
 
* [[Pascal's mugging and AI safety]]: AI safety work is sketchy because it's hoping for a huge payoff that has a very tiny probability, and this kind of reasoning doesn't seem to work well, as demonstrated by the [[Pascal's mugging]] thought experiment. Related to the [[short-term altruist argument against AI safety]].

Revision as of 03:56, 23 November 2021

This is a list of arguments against working on AI safety. Personally, I think the only one that's not totally weak is opportunity cost (in the de dicto sense that it's plausible that a higher-priority cause exists, not in the de re sense that I actually have in mind a concrete higher-priority cause), and to that end I plan to continue reading somewhat widely in search of better cause areas.

Buck lists a few more at https://eaforum.issarice.com/posts/53JxkvQ7RKAJ4nHc4/some-thoughts-on-deference-and-inside-view-models#Proofs_vs_proof_sketches, but I don't think those are such good counter-arguments.

References

  • Roman V. Yampolskiy. "AI Risk Skepticism". 2021. This paper provides a taxonomy of the reasons AI safety skeptics bring up. However, I don't really like the way the arguments are organized in this paper, and many of them are very similar (I think most of them fit under what I call the safety by default argument against AI safety).
  1. https://futureoflife.org/2020/06/15/steven-pinker-and-stuart-russell-on-the-foundations-benefits-and-possible-existential-risk-of-ai/ "Namely, we can’t take into account the fantastically chaotic and unpredictable reactions of humans. And we can’t program a system that has complete knowledge of the physical universe without allowing it to do experiments and acquire empirical knowledge, at a rate determined by the physical world. Exactly the infirmities that prevent us from exploring the entire space of behavior of one of these systems in advance is the reason that it’s not going to be superintelligent in the way that these scenarios outline."