List of arguments against working on AI safety

This is a list of arguments against working on AI safety. Personally, I think the only one that is not totally weak is the opportunity cost argument, and for that reason I plan to continue reading somewhat widely in search of better cause areas.
 
* [[Opportunity cost argument against AI safety]]: there is some more pressing problem for humanity (e.g. some other x-risk like [[biorisk]]s; basically something that is even more likely to kill us or that arrives even sooner), or maybe some other intervention like [[values spreading]] that is more cost-effective. This could be true for several reasons: [[AI timelines]] are long, so something else big is likely to happen before then; some other concrete risk looks more promising; or some sort of 'unknown unknowns' argument that there is some [[Cause X]] that is yet to be discovered.
 
* [[Short-term altruist argument against AI safety]]: focusing on long-term issues (e.g. ensuring the survival of humanity over the long term) turns out not to be important, or it turns out to be too difficult to figure out how to affect the long-term future.
 
* [[Safety by default argument against AI safety]]: AI will be more or less aligned to human interests by default, possibly by analogy to things like bridges and airplanes (i.e. it's bad if bridges randomly fall down, so engineers work hard by default to ensure bridges are safe).
 
* [[Objective morality argument against AI safety]]: all sufficiently intelligent beings converge to some objective morality (either because [[moral realism]] is true, or due to [[acausal trade]] as discussed in "[[The Hour I First Believed]]"), so there is no need to worry about superintelligent AI going against human values.
 
* [[Slow growth argument against AI safety]]: explosive growth (such as via [[recursive self-improvement]] or an [[em economy]]) is not possible, so there is no need to worry about the world changing rapidly once AGI arrives.
 
* [[Pascal's mugging and AI safety]]: AI safety work is sketchy because it's hoping for a huge payoff that has a very tiny probability, and this kind of reasoning doesn't seem to work well, as demonstrated by the [[Pascal's mugging]] thought experiment.
 
* [[Unintended consequences of AI safety advocacy argument against AI safety]]
 
[[Buck]] lists a few more at https://eaforum.issarice.com/posts/53JxkvQ7RKAJ4nHc4/some-thoughts-on-deference-and-inside-view-models#Proofs_vs_proof_sketches but I don't think those are such good counter-arguments.
 
[[Category:AI safety]]
 