List of arguments against working on AI safety

This is a '''list of arguments against working on AI safety'''. Personally I think the only one that's not totally weak is opportunity cost (in the ''de dicto'' sense that it's plausible that a higher-priority cause exists, not in the ''de re'' sense that I actually have a concrete higher-priority cause in mind), and because of that I plan to continue reading somewhat widely in search of better [[cause area]]s.
 
* [[Short-term altruist argument against AI safety]]: focusing on long-term issues (e.g. ensuring the survival of humanity over the long term) turns out not to be important, or it turns out to be too difficult to figure out how to affect the long-term future. See also [[Pascal's mugging and AI safety]].
* [[Safety by default argument against AI safety]]: AI will be more or less aligned to human interests by default, possibly by analogy to things like bridges and airplanes (i.e. it's bad if bridges randomly fall down, so engineers work hard by default to ensure bridges are safe), or because the alignment problem is actually very easy (e.g. [[instrumental convergence]] does not hold so AIs will not try to manipulate humans). A special case is the [[AGI skepticism argument against AI safety]].
* [[Pascal's mugging and AI safety]]: AI safety work is sketchy because it hopes for a huge payoff that has a very tiny probability, and this kind of reasoning doesn't seem to work well, as demonstrated by the [[Pascal's mugging]] thought experiment (see the sketch after this list). Related to the [[short-term altruist argument against AI safety]].
* [[Unintended consequences of AI safety advocacy argument against AI safety]]: AI safety is important, but working on it now or advocating for people to work on it has bad effects like more people going into AI capabilities research or people thinking AI safety is full of crackpots.
* [[Opportunity cost argument against AI safety]]: there is some more pressing problem for humanity (e.g. some other x-risk like [[biorisk]]s; basically something that is even more likely to kill us or that will arrive even sooner), or some other intervention like [[values spreading]] that is more cost-effective. This could be true for several reasons: [[AI timelines]] are long, so something else big is likely to happen before then; some other concrete risk looks more promising; or some sort of 'unknown unknowns' argument that there is some [[Cause X]] yet to be discovered. All of the other arguments also agree with the opportunity cost argument in a sense: if you believe AI safety is not a top priority, then you believe there is some other thing of higher priority. So in order for the opportunity cost argument not to collapse into one of the other arguments, one has to grant AI safety at least some importance.
** [[Crowded field argument against AI safety]]: there are already enough people working on it, or there is enough momentum in the field that I personally don't need to enter the field.
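
The Pascal's mugging worry above comes down to naive expected-value arithmetic: a large enough payoff can dominate the calculation no matter how small its probability. Here is a minimal sketch of that structure in Python, with made-up illustrative numbers (they are not estimates of any actual probabilities or payoffs related to AI):

<pre>
# Pascal's mugging structure, with made-up illustrative numbers (not actual
# estimates of anything): a tiny probability times an astronomically large
# payoff still dominates a naive expected-value calculation.

cost_of_paying = 10                  # utility given up now (e.g. handing money to the mugger)
claimed_payoff = 10**100             # astronomically large promised payoff
probability_claim_is_true = 10**-50  # vanishingly small credence that the promise is real

expected_value_of_paying = probability_claim_is_true * claimed_payoff - cost_of_paying
print(expected_value_of_paying > 0)  # True: naive expected-value maximization says to pay
</pre>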
  
 
[[Buck]] lists a few more at https://eaforum.issarice.com/posts/53JxkvQ7RKAJ4nHc4/some-thoughts-on-deference-and-inside-view-models#Proofs_vs_proof_sketches but I don't think those are very good counter-arguments.
 

== References ==

* Roman V. Yampolskiy. "AI Risk Skepticism". 2021. -- This paper provides a taxonomy of reasons that AI safety skeptics bring up. However, I don't really like the way the arguments are organized in this paper, and many of them are very similar (I think most of them fit under what I call the [[safety by default argument against AI safety]]).