List of arguments against working on AI safety

This is a '''list of arguments against working on AI safety'''. Personally, I think the only one that isn't totally weak is opportunity cost (in the ''de dicto'' sense that it's plausible that a higher-priority cause exists, not in the ''de re'' sense that I actually have a concrete higher-priority cause in mind), and to that end I plan to continue reading somewhat widely in search of better [[cause area]]s.
 
* [[Opportunity cost argument against AI safety]]: there is some more pressing problem for humanity (e.g. some other x-risk like [[biorisk]]s; basically, something that is even more likely to kill us or that will arrive even sooner), or some other intervention like [[values spreading]] is more cost-effective. This could be true for several reasons: [[AI timelines]] are long, so something else big is likely to happen first; some other concrete risk looks more pressing; or some sort of 'unknown unknowns' argument that there is some [[Cause X]] yet to be discovered. In a sense, all of the other arguments agree with the opportunity cost argument: if you believe AI safety is not a top priority, then you believe that something else is of higher priority. So for the opportunity cost argument not to collapse into one of the other arguments, it seems important to believe in the importance of AI safety to at least some extent.
** [[Crowded field argument against AI safety]]: there are already enough people working on it, or there is enough momentum in the field that I personally don't need to enter it.
* [[Short-term altruist argument against AI safety]]: focusing on long-term issues (e.g. ensuring the survival of humanity over the long term) turns out not to be important, or it turns out to be too difficult to figure out how to affect the long-term future. See also [[Pascal's mugging and AI safety]].
 
Buck lists a few more at https://eaforum.issarice.com/posts/53JxkvQ7RKAJ4nHc4/some-thoughts-on-deference-and-inside-view-models#Proofs_vs_proof_sketches, but I don't think those are especially strong counter-arguments.

==References==

* Roman V. Yampolskiy. "AI Risk Skepticism". 2021. This paper provides a taxonomy of reasons that AI safety skeptics bring up. However, I don't really like the way the arguments are organized in this paper, and many of them are very similar (I think most of them fit under what I call the safety by default argument against AI safety).