List of arguments against working on AI safety

This is a list of arguments against working on AI safety. Personally, I think the only one that isn't totally weak is opportunity cost (in the de dicto sense that it's plausible a higher-priority cause exists, not in the de re sense that I actually have a concrete higher-priority cause in mind), and for that reason I plan to continue reading somewhat widely in search of better cause areas.

Buck lists a few more at https://eaforum.issarice.com/posts/53JxkvQ7RKAJ4nHc4/some-thoughts-on-deference-and-inside-view-models#Proofs_vs_proof_sketches but I don't think those are particularly strong counter-arguments.

References

  • Roman V. Yampolskiy. "AI Risk Skepticism". 2021. https://arxiv.org/ftp/arxiv/papers/2105/2105.02704.pdf -- This paper provides a taxonomy of the reasons AI safety skeptics bring up. However, I don't really like the way the arguments are organized in this paper, and many of them are very similar (I think most of them fit under what I call the safety by default argument against AI safety).