List of arguments against working on AI safety

This is a '''list of arguments against working on AI safety'''. Personally, I think the only one that isn't totally weak is the opportunity cost argument; to address it, I plan to continue reading somewhat widely in search of better [[cause area]]s.
  
 
* [[Opportunity cost argument against AI safety]]: there is some more pressing problem for humanity (e.g. another x-risk such as [[biorisk]]s), or some other intervention, such as [[values spreading]], that is more cost-effective.
 