List of arguments against working on AI safety

This is a '''list of arguments against working on AI safety'''.
 
* [[Opportunity cost argument against AI safety]]: there is some more pressing problem for humanity (e.g. another x-risk such as [[biorisk]]s), or some other intervention (such as [[values spreading]]) that is more cost-effective.
 
* [[Short-term altruist argument against AI safety]]: focusing on long-term issues (e.g. ensuring the survival of humanity over the long term) turns out not to be important, or it turns out to be too difficult to figure out how to affect the long-term future.
 
* [[Safety by default argument against AI safety]]: AI will be more or less aligned to human interests by default, possibly by analogy to things like bridges and airplanes (i.e. it's bad if bridges randomly fall down, so engineers work hard by default to ensure bridges are safe).
 
* [[AI will solve everything argument against AI safety]]
 
[[Category:AI safety]]
 