List of arguments against working on AI safety

From Issawiki
 
* [[Opportunity cost argument against AI safety]]: there is some more pressing problem for humanity (e.g. some other x-risk like [[biorisk]]s; basically something that is even more likely to kill us or likely to arrive even sooner), or some other intervention like [[values spreading]] that is more cost-effective. This could be true for several reasons: [[AI timelines]] are long, so something else big is likely to happen before then; some other concrete risk looks more pressing; or some sort of 'unknown unknowns' argument that there is some [[Cause X]] yet to be discovered.
 
* [[Short-term altruist argument against AI safety]]: focusing on long-term issues (e.g. ensuring the survival of humanity over the long term) turns out not to be important, or it turns out to be too difficult to figure out how to affect the long-term future. See also [[Pascal's mugging and AI safety]].
* [[Safety by default argument against AI safety]]: AI will be more or less aligned with human interests by default, possibly by analogy to things like bridges and airplanes (i.e. it's bad if bridges randomly fall down, so engineers work hard by default to ensure bridges are safe).
** [[AGI skepticism argument against AI safety]]: It is impossible to create a human-level or smarter-than-human-level AI, so there is no problem to solve in the first place. This is a special case of [[safety by default argument against AI safety]].
 
* [[Doomer argument against AI safety]]: we are so screwed that it's not even worth working on AI safety. A variant: there are various worldviews about AI safety; in the more optimistic ones, things will very likely go right (or additional effort has no effect on the probability of existential catastrophe), so safety work isn't needed, while in the more pessimistic ones, things are almost sure to fail, so there is no point in working on it either.
 
* [[Objective morality argument against AI safety]]: all sufficiently intelligent beings converge to some objective morality (either because [[moral realism]] is true, or due to [[acausal trade]] as discussed in "[[The Hour I First Believed]]"), so there is no need to worry about superintelligent AI going against human values (in other words, if the AI goes against human values, it is because humans are wrong to have those values, so nothing is lost in a cosmic sense).

Revision as of 18:35, 20 May 2021

This is a list of arguments against working on AI safety. Personally, I think the only one that isn't totally weak is the opportunity cost argument; to address it, I plan to continue reading somewhat widely in search of better cause areas.

Buck lists a few more at https://eaforum.issarice.com/posts/53JxkvQ7RKAJ4nHc4/some-thoughts-on-deference-and-inside-view-models#Proofs_vs_proof_sketches, but I don't think those are such good counter-arguments.

More reasons are listed here: https://arxiv.org/ftp/arxiv/papers/2105/2105.02704.pdf#page=6