Emotional difficulties of AI safety research

From Issawiki
* [[Predicting the future is hard, predicting a future with futuristic technology is even harder]]
* [[AI safety has many prerequisites]]
* [[One cannot tinker with AGI safety because no AGI has been built yet]]
* [[Thinking about death is painful]]
* [[Abstract utilitarianish thinking can infect everyday life activities]] (see [[Anna Salamon]]'s post about this for more details)
* [[AI safety contains some memetic hazards]] (e.g. distant superintelligence stuff, malign universal prior, most famously [[Roko's basilisk]]) -- I think this one affects a very small percentage (maybe 1% or less) of people who become interested in AI safety
  
 
==See also==
 
* [[Emotional difficulties of spaced repetition]]

==What links here==

{{Special:WhatLinksHere/{{FULLPAGENAME}} | hideredirs=1}}
  
 
[[Category:AI safety meta]]
 

Latest revision as of 18:25, 18 July 2021

This page lists only the difficulties arising from the subject matter itself, without reference to the AI safety community.
