Emotional difficulties of AI safety research
This page lists only the difficulties arising from the subject matter itself, without reference to the [[AI safety community]].
* [[Predicting the future is hard, predicting a future with futuristic technology is even harder]]
* [[AI safety has many prerequisites]]
* [[One cannot tinker with AGI safety because no AGI has been built yet]]
* [[Thinking about death is painful]]
* [[Abstract utilitarianish thinking can infect everyday life activities]] (see [[Anna Salamon]]'s post about this for more details)
* [[AI safety contains some memetic hazards]] (e.g. distant superintelligence stuff, the malign universal prior, and most famously [[Roko's basilisk]]) -- I think this one affects only a very small percentage (perhaps 1% or less) of people who become interested in AI safety
==See also==
* [[Emotional difficulties of spaced repetition]]
==What links here==

{{Special:WhatLinksHere/{{FULLPAGENAME}} | hideredirs=1}}

[[Category:AI safety meta]]