Difference between revisions of "Emotional difficulties of AI safety research"
Revision as of 19:18, 22 March 2021
Listing only the difficulties arising from the subject matter itself, without reference to the AI safety community:
- Predicting the future is hard
- One cannot tinker with AGI safety because no AGI has been built yet
- Thinking about death is painful
- Abstract utilitarianish thinking can infect everyday life activities (see Anna Salamon's post about this for more details)
- AI safety contains some memetic hazards (e.g. distant superintelligence scenarios, the malign universal prior, and most famously Roko's basilisk)