Emotional difficulties of AI safety research
Revision as of 19:18, 22 March 2021
Listing only the difficulties arising from the subject matter itself, without reference to the AI safety community:
- Predicting the future is hard
- One cannot tinker with AGI safety because no AGI has been built yet
- Thinking about death is painful
- Abstract utilitarian-ish thinking can infect everyday life activities (see Anna Salamon's post about this for more details)
- AI safety contains some memetic hazards (e.g. distant superintelligence scenarios, the malign universal prior, and most famously Roko's basilisk)