Emotional difficulties of AI safety research
This page lists only the difficulties arising from the subject matter itself, without reference to the AI safety community.
- Predicting the future is hard, predicting a future with futuristic technology is even harder
- One cannot tinker with AGI safety because no AGI has been built yet
- Thinking about death is painful
- Abstract utilitarianish thinking can infect everyday life activities (see Anna Salamon's post about this for more details)
- AI safety contains some memetic hazards (e.g. distant superintelligence stuff, the malign universal prior, and most famously Roko's basilisk)