Emotional difficulties of AI safety research
This page lists only the difficulties arising from the subject matter itself, without reference to the AI safety community.
- Predicting the future is hard
- One cannot tinker with AGI safety empirically, because no AGI has been built yet
- Thinking about death is painful
- Abstract utilitarianish thinking can infect everyday life activities (see Anna Salamon's post on this topic for more details)