AI prepping
Revision as of 01:15, 13 March 2020
AI prepping refers to self-interested actions one can take to improve one's chances of surviving the creation of unaligned AGI. The term "prepping" comes from survivalism.
"As to the final step, I’m not claiming that AI is an extinction risk. I think that it’s not clear that even an AI that went badly wrong would want to kill everyone. I think that humans are the most interesting thing that it would have access to, possibly the most interesting thing in the affectable part of the Universe. But that doesn’t make a substantial change. I don’t think they’ll be saying, “Okay, humans, fill the Universe with love and flourishing and all the things you want”. Our future would be radically curtailed if we were just there as something for them."[1]
Concrete ideas
- invest in AI companies (and other companies?)
- network / become friends with people who know a lot about AI safety / have friends and family who can act as agents
- save/make a lot of money
- have a flexible schedule / free time so that you can act quickly
- keep watching the AI safety field / developments in AI
- take altruistic actions that reduce AI x-risk