AI prepping
AI prepping refers to selfish actions one can take in order to survive when unaligned AGI is created. The term "prepping" comes from survivalism.
"As to the final step, I’m not claiming that AI is an extinction risk. I think that it’s not clear that even an AI that went badly wrong would want to kill everyone. I think that humans are the most interesting thing that it would have access to, possibly the most interesting thing in the affectable part of the Universe. But that doesn’t make a substantial change. I don’t think they’ll be saying, “Okay, humans, fill the Universe with love and flourishing and all the things you want”. Our future would be radically curtailed if we were just there as something for them."[1]
Concrete ideas
AI prepping differs in some big ways from prepping for natural disasters or wars (e.g. storing food and water does not seem like it would help).
- invest in AI companies (and perhaps other companies)
- network with / befriend people who know a lot about AI safety, and have friends and family who can act as agents on your behalf
- save or earn a lot of money
- maintain a flexible schedule/free time so that you can take action quickly
- keep watching the AI safety field and developments in AI (see the monitoring sketch after this list)
- consider whether having a shelter/bunker, or living in a remote part of the world (e.g. one with no useful natural resources), would help
- take altruistic actions that reduce AI x-risk (this part overlaps with EA interventions)
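One way to act on the "keep watching developments" item is to automate part of the monitoring. The following is a minimal sketch, assuming one is content to poll the arXiv cs.AI RSS feed; the feed URL, polling approach, and helper name latest_titles are illustrative assumptions, not a recommended setup.

# A sketch of automated monitoring: fetch the arXiv cs.AI RSS feed and
# print the titles of the currently listed announcements. The feed URL
# is an illustrative assumption; any news or paper feed could be used.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://export.arxiv.org/rss/cs.AI"  # assumed feed of interest

def latest_titles(url=FEED_URL):
    """Return the titles of the items currently listed in the feed."""
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    titles = []
    for elem in tree.iter():
        # Match <item> (RSS) or <entry> (Atom) elements regardless of namespace.
        if elem.tag.rsplit("}", 1)[-1] in ("item", "entry"):
            for child in elem:
                if child.tag.rsplit("}", 1)[-1] == "title":
                    titles.append((child.text or "").strip())
                    break
    return titles

if __name__ == "__main__":
    for title in latest_titles():
        print(title)

Running such a script on a schedule (e.g. via cron) would be one low-effort way to notice sudden shifts in the field, though it does not replace following the AI safety community directly.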
The prepping you do depends a lot on which AI takeoff scenarios you find most likely. For example, in a Yudkowskian hard takeoff, there probably isn't any action you can take other than working on AI alignment.