AI prepping

AI prepping refers to selfish actions one can take in order to survive when unaligned AGI is created. The term "prepping" comes from survivalism.

"As to the final step, I’m not claiming that AI is an extinction risk. I think that it’s not clear that even an AI that went badly wrong would want to kill everyone. I think that humans are the most interesting thing that it would have access to, possibly the most interesting thing in the affectable part of the Universe. But that doesn’t make a substantial change. I don’t think they’ll be saying, “Okay, humans, fill the Universe with love and flourishing and all the things you want”. Our future would be radically curtailed if we were just there as something for them."[1]

https://ea.greaterwrong.com/posts/tDk57GhrdK54TWzPY/i-m-buck-shlegeris-i-do-research-and-outreach-at-miri-ama/comment/n6okR737HZGaouAds

https://ea.greaterwrong.com/posts/7DhEnxBqP62jHmsAx/taking-ai-risk-seriously-thoughts-by-andrew-critch/comment/CgS7hKqzsfPrBRpA8

https://lw2.issarice.com/posts/yuyvosHTgjDR4giPZ/why-don-t-singularitarians-bet-on-the-creation-of-agi-by

Concrete ideas

AI prepping differs in important ways from prepping for natural disasters or wars; for example, storing food and water does not seem like it would help.

  • invest in AI companies (and other companies?)
  • network/become friends with people who know a lot about AI safety; have friends and family who can act as agents
  • save/make a lot of money
  • have a flexible schedule/free time so that you can take actions quickly
  • keep watching the AI safety field / developments in AI
  • does having a shelter/bunker or living in a remote part of the world (e.g. with no useful natural resources) help?
  • take altruistic actions that reduce AI x-risk (this part overlaps with EA interventions)

The prepping you do depends a lot on which AI takeoff scenarios you find most likely. For example, in a Yudkowskian hard takeoff, there probably isn't any action you can take other than working on AI alignment.

One thing I don't understand at all: when would the AI "leave Earth" and go off to colonize the stars? And once it does that, would it basically leave Earth alone? Would it keep some copies of itself on Earth and continue to extract all useful resources from Earth? Or maybe Earth isn't so useful, but the AI would take apart the Sun, in which case everyone on Earth is probably screwed? Or would the same logic apply to the Sun (i.e. the Sun is so small in the grand scheme of things that the AI would leave it alone in order to colonize the stars)?

References