AI prepping

'''AI prepping''' refers to selfish actions one can take in order to survive when unaligned AGI is created. The term "prepping" comes from [[wikipedia:Survivalism|survivalism]].

Toby Ord, discussing AI risk on the 80,000 Hours podcast:

"As to the final step, I’m not claiming that AI is an extinction risk. I think that it’s not clear that even an AI that went badly wrong would want to kill everyone. I think that humans are the most interesting thing that it would have access to, possibly the most interesting thing in the affectable part of the Universe. But that doesn’t make a substantial change. I don’t think they’ll be saying, “Okay, humans, fill the Universe with love and flourishing and all the things you want”. Our future would be radically curtailed if we were just there as something for them."<ref>https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/</ref>


==External links==
* [https://ea.greaterwrong.com/posts/tDk57GhrdK54TWzPY/i-m-buck-shlegeris-i-do-research-and-outreach-at-miri-ama/comment/n6okR737HZGaouAds Comment on "I'm Buck Shlegeris, I do research and outreach at MIRI, AMA"]
* [https://ea.greaterwrong.com/posts/7DhEnxBqP62jHmsAx/taking-ai-risk-seriously-thoughts-by-andrew-critch/comment/CgS7hKqzsfPrBRpA8 Comment on "Taking AI risk seriously (thoughts by Andrew Critch)"]

==References==
<references/>