AI prepping
"As to the final step, I’m not claiming that AI is an extinction risk. I think that it’s not clear that even an AI that went badly wrong would want to kill everyone. I think that humans are the most interesting thing that it would have access to, possibly the most interesting thing in the affectable part of the Universe. But that doesn’t make a substantial change. I don’t think they’ll be saying, “Okay, humans, fill the Universe with love and flourishing and all the things you want”. Our future would be radically curtailed if we were just there as something for them."<ref>https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/</ref> | "As to the final step, I’m not claiming that AI is an extinction risk. I think that it’s not clear that even an AI that went badly wrong would want to kill everyone. I think that humans are the most interesting thing that it would have access to, possibly the most interesting thing in the affectable part of the Universe. But that doesn’t make a substantial change. I don’t think they’ll be saying, “Okay, humans, fill the Universe with love and flourishing and all the things you want”. Our future would be radically curtailed if we were just there as something for them."<ref>https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/</ref> | ||
https://ea.greaterwrong.com/posts/tDk57GhrdK54TWzPY/i-m-buck-shlegeris-i-do-research-and-outreach-at-miri-ama/comment/n6okR737HZGaouAds

https://ea.greaterwrong.com/posts/7DhEnxBqP62jHmsAx/taking-ai-risk-seriously-thoughts-by-andrew-critch/comment/CgS7hKqzsfPrBRpA8
==References==
<references/>
"As to the final step, I’m not claiming that AI is an extinction risk. I think that it’s not clear that even an AI that went badly wrong would want to kill everyone. I think that humans are the most interesting thing that it would have access to, possibly the most interesting thing in the affectable part of the Universe. But that doesn’t make a substantial change. I don’t think they’ll be saying, “Okay, humans, fill the Universe with love and flourishing and all the things you want”. Our future would be radically curtailed if we were just there as something for them."[1]