Selection effect for who builds AGI
"But also, what I’m more worried about is that the arguments will always be a bit uncertain, and that they will be the kind of arguments that maybe should push a rational person to think that there’s a 20% chance that this will all go wrong, but that some people would just be willing to take such a 20% chance. Or, that they will be selected for being the people… Not everyone will say it’s exactly 20%, some people will say it’s 50%, some people will say it’s 2%. It turns out the ones who think it’s small will be the ones who then unilaterally make these actions, when there are pressures to develop these technologies, economic pressures, or social, or military pressures." [https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/] | "But also, what I’m more worried about is that the arguments will always be a bit uncertain, and that they will be the kind of arguments that maybe should push a rational person to think that there’s a 20% chance that this will all go wrong, but that some people would just be willing to take such a 20% chance. Or, that they will be selected for being the people… Not everyone will say it’s exactly 20%, some people will say it’s 50%, some people will say it’s 2%. It turns out the ones who think it’s small will be the ones who then unilaterally make these actions, when there are pressures to develop these technologies, economic pressures, or social, or military pressures." [https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/] | ||
[[Category:AI safety]]