'''AI prepping''' refers to selfish actions one can take in order to survive when unaligned AGI is created. The term "prepping" comes from [[wikipedia:Survivalism|survivalism]].

It's not clear whether any really good actions for AI prepping exist. Some reasons for optimism are:

* Nobody has thought about this much, even within AI safety and effective altruist circles. (I've seen very little private discussion, and basically zero public discussion.)
* It would be pretty surprising if the best ''altruistic'' actions with regard to future AI turned out to also be the best ''selfish'' actions.

On the other hand, the usefulness of selfish actions against AI risk depends on how likely it is that ''some'' humans can do well without ''all'' humans doing well. I think many AI safety people (myself included) are inclined to think that if ''some'' humans are doing well, that's because we succeeded at alignment, which means ''all'' humans are doing well.

Toby Ord: "As to the final step, I’m not claiming that AI is an extinction risk. I think that it’s not clear that even an AI that went badly wrong would want to kill everyone. I think that humans are the most interesting thing that it would have access to, possibly the most interesting thing in the affectable part of the Universe. But that doesn’t make a substantial change. I don’t think they’ll be saying, “Okay, humans, fill the Universe with love and flourishing and all the things you want”. Our future would be radically curtailed if we were just there as something for them."<ref>https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/</ref>

[[Jaan Tallinn]]: "We’re just like statues, standing in place, and not doing anything. But yeah, if you look around, if the universe is real, then almost all of the resources are outside of this planet. So, the reason why … I think AI’s almost entirely interested in the rest of the universe rather than the Earth. The big problem is that it will, by default, use as many resources as it can on this planet, in order to get to the rest of the resources out there."<ref>https://manifoldlearning.com/episode-042/</ref>

In response to Jaan Tallinn's suggestion that an unaligned AGI might leave humans alive, Eliezer Yudkowsky says: "Killing all humans is the obvious, probably resource-minimal measure to prevent those humans from building another AGI inside the solar system, which could be genuinely problematic. The cost of a few micrograms of botulinum per human is really not that high and you get to reuse the diamondoid bacteria afterwards."<ref>https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/oKYWbXioKaANATxKY</ref>
https://ea.greaterwrong.com/posts/tDk57GhrdK54TWzPY/i-m-buck-shlegeris-i-do-research-and-outreach-at-miri-ama/comment/n6okR737HZGaouAds

https://lw2.issarice.com/posts/yuyvosHTgjDR4giPZ/why-don-t-singularitarians-bet-on-the-creation-of-agi-by

https://www.lesswrong.com/posts/dZoXpSa3WehwqCf2m/engaging-seriously-with-short-timelines

https://lw2.issarice.com/posts/4FhiSuNv4QbtKDzL8/how-can-i-bet-on-short-timelines -- I don't understand [[Daniel Kokotajlo]]'s reasoning here. If he can make the "bet" with other aligned people, why can't he just pay those same people without the bet? That would be a good use of money.

https://eaforum.issarice.com/posts/DDTYxpK42B495MPqM/how-can-i-bet-on-short-timelines

"I think the market just doesn't put much probability on a crazy AI boom anytime soon. If you expect such a boom then there are plenty of bets you probably want to make. (I am personally short US 30-year debt, though it's a very small part of my AI-boom portfolio.)" https://forum.effectivealtruism.org/posts/KdxGwxwY3t7iw9xjB/three-impacts-of-machine-intelligence?commentId=aYCNP5PgDYsZpxsbX
+ | |||
+ | https://www.greaterwrong.com/posts/EvyPaYZJ5sdrXeMwS/we-need-a-standard-set-of-community-advice-for-how-to | ||
+ | |||
+ | https://www.lesswrong.com/posts/xupJnpdRjbExwdb8J/how-do-ai-timelines-affect-how-you-live-your-life | ||
==Concrete ideas==

AI prepping seems quite different from prepping for natural disasters or wars (e.g. storing food and water doesn't seem like it would help). Some ideas:

* invest in AI companies (and other companies?)
* network / become friends with people who know a lot about AI safety / have friends and family who can act as your agents
* save/make a lot of money
* have a flexible schedule/free time so that you can take actions quickly
* keep watching the AI safety field / developments in AI
* does having a shelter/bunker or living in a remote part of the world (e.g. with no useful natural resources) help?
* altruistic actions that reduce AI x-risk (this part overlaps with EA interventions)

The prepping you do depends a lot on which AI takeoff scenarios you find most likely. E.g. in a Yudkowskian hard takeoff, there probably isn't any action you can take other than working on AI alignment.

One thing I don't understand at all: when would the AI "leave Earth" and go off to colonize the stars? And once it does that, would it basically just leave Earth alone? Or would it keep some copies of itself on Earth and continue to extract all useful resources here? Or maybe the Earth isn't so useful, but it would take apart the Sun, in which case everyone on Earth is probably screwed? Or would the same logic apply to the Sun (i.e. the Sun is so small in the grand scheme of things that the AI would leave it alone in order to colonize the stars)? If I were roleplaying this AI, it sure seems like I would want to hunt down and kill all the humans, because if I left them alone they could build up their technology, eventually build an aligned AI, and then mess up some stuff I am trying to do. So my choices are (1) hunt them down now so there is nothing to worry about, or (2) monitor them forever and intervene whenever things start to get spicy. My guess is that (1) is easier.
==References==
<references/>

[[Category:AI safety]]