("Soft-hard takeoff" is a pretty horrible name; maybe something like "continuous takeoff + FOOM/locality" is better)
 
a scenario i've been wondering about recently is one where there is a stereotypical "soft takeoff" until around the point where the AI has somewhat-infra-human-level general intelligence, and then, once it crosses some threshold, a stereotypical "hard takeoff" happens.
  
 
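to make the shape concrete, here is a minimal toy sketch of this kind of trajectory (all numbers are made up for illustration, not a model of real AI development): capability improves at a steady incremental rate until it crosses a generality threshold, after which a recursive-improvement feedback takes over.

<pre>
# toy sketch of a "soft-hard takeoff" trajectory (illustrative numbers only)

def simulate(steps=200, threshold=1.0, slow_rate=0.01, foom_feedback=0.2):
    capability = 0.1
    trajectory = []
    for _ in range(steps):
        if capability < threshold:
            # "soft takeoff" phase: steady incremental progress
            capability += slow_rate
        else:
            # "hard takeoff" phase: progress feeds on itself
            capability *= 1 + foom_feedback
        trajectory.append(capability)
    return trajectory

traj = simulate()
crossing = next(t for t, c in enumerate(traj) if c >= 1.0)
print("threshold crossed at step", crossing)                        # roughly 90 steps of gradual progress
print("capability 10 steps later:", round(traj[crossing + 10], 2))  # already several times higher
</pre>
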
the only places where i've found any discussion of anything similar to this:
* [[Eric Drexler]] [https://sideways-view.com/2018/02/24/takeoff-speeds/#comment-355 says]: """What I find extremely implausible are scenarios in which humanity confronts high-level AI without the prior emergence of potentially-strategically-decisive AI — that is, AI capabilities that are potentially decisive when employed by some group of ingenious, well-resourced human actors.
:If we see something like “fast takeoff”, it is likely to occur in a world that is already far up the slope of a slow takeoff trajectory; if so, then many (though not all) of the key strategic considerations resemble those you’ve discussed in the context of slow-takeoff models.
:The continued popularity of scenarios that posit fast takeoff with weak precursors is, I think, the result of a failure to update on the actual trajectory of AI development, or a failure of imagination in considering how intermediate levels of AI technology could be exploited."""
* In [https://gist.github.com/bshlgrs/a46f3fa25e2f9a8fbd07026de354fc22 this gist], [[buck]] asks "It seems like there might be various ways that the world could be radically transformed by narrow AI before it's transformed by AGI; why don't people talk about that? (Maybe CAIS is people talking about this?)"
* [[Owen Cotton-Barratt]]: "Number 1 is the step I feel most sceptical about. It seems to me likely that the first AIs which can perform pivotal acts will not perform fully general consequentialist reasoning. I expect that they will perform consequentialist reasoning within certain domains (e.g. AlphaGo in some sense reasons about consequences of moves, but has no conception of consequences in the physical world). This isn’t enough to alleviate concern: some such domains might be general enough that something misbehaving in them would cause large problems. But it is enough for me to think that paying attention to scope of domains is a promising angle." [https://agentfoundations.org/item?id=1242]
* [[Daniel Kokotajlo]]: 'This model leaves out several important things. First, it leaves out the whole "intelligence explosion" idea: A project's innovation rate should increase as some function of how many innovations they have access to. Adding this in will make the situation more extreme and make the gap between the leading project and everyone else grow even bigger very quickly.' [https://lw2.issarice.com/posts/PKy8NuNPknenkDY74/soft-takeoff-can-still-lead-to-decisive-strategic-advantage] (see the toy sketch after this list)
: "33. Weak AGI" in https://aiimpacts.org/relevant-pre-agi-possibilities/
 +
* The idea of a prepotent AI: http://acritch.com/media/arches.pdf
* "Slow Takeoff Fast GWP Acceleration Scenario" in <ref>https://www.greaterwrong.com/posts/aFaKhG86tTrKvtAnT/against-gdp-as-a-metric-for-timelines-and-takeoff-speeds</ref>
  
 
I thought [https://www.greaterwrong.com/posts/ZFtesgbY9XwtqqyZ5/human-psycholinguists-a-critical-appraisal this post] about GPT-2 was pretty interesting: "But even this would be an important discovery – the discovery that huge swaths of what we consider most essential about language can be done “non-linguistically.” For every easy test that children pass and GPT-2 fails, there are hard tests GPT-2 passes which the scholars of 2001 would have thought far beyond the reach of any near-future machine. If this is the conclusion we’re drawing, it would imply a kind of paranoia about true linguistic ability, an insistence that one can do so much of it so well, can learn to write spookily like Nabokov (or like me) given 12 books and 6 hours to chew on them … and yet still not be “the real thing,” not even a little bit. It would imply that there are language-like behaviors out there in logical space which aren’t language and which are nonetheless so much like it, non-trivially, beautifully, spine-chillingly like it." -- if many other things that humans can do are like language in this sense, we could start seeing crazy things happening in the world even though the AI "doesn't really understand" anything.

"Continuous takeoff is a statement about what happens before we reach the point where a fast takeoff is supposed to happen, and is perfectly consistent with the claim that given the stated preconditions for fast takeoff, fast takeoff will happen. It’s a statement that serious problems, possibly serious enough to pose an existential threat, will show up before the window where we expect fast takeoff scenarios to occur." [https://lw2.issarice.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress#The_Outside_View] -- i guess this is the same sort of thing as what i am describing here. if so, then i agree that talking about "soft" vs "hard" takeoff is pretty misleading, and we should be talking about "shape prior to takeoff" (business as usual vs crazy shit happens), and then ''separately'' about the speed of takeoff (where it doesn't seem like paul and MIRI even disagree?). Also if this is the case, i don't see how anything [[eliezer]] writes actually contradicts what paul writes. like, i think eliezer is consistently only talking about the speed of takeoff, ''not'' the shape of progress prior to takeoff?

so what is paul's argument that once we reach a "takeoff level capability", we won't get a unipolar/[[decisive strategic advantage]] outcome? i think it's something like: because the runners-up will have almost-as-capable AI, they can fight back somewhat effectively, and in a short time they will reach the current winner's capability (even while the current winner will be even further ahead by then). it has to be something like this, since if we use the counterfactual in [https://lw2.issarice.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress] where a seed-level AI is suddenly dropped into today's world, i think even paul would say there would be a sudden unipolar outcome. See [[counterfactual of dropping a seed AI into a world without other capable AI]].

https://www.facebook.com/yudkowsky/posts/10154597918309228
  
 
==implications==

let's look at paul's "Why does this matter?" section in his takeoff speeds post.

* "it will become quite obvious that AI is going to transform the world well before we kill ourselves" -- yes, this is true for soft-hard
* "we will have some time to experiment with different approaches to safety" -- not necessarily? AGI could still be very different from the narrow kinds of AI, so that the same safety tricks won't work. or AGI is just much harder to align (because it understands things or whatever).
* "policy-makers will have time to understand and respond to AI" -- i guess this is true
* "whoever develops AGI first has a massive advantage over the rest of the world and hence great freedom in choosing what to do with their invention" -- i think this is true
 
* "whoever develops AGI first has a massive advantage over the rest of the world and hence great freedom in choosing what to do with their invention" -- i think this is true
 
* "in slow takeoff scenarios, other actors will already have nearly-as-good-AGI, and a group that tries to use AGI in a very restricted or handicapped way won’t be able to take any pivotal action" -- not necessarily true; in soft-hard takeoff, AGI could still be much better than the narrow predecessors.
 
* "in slow takeoff scenarios, other actors will already have nearly-as-good-AGI, and a group that tries to use AGI in a very restricted or handicapped way won’t be able to take any pivotal action" -- not necessarily true; in soft-hard takeoff, AGI could still be much better than the narrow predecessors.
 +
 +
==See also==
* [[Will there be significant changes to the world prior to some critical AI capability threshold being reached?]]
[[Category:AI safety]]
