Soft-hard takeoff

a scenario i've been wondering about recently: a stereotypical "soft takeoff" proceeds until around the point where the AI reaches somewhat-infra-human-level general intelligence, and then, once it crosses some threshold, a stereotypical "hard takeoff" happens.
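
to make the shape of this concrete, here's a minimal toy sketch in python (the threshold, growth rates, starting level, and time scale are all made up for illustration, not claims about any real system): capability compounds slowly until it crosses a threshold, after which the per-step growth rate jumps, so a single trajectory looks like a soft takeoff followed by a hard one.

 # toy illustration of a soft-then-hard takeoff curve
 # (hypothetical parameters throughout; "capability" is in arbitrary units)
 def simulate(steps=200, threshold=1.0, slow_rate=0.02, fast_rate=0.5):
     capability = 0.1           # starting level, well below the threshold
     trajectory = [capability]
     for _ in range(steps):
         # growth compounds slowly below the threshold, fast above it
         rate = slow_rate if capability < threshold else fast_rate
         capability *= 1 + rate
         trajectory.append(capability)
     return trajectory
 
 traj = simulate()
 # below the threshold, capability doubles roughly every 35 steps;
 # above it, roughly every 2 steps -- the kink at the threshold is the
 # soft-to-hard transition described above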

the only places where i've found any discussion of anything similar to this:

  • eric drexler [https://sideways-view.com/2018/02/24/takeoff-speeds/#comment-355 says]: """What I find extremely implausible are scenarios in which humanity confronts high-level AI without the prior emergence of potentially-strategically-decisive AI — that is, AI capabilities that are potentially decisive when employed by some group of ingenious, well-resourced human actors.
If we see something like “fast takeoff”, it is likely to occur in a world that is already far up the slope of a slow takeoff trajectory; if so, then many (though not all) of the key strategic considerations resemble those you’ve discussed in the context of slow-takeoff models.
The continued popularity of scenarios that posit fast takeoff with weak precursors is, I think, the result of a failure to update on the actual trajectory of AI development, or a failure of imagination in considering how intermediate levels of AI technology could be exploited."""
  • in this gist, buck asks "It seems like there might be various ways that the world could be radically transformed by narrow AI before it's transformed by AGI; why don't people talk about that? (Maybe CAIS is people talking about this?)"

I thought this post about GPT-2 was pretty interesting: "But even this would be an important discovery – the discovery that huge swaths of what we consider most essential about language can be done “non-linguistically.” For every easy test that children pass and GPT-2 fails, there are hard tests GPT-2 passes which the scholars of 2001 would have thought far beyond the reach of any near-future machine. If this is the conclusion we’re drawing, it would imply a kind of paranoia about true linguistic ability, an insistence that one can do so much of it so well, can learn to write spookily like Nabokov (or like me) given 12 books and 6 hours to chew on them … and yet still not be “the real thing,” not even a little bit. It would imply that there are language-like behaviors out there in logical space which aren’t language and which are nonetheless so much like it, non-trivially, beautifully, spine-chillingly like it." -- if many other things that humans can do are like language in this sense, we could start seeing crazy things happening in the world even though the AI "doesn't really understand" anything.