Narrow window argument against continuous takeoff (also continuous takeoff keyhole) says that a continuous (soft) takeoff requires the returns to self-improvement to fall inside a narrow window: with slightly weaker returns the improvements peter out, and with slightly stronger returns the process goes FOOM, so hitting the window is a priori unlikely.
"When you fold a whole chain of differential equations in on itself like this, it should either peter out rapidly as improvements fail to yield further improvements, or else go FOOM. An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely - far more unlikely than seeing such behavior in a system with a roughly-constant underlying optimizer, like evolution improving brains, or human brains improving technology. Our present life is no good indicator of things to come."[1]
A secret of a lot of the futurism I'm willing to try and put any weight on, is that it involves the startling, amazing, counterintuitive prediction that something ends up in the not-human space instead of the human space - humans think their keyholes are the whole universe, because it's all they have experience with. So if you say, "It's in the (much larger) not-human space" it sounds like an amazing futuristic prediction and people will be shocked, and try to dispute it. But livable temperatures are rare in the universe - most of it's either much colder or much hotter. A place like Earth is an anomaly, though it's the only place beings like us can live; the interior of a star is much denser than the materials of the world we know, and the rest of the universe is much closer to vacuum.
So really, the whole hard takeoff analysis of "flatline or FOOM" just ends up saying, "the AI will not hit the human timescale keyhole." From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM. When you look at it that way, it's not so radical a prediction, is it?[2]
I even want to say that the functions and curves being such as to allow hitting the soft takeoff keyhole, is ruled out by observed history to date. But there are small conceivable loopholes, like "maybe all the curves change drastically and completely as soon as we get past the part we know about in order to give us exactly the right anthropomorphic final outcome", or "maybe the trajectory for insightful optimization of intelligence has a law of diminishing returns where blind evolution gets accelerating returns".[3]
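One way to make the "keyhole" claim concrete (a standard toy model, not one taken from the quotes above) is to let capability <math>I(t)</math> feed back into its own rate of improvement through a returns parameter <math>k</math>:

<math>\frac{dI}{dt} = c\,I^k, \qquad c > 0,</math>

whose solutions split into three regimes (here <math>t_*</math> is a finite blow-up time):

<math>I(t) \;\propto\; \begin{cases} t^{1/(1-k)} & k < 1 \text{ (polynomial: improvements peter out relative to explosion)} \\ e^{ct} & k = 1 \text{ (steady exponential: the keyhole)} \\ (t_* - t)^{-1/(k-1)} & k > 1 \text{ (finite-time blow-up: FOOM)} \end{cases}</math>

In this one-parameter family the "window" is literally the single point <math>k = 1</math>, which is the strongest version of the claim.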
Narrow window argument against soft takeoff: Eliezer says at some points that you need the parameter k that controls the growth to be in a really narrow range for it not to go into either a FOOM or a petering-out. In contrast, Buck/Paul have said something like: if you try to model it with math, you typically get a soft takeoff. What's going on here?
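A minimal numerical sketch of this knife-edge, assuming the toy model above (the integrator, constants, and FOOM cutoff are all arbitrary choices made here, not taken from either side):

<syntaxhighlight lang="python">
# Euler-integrate dI/dt = c * I**k for several values of the returns
# parameter k, to see how narrow the "neither peters out nor FOOMs"
# window is in this one-parameter toy model.

def simulate(k, c=1.0, i0=1.0, dt=1e-3, t_max=10.0, foom=1e9):
    i, t = i0, 0.0
    while t < t_max:
        i += c * i**k * dt
        t += dt
        if i > foom:
            return f"k={k}: FOOM (passed {foom:.0e} at t={t:.2f})"
    return f"k={k}: I({t_max:g}) = {i:.3g} (no blow-up)"

for k in [0.5, 0.9, 1.0, 1.2, 1.5]:
    print(simulate(k))
</syntaxhighlight>

With these (arbitrary) constants, every k > 1 blows up within the simulated window and every k < 1 settles into polynomial growth; only k = 1 exactly gives sustained exponential growth. The disagreement is then about whether a single feedback exponent is the right model class at all, not about this arithmetic.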
A possible counterargument: https://ea.greaterwrong.com/posts/tDk57GhrdK54TWzPY/i-m-buck-shlegeris-i-do-research-and-outreach-at-miri-ama/comment/R6HdmWSx9WfM3Dciu
This part might also be relevant:[4]
Human-equivalent competence is a small and undistinguished region in possibility-space.
As I tweeted early on when the first game still seemed in doubt, "Thing that would surprise me most about #alphago vs. #sedol: for either player to win by three games instead of four or five."
Since Deepmind picked a particular challenge time in advance, rather than challenging at a point where their AI seemed just barely good enough, it was improbable that they'd make *exactly* enough progress to give Sedol a nearly even fight.
AI is either overwhelmingly stupider or overwhelmingly smarter than you. The more other AI progress and the greater the hardware overhang, the less time you spend in the narrow space between these regions. There was a time when AIs were roughly as good as the best human Go-players, and it was a week in late January.
[https://www.greaterwrong.com/posts/66FKFkWAugS8diydF/modelling-continuous-progress This post] can be read as an argument against a narrow window, since it introduces a new parameter and shows that we get continuous takeoff for a large range of that parameter.
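The linked post's model is more detailed than anything here, but as a hypothetical illustration of the same structural point (the ceiling parameter B below is invented for this sketch, not taken from the post), one extra parameter already makes smooth trajectories generic:

<syntaxhighlight lang="python">
# Same feedback exponent k as in the sketch above, plus a ceiling
# parameter B (finite compute/ideas): dI/dt = c * I**k * (1 - I/B).
# For a wide range of k the trajectory is now an S-curve -- rapid but
# continuous growth that saturates -- instead of peter-out-or-FOOM.

def simulate_capped(k, B=1e6, c=1.0, i0=1.0, dt=1e-3, t_max=100.0):
    i, t = i0, 0.0
    while t < t_max and i < 0.999 * B:
        i += c * i**k * (1 - i / B) * dt
        t += dt
    return f"k={k}: I = {i:.3g} at t = {t:.1f} (continuous, saturating)"

for k in [0.9, 1.0, 1.2, 1.5]:
    print(simulate_capped(k))
</syntaxhighlight>

Whether a continuous takeoff requires a knife-edge therefore depends on which family of models you write down, which is exactly the disagreement flagged above.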
==See also==

* Progress in self-improvement

==References==

4. https://www.facebook.com/yudkowsky/posts/10154018209759228