Progress in self-improvement

One of the differences in visualization between proponents of hard takeoff and proponents of continuous takeoff is how discretely an AI will obtain the ability to self-improve. Hard takeoff proponents seem to locate a specific moment in time when an AI becomes capable of self-improvement (and then immediately starts a FOOM), whereas continuous takeoff proponents visualize progress in self-improvement as gradual: before there is an AI that is good at self-improvement, there is an AI that is somewhat good at self-improvement, and so on.

Continuous takeoff quotes

"Also an AI which recursively improves itself forever will probably be preceded by AIs which self improve to a lesser extent, so the field will be moving fast already."[1]

"If you take the self-improving software – of course, we have software that self improves, it just does a lousy job of it. If you imagine steady improvement in the self-improvement, that doesn't give a local team a strong advantage. You have to imagine that there's some clever insight that gives a local team a vast, cosmically vast, advantage in its ability to self-improve compared to the other teams such that not only can it self improve, but it self improves like gangbusters in a very short time."[2]

"before we have AI that radically accelerates AI development, the slow takeoff argument suggests we will have AI that significantly accelerates AI development (and before that, slightly accelerates development). That is, an AI is just another, faster step in the hyperbolic growth we are currently experiencing, which corresponds to a further increase in rate but not a discontinuity (or even a discontinuity in rate)."[3]

"Eliezer seems to have, and this page seems to reflect, strong intuitions about "self-modification" beyond what you would expect from synonymy with "AI systems doing AI design and implementation." In my view of the world, there is no meaningful distinction between these things, and this post sounds confused. I think it would be worth pushing more on this divergence."[4]

See also

  • Secret sauce for intelligence
  • Missing gear for intelligence
  • Narrow window argument against continuous takeoff -- this might be one argument against believing that self-improvement progress is gradual: if there is some "self-improvement ability parameter", there might be a very narrow range between "self-improvement doesn't totally suck" and "self-improvement is narrowly superhuman", i.e. most values of the parameter result in self-improvement ability being either completely useless or strongly superhuman. Alternatively, progress in self-improvement may still be continuous, but a small initial improvement in self-improvement ability could rapidly cascade into something that out-runs all other AI projects (see the toy model sketched below).
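
The cascade intuition in the last bullet can be illustrated with a minimal toy model (this is only an illustrative sketch, not something taken from the takeoff literature: the growth equation, the parameter values, and the "superhuman" threshold are all made-up assumptions). Capability grows from a steady stream of human research plus the AI's own contribution, which scales smoothly with current capability, so there is no discrete "self-improvement switch"; even so, a project whose self-improvement parameter is only 10% higher crosses an arbitrary "superhuman" threshold long before a trailing project does, and at that moment the trailing project's capability is still minuscule by comparison.

  # Toy model of the cascade intuition: capability c grows as
  #   dc/dt = human_rate + gain * c**2
  # where "gain" is the (continuously varying) self-improvement ability
  # parameter. All numbers here are arbitrary illustrative choices.

  def capability_at(gain, t_end, dt=0.01):
      """Forward-Euler integration of dc/dt = human_rate + gain * c**2 up to t_end."""
      capability, t = 1.0, 0.0
      human_rate = 0.05          # steady human-driven progress per unit time
      while t < t_end:
          capability += dt * (human_rate + gain * capability ** 2)
          t += dt
      return capability

  def first_crossing(gain, threshold=1e6, dt=0.01, max_t=200.0):
      """Earliest time at which capability exceeds `threshold` (or max_t if never)."""
      capability, t = 1.0, 0.0
      human_rate = 0.05
      while capability < threshold and t < max_t:
          capability += dt * (human_rate + gain * capability ** 2)
          t += dt
      return t

  if __name__ == "__main__":
      # The leading project has a 10% edge in the self-improvement parameter.
      leader_gain, laggard_gain = 0.011, 0.010
      takeoff = first_crossing(leader_gain)
      print(f"leader crosses the threshold at t = {takeoff:.1f}")
      print(f"laggard capability at that moment: {capability_at(laggard_gain, takeoff):.1f}")
      print(f"laggard crosses the threshold at t = {first_crossing(laggard_gain):.1f}")

The quadratic term makes growth hyperbolic, in the spirit of the "hyperbolic growth" framing in the quote above; the point of the sketch is only that sensitivity to the self-improvement parameter, not any discontinuity in it, is what produces the runaway gap between the two projects.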

References