Missing gear for intelligence
Missing gear for intelligence (also called one wrong number problem, step function, understanding is discontinuous, payoff thresholds argument) is an argument for a discontinuity in AI takeoff. Unlike a secret sauce for intelligence, the missing gear argument does not require that the final part of AI development be a huge conceptual breakthrough.
In Intelligence Explosion Microeconomics (IEM), Eliezer writes: "If the nearest competitor was previously only seven days behind, these seven days have now been amplified into a technological gulf enabling the leading AI to shut down, sandbox, or restrict the growth of any competitors it wishes to fetter."[1] The idea that a seven-day lead can result in a local foom makes me think Eliezer does not require the final missing gear to be a huge conceptual breakthrough.
Another way of talking about the missing gear is to talk about a discontinuity in the usefulness of the AI (i.e. "payoff thresholds"). For example, at IQ 30 the AI is completely useless for AI research, but at IQ 35 it is suddenly actually helpful (because it can finally automate some particular part of AI research). The first project to reach that point can then suddenly grow past everyone else. I'm not sure that missing gear and payoff thresholds are actually logically equivalent (check this).
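To make the payoff-threshold picture more concrete, here is a toy simulation (not from any of the cited sources; the threshold, rates, and multiplier are invented purely for illustration). Two projects improve their AIs at the same base rate, with the leader seven days ahead; once a project's AI crosses the threshold where it can automate part of AI research, its growth starts compounding, so the small head start turns into a growing capability gulf.

```python
# Toy model of the "payoff thresholds" picture (all numbers are invented).
# Two projects improve their AIs at the same base rate; the leader starts
# seven days ahead. Below the threshold the AI is useless for AI research;
# above it, the AI starts speeding up its own development, so growth compounds.

THRESHOLD = 35.0       # "IQ" at which the AI first helps with AI research (made up)
BASE_RATE = 0.5        # capability points gained per day without AI help (made up)
RECURSIVE_RATE = 0.10  # fractional daily self-improvement past the threshold (made up)
LEAD_DAYS = 7          # head start of the leading project, as in the IEM quote

def daily_gain(capability: float) -> float:
    """Capability gained in one day: flat below the threshold, compounding above it."""
    if capability < THRESHOLD:
        return BASE_RATE
    return RECURSIVE_RATE * capability

def simulate(days: int, head_start_days: int = 0, start: float = 30.0) -> float:
    """Capability after `days` of development, given a head start measured in days."""
    cap = start + BASE_RATE * head_start_days  # head start accrued below the threshold
    for _ in range(days):
        cap += daily_gain(cap)
    return cap

if __name__ == "__main__":
    for day in (0, 10, 20, 30):
        leader = simulate(day, head_start_days=LEAD_DAYS)
        chaser = simulate(day)
        print(f"day {day:2d}: leader={leader:7.1f}  chaser={chaser:7.1f}  "
              f"gap={leader - chaser:7.1f}")
```

In this sketch the initial gap is only 3.5 capability points (seven days of base-rate progress), but because the leader crosses the threshold a week earlier and then compounds, the gap keeps widening rather than staying fixed.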
Actually, here's what I think now:
- "understanding is discontinuous" is the least general (most specific) because it says the final piece is about understanding
- "missing gear" is in the middle in terms of generality because it says the final piece is a "gear", so some structurally distinct thing? -- I think "missing gear" is also somewhat ambiguous about whether the final piece is big vs small
- "payoff thresholds" is the most general because it makes no assumption about the nature of the final piece, just that there is some imaginary line, and once you cross it something happens
(A different but related possibility is that some team suddenly realizes a particular use case of AI in doing AI research. For example, maybe lots of projects have IQ 35 AIs, but one project realizes (before the other projects do) that it can use them to speed up some particular part of research, and suddenly it gets a huge lead.)
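A minimal sketch of this variant (again with invented numbers): every project has an equally capable AI throughout, and the only difference is the day on which each project realizes it can use its AI to automate some part of its own research.

```python
# Toy variant (all numbers invented): every project has an equally capable AI
# the whole time; what differs is the day on which each project realizes it can
# use that AI to automate some part of its own research.

BASE_SPEED = 1.0      # research progress per day before the realization
REALIZED_SPEED = 3.0  # research progress per day after the realization

def progress(days: int, realization_day: int) -> float:
    """Total research progress after `days`, given when the use case is realized."""
    return sum(REALIZED_SPEED if d >= realization_day else BASE_SPEED
               for d in range(days))

if __name__ == "__main__":
    early = progress(30, realization_day=5)
    late = progress(30, realization_day=20)
    print(f"early realizer: {early:.0f}, late realizer: {late:.0f}, "
          f"lead: {early - late:.0f}")
```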
"But this conversation did get me thinking about the topic of culturally transmitted software that contributes to human general intelligence. That software can be an *important* gear even if it's an algorithmically shallow part of the overall machinery. Removing a few simple gears that are 2% of a machine's mass can reduce the machine's performance by way more than 2%. Feral children would be the case in point." "But as necessary as it may be to avoid feral children, this kind of shallow soft-software doesn't strike me as something that takes a long time to redevelop, compared to hard-software like the secrets of computational neuroscience." [1] -- so in places like these, it makes it sound like Eliezer does expect a huge conceptual breakthrough near the end (right before AGI).
Wei Dai: "The biological von Neumann's brain must have been architecturally very similar to a typical university professor's. Nor could it have contained independent improvements to lots of different modules. Given this, I speculate that obtaining the analogous improvement in AI intelligence may only require a few design tweaks, which a relatively small project could find first by being luckier or a bit more talented than everyone else." [2]
External links
- Takeoff speeds § "Understanding" is discontinuous (https://sideways-view.com/2018/02/24/takeoff-speeds/)
- https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/#Payoff_thresholds