'''Missing gear for intelligence''' (also called '''[[one wrong number problem]]''', '''step function''', '''understanding is discontinuous''', '''payoff thresholds argument''') is an argument for a [[Discontinuous takeoff|discontinuity]] in AI takeoff. Unlike a [[secret sauce for intelligence]], the missing gear argument does not require that the final part of AI development be a huge conceptual breakthrough; instead, the final piece is small but nevertheless results in a discontinuity (see the Variants section below for details).
==Variants==
  
 
Another way of talking about the missing gear is to talk about a discontinuity in the usefulness of the AI (i.e. "payoff thresholds"): e.g. at IQ 30 the AI is completely useless for AI research, but then at IQ 35 it's suddenly actually helpful (because it can finally automate some particular part of doing AI research). Then the first project that gets to that point can suddenly grow past everyone else (see the toy sketch at the end of this section for a numerical illustration). I'm not sure that missing gear and payoff thresholds are actually logically equivalent (check this).

actually, here's what I think now:

* "understanding is discontinuous" is the least general (most specific) because it says the final piece is about understanding
* "missing gear" is in the middle in terms of generality because it says the final piece is a "gear", so some structurally distinct thing? -- I think "missing gear" is also somewhat ambiguous about whether the final piece is big vs small
* "payoff thresholds" is the most general because it makes no assumption about the nature of the final piece, just that there is some imaginary line, and once you cross it something happens
* minor design tweaks
  
 
(A different but related possibility is that some team suddenly realizes a particular use case of AI in doing AI research: e.g. maybe lots of people have IQ 35 AIs, but one project realizes you can use them to speed up some particular part of research (before the other projects do) and suddenly gets a huge lead.)
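To make the payoff thresholds idea concrete, here is a toy sketch (my own illustration, not a model taken from any of the sources cited on this page). It assumes there is some capability threshold past which an AI multiplies its project's research speed by a fixed factor; all of the specific numbers (a threshold at "IQ" 35, a 5x speedup, a 60-day window, a 7-day head start) are made up.

<syntaxhighlight lang="python">
# Toy model of a "payoff threshold" (illustrative numbers only): below the
# threshold the AI contributes nothing beyond the baseline research speed;
# above the threshold, research speed jumps by a fixed multiplier.

THRESHOLD = 35.0   # hypothetical capability ("IQ") at which the AI becomes useful
BASE_SPEED = 1.0   # capability gained per day below the threshold
BOOST = 5.0        # multiplier on research speed above the threshold

def run(days, head_start=0.0):
    """Simulate one project's AI capability over a number of days."""
    capability = head_start
    for _ in range(days):
        speed = BASE_SPEED * (BOOST if capability >= THRESHOLD else 1.0)
        capability += speed
    return capability

leader = run(60, head_start=7 * BASE_SPEED)  # the project that is 7 days ahead
follower = run(60)

print(leader, follower, leader - follower)
# With these numbers: 195.0 160.0 35.0 -- the 7-point head start has been
# amplified into a 35-point gap once both projects are past the threshold.
</syntaxhighlight>

The qualitative point is just that crossing the threshold converts a small schedule lead into a capability lead multiplied by the post-threshold speedup.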
 
==Does Eliezer Yudkowsky believe in a missing gear?==

I think Eliezer's writings are somewhat ambiguous. In some places he seems to clearly be saying that he expects intelligence to have a secret sauce, which would make a missing gear dynamic unnecessary. But in other places, he seems to expect instead a missing gear type dynamic.
* In IEM [[Eliezer]] writes "If the nearest competitor was previously only seven days behind, these seven days have now been amplified into a technological gulf enabling the leading AI to shut down, sandbox, or restrict the growth of any competitors it wishes to fetter."<ref>https://intelligence.org/files/IEM.pdf#page=71</ref> The idea that a seven-day lead can result in a local foom makes me think Eliezer does not require the final missing gear to be a huge conceptual breakthrough.
* "But this conversation did get me thinking about the topic of culturally transmitted software that contributes to human general intelligence. That software can be an *important* gear even if it's an algorithmically shallow part of the overall machinery. Removing a few simple gears that are 2% of a machine's mass can reduce the machine's performance by way more than 2%. Feral children would be the case in point." "But as necessary as it may be to avoid feral children, this kind of shallow soft-software doesn't strike me as something that takes a long time to redevelop, compared to hard-software like the secrets of computational neuroscience." [https://www.facebook.com/yudkowsky/posts/10155616782514228] -- so in places like these, it makes it sound like Eliezer ''does'' expect a huge conceptual breakthrough near the end (right before AGI).
+
* "But this conversation did get me thinking about the topic of culturally transmitted software that contributes to human general intelligence. That software can be an *important* gear even if it's an algorithmically shallow part of the overall machinery. Removing a few simple gears that are 2% of a machine's mass can reduce the machine's performance by way more than 2%. Feral children would be the case in point." "But as necessary as it may be to avoid feral children, this kind of shallow soft-software doesn't strike me as something that takes a long time to redevelop, compared to hard-software like the secrets of computational neuroscience." [https://www.facebook.com/yudkowsky/posts/10155616782514228] -- In places like these, where he uses phrase like "hard-software like the secrets of computational neuroscience", it sounds like Eliezer ''does'' expect a huge conceptual breakthrough near the end (right before AGI).
 
* "Intelligence is mostly about architecture, or “knowledge” along the lines of knowing to look for causal structure (Bayes-net type stuff) in the environment; this kind of knowledge will usually be expressed procedurally as well as declaratively. Architecture is mostly about deep insights. This point has not yet been addressed (much) on Overcoming Bias, but Bayes nets can be considered as an archetypal example of “architecture” and “deep insight”. Also, ask yourself how lawful intelligence seemed to you before you started reading this blog, how lawful it seems to you now, then extrapolate outward from that." [https://www.greaterwrong.com/posts/z3kYdw54htktqt9Jb/what-i-think-if-not-why]
 
* "Intelligence is mostly about architecture, or “knowledge” along the lines of knowing to look for causal structure (Bayes-net type stuff) in the environment; this kind of knowledge will usually be expressed procedurally as well as declaratively. Architecture is mostly about deep insights. This point has not yet been addressed (much) on Overcoming Bias, but Bayes nets can be considered as an archetypal example of “architecture” and “deep insight”. Also, ask yourself how lawful intelligence seemed to you before you started reading this blog, how lawful it seems to you now, then extrapolate outward from that." [https://www.greaterwrong.com/posts/z3kYdw54htktqt9Jb/what-i-think-if-not-why]
 
* I think https://www.greaterwrong.com/posts/QrhAeKBkm2WsdRYao/searching-for-bayes-structure is also important for understanding Eliezer's view about what progress in AI looks like.
 
* I think https://www.greaterwrong.com/posts/QrhAeKBkm2WsdRYao/searching-for-bayes-structure is also important for understanding Eliezer's view about what progress in AI looks like.
 +
* Posts like https://www.facebook.com/yudkowsky/posts/10153914357214228 where he uses [[AlphaGo]] as evidence of discontinuities also make it seem like he expects/allows a discontinuity to happen without a huge breakthrough. (actually, maybe Eliezer would call AlphaGo a big architectural insight/breakthrough.)
* "in an evolutionary trajectory, it can't ''literally'' be a "missing gear", the sort of discontinuity that follows from removing a gear that an otherwise functioning machine was built around.  So if you suppose that a final set of changes was enough to produce a sudden huge leap in effective intelligence, it does demand the question of what those changes were.  Something to do with reflection - the brain modeling or controlling itself - would be one obvious candidate.  Or perhaps a change in motivations (more curious individuals, using the brainpower they have in different directions) in which case you ''wouldn't'' expect that discontinuity to appear in the AI's development, but you would expect it to be more effective at earlier stages than humanity's evolutionary history would suggest...  But you could have whole journal issues about that one question, so I'm just going to leave it at that."<ref>https://lw2.issarice.com/posts/tjH8XPxAnr6JRbh7k/hard-takeoff</ref>
* "Later on, there's an exciting result in a more interesting algorithm that operates on a more general level (I'm not being very specific here, for the same reason I don't talk about my ideas for building really great bioweapons)." [https://www.facebook.com/yudkowsky/posts/10154018209759228]
  
 
==Evidence==

What kinds of evidence would shift our beliefs to one side of the disagreement or the other? This section lists all the things that have been given as reasons for believing or not believing in a "missing gear" type dynamic.
* Variation in scientific ability among humans suggests minor tweaks can lead to big improvements in ability: [[Wei Dai]]: "The biological von Neumann's brain must have been architecturally very similar to a typical university professor's. Nor could it have contained independent improvements to lots of different modules. Given this, I speculate that obtaining the analogous improvement in AI intelligence may only require a few design tweaks, which a relatively small project could find first by being luckier or a bit more talented than everyone else." [http://www.overcomingbias.com/2014/07/30855.html#comment-1502991300]
** possibly relevant: https://www.gwern.net/Differences
* Variation in cognitive ability among chimps: I think this is similar to variation among humans? The fact that [[Kanzi]] was much smarter than other chimps<ref>https://www.lesswrong.com/posts/YicoiQurNBxSp7a65/is-clickbait-destroying-our-general-intelligence?commentId=Cva4XBXsPcwjyFLAa</ref> suggests some kind of fine-tuning or "shallow" software can have some large effects on capability.
* Comparing scientific ability in chimps vs humans suggests that some sort of "missing gear" was added which made human intelligence much more general. (This can also suggest a [[Secret sauce for intelligence#Evidence|secret sauce]], if the final piece was some big breakthrough.)
  
 
==External links==

==See also==
* [[Missing gear vs secret sauce]]
 
==notes==
* Does AI Impacts's [https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/#Payoff_thresholds payoff thresholds] section assume that the threshold can be determined in advance? (rather than just that there ''is'' such a threshold, but we don't know its location.)
  
 
==References==
 
<references/>
