Hardware argument for AI timelines

In the context of AI timelines, the hardware argument is a common argument structure for estimating when AGI will be created: estimate when hardware with computing power comparable to the human brain will become available, and expect AGI to arrive around that time.

see https://intelligence.org/files/SoftwareLimited.pdf and http://www.aleph.se/andart/archives/2010/10/why_early_singularities_are_softer.html and https://aiimpacts.org/how-ai-timelines-are-estimated/

"I think one intuition that some people have is if in some sense computing power is the main thing that drives AI progress, then at some point there’ll be some level of computing power such that when we have that level of computing power, we’ll just have AI systems that can at least, in aggregate, do all the stuff that people can do. If you’re trying to estimate when that point will be, maybe one thing you should do is make some sort of estimate of how much computing power the human brain uses and then notice the fact that the amount of computing power we use to train the ML systems isn’t that different and think, “Well, maybe if we have the amount of computing power that’s not much larger than what we have now, maybe that would be sufficient to train AI systems to do all the stuff that people can do”." https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/

Something I am confused about: we might not have human-level compute, but we should already have, say, chimp-level compute, dog-level compute, mouse-level compute, or ant-level compute. Do we have "chimp-level AGI", "dog-level AGI", "mouse-level AGI", or "ant-level AGI"? It seems like we should be able to test how good the hardware argument is by looking at these lesser compute levels and the corresponding lesser general intelligences. For example, suppose we got dog-level compute in the year 1990, and it's been 30 years but we still don't have dog-level AGI. That seems like evidence against the hardware argument (i.e. having access to a comparable-to-nature level of compute did not help in getting a comparable-to-nature level of general intelligence). I haven't actually looked at the history of compute prices and levels of general intelligence, so I can't say what's actually going on, but I find it somewhat odd that I don't remember seeing any discussion of this.

Related to the above, here is a method of estimating AI timelines: look at the lag between levels of compute and levels of general intelligence. If it takes, on average, 40 years from the time when X-level compute becomes cheap to the time when X-level AGI is created, then we can estimate when human-level AGI will arrive by estimating when human-level compute becomes cheap and adding on this lag. If the lag follows a non-constant pattern (e.g. maybe we get better at doing AI research, so the gap gets smaller the smarter the AI), we can handle that case too, by extrapolating the pattern instead of assuming a constant lag.
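A minimal sketch of this lag-based estimate, in Python, is below. Every year in it is a made-up placeholder (I have not looked up when any compute level actually became cheap, or when any capability level was actually reached); the code only shows the structure of the calculation.

 # Lag-based timeline estimate, using entirely made-up placeholder years.
 # Each entry: (intelligence level, year X-level compute became cheap,
 #              year X-level AI was arguably demonstrated).
 observations = [
     ("insect", 1975, 1998),  # placeholder years, not researched claims
     ("mouse",  1995, 2030),  # placeholder years, not researched claims
 ]
 
 # Lag between cheap X-level compute and X-level AI for each observation.
 lags = [ai_year - compute_year for _, compute_year, ai_year in observations]
 
 # Simplest version: assume the lag is roughly constant and average it.
 average_lag = sum(lags) / len(lags)
 
 human_level_compute_year = 2040  # placeholder estimate
 estimate = human_level_compute_year + average_lag
 print(f"average lag: {average_lag:.0f} years")
 print(f"estimated human-level AGI: {estimate:.0f}")
 
 # If the lag seems to shrink (or grow) over time, fit a trend to the lags
 # (e.g. lag as a linear function of compute_year) and extrapolate it to
 # human_level_compute_year instead of using the average.

The interesting empirical work, of course, is filling in the observations list with defensible years, which is exactly the history of compute prices and capability levels that the paragraph above notes has not been looked at.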

"A few years ago, I pretty frequently encountered the claim that recently developed AI systems exhibited roughly “insect-level intelligence.” This claim was typically used to support an argument for short timelines, since the claim was also made that we now had roughly insect-level compute. If insect-level intelligence has arrived around the same time as insect-level compute, then, it seems to follow, we shouldn’t be at all surprised if we get ‘human-level intelligence’ at roughly the point where we get human-level compute. And human-level compute might be achieved pretty soon. [...] Second, we know that there are previous of examples of smart people looking at AI behaviour and forming the impression that it suggests “insect-level intelligence.” For example, in Nick Bostrom’s paper “How Long Before Superintelligence?” (1998) he suggested that “approximately insect-level intelligence” was achieved sometime in the 70s, as a result of insect-level computing power being achieved in the 70s. In Moravec’s book Mind Children (1990), he also suggested that both insect-level intelligence and insect-level compute had both recently been achieved. Rodney Brooks also had this whole research program, in the 90s, that was based around going from “insect-level intelligence” to “human-level intelligence.”" https://forum.effectivealtruism.org/posts/wYpARcC4WqMsDEmYR/taboo-outside-view?commentId=iMjjLSqFr9eL5EiZF

https://www.alignmentforum.org/posts/yW3Tct2iyBMzYhTw7/how-does-bee-learning-compare-with-machine-learning