Hardware argument for AI timelines

In the context of AI timelines, the hardware argument is a common argument structure for estimating when AGI will be created.

See:

* https://intelligence.org/files/SoftwareLimited.pdf
* http://www.aleph.se/andart/archives/2010/10/why_early_singularities_are_softer.html
* https://aiimpacts.org/how-ai-timelines-are-estimated/

"I think one intuition that some people have is if in some sense computing power is the main thing that drives AI progress, then at some point there’ll be some level of computing power such that when we have that level of computing power, we’ll just have AI systems that can at least, in aggregate, do all the stuff that people can do. If you’re trying to estimate when that point will be, maybe one thing you should do is make some sort of estimate of how much computing power the human brain uses and then notice the fact that the amount of computing power we use to train the ML systems isn’t that different and think, “Well, maybe if we have the amount of computing power that’s not much larger than what we have now, maybe that would be sufficient to train AI systems to do all the stuff that people can do”." https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/

Something I am confused about: we might not have human-level compute, but we should have, say, chimp-level compute, dog-level compute, mouse-level compute, or ant-level compute. Do we have "chimp-level AGI", "dog-level AGI", "mouse-level AGI", or "ant-level AGI"? It seems like we should be able to test how good the hardware argument is by focusing on these lesser compute levels and the corresponding lesser general intelligences. For example, suppose we got dog-level compute in the year 1990, and it's been 30 years but we still don't have dog-level AGI. That seems like evidence against the hardware argument (i.e. having access to a comparable-to-nature level of compute did not lead to a comparable-to-nature level of general intelligence). I haven't actually looked at the history of compute prices and the levels of general intelligence achieved, so I can't say what's actually going on, but I find it somewhat odd that I don't remember seeing any discussion of this.
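
A minimal sketch of what this check could look like, using entirely made-up milestone years and judgments rather than real history:

```python
# Hypothetical check of the hardware argument at lesser compute levels.
# Every year and every "do we have X-level AGI?" judgment below is a
# made-up placeholder; the point is only the shape of the check.

CURRENT_YEAR = 2020

# level -> (year X-level compute became cheap, do we have X-level AGI today?)
milestones = {
    "ant":   (1975, False),
    "mouse": (1985, False),
    "dog":   (1990, False),
    "chimp": (2005, False),
}

for level, (compute_year, have_agi) in milestones.items():
    years_since = CURRENT_YEAR - compute_year
    if not have_agi and years_since >= 30:
        print(f"{level}-level compute has been cheap for {years_since} years with no "
              f"{level}-level AGI: evidence against the hardware argument")
    elif have_agi:
        print(f"{level}-level AGI arrived {years_since} years after {level}-level "
              f"compute became cheap: evidence for a lagged version of the argument")
```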

Related to the above, here's a method of estimating AI timelines: look at the lag between the level of compute and the level of general intelligence. If on average it takes 40 years from the time when X-level compute becomes cheap to the time when X-level AGI is created, then we can estimate when human-level AGI arrives by estimating when human-level compute becomes cheap and adding on this lag. If the lag follows a non-constant pattern (e.g. maybe we get better at doing AI research, so the gap gets smaller the smarter the AI), we can deal with that case too by extrapolating the pattern.
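
A minimal sketch of this lag-based estimate, again using made-up placeholder data; the compute-milestone years, the hypothetical AGI years, and the year human-level compute becomes cheap are all assumptions for illustration.

```python
# Sketch of the lag-based estimate described above, under placeholder data.

# Year X-level compute became cheap, and year X-level AGI was (hypothetically) created,
# for a few lesser levels (e.g. ant-, mouse-, dog-level). All values are made up.
compute_cheap_year = [1975, 1985, 1990]
agi_created_year   = [2015, 2022, 2026]

lags = [agi - compute for compute, agi in zip(compute_cheap_year, agi_created_year)]

# Constant-lag version: just average the observed lags.
avg_lag = sum(lags) / len(lags)

# Non-constant version: fit a linear trend in the lag over time and extrapolate it.
n = len(lags)
mean_x = sum(compute_cheap_year) / n
mean_y = sum(lags) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(compute_cheap_year, lags))
    / sum((x - mean_x) ** 2 for x in compute_cheap_year)
)
intercept = mean_y - slope * mean_x

HUMAN_COMPUTE_CHEAP_YEAR = 2035   # assumed year human-level compute becomes cheap

estimate_constant = HUMAN_COMPUTE_CHEAP_YEAR + avg_lag
estimate_trend = HUMAN_COMPUTE_CHEAP_YEAR + (slope * HUMAN_COMPUTE_CHEAP_YEAR + intercept)

print(f"constant-lag estimate of human-level AGI: ~{estimate_constant:.0f}")
print(f"trend-extrapolated estimate of human-level AGI: ~{estimate_trend:.0f}")
```

With real data one would want more than a handful of observations and a better-motivated extrapolation than a straight line in the lag, but the structure of the estimate is the same.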