Hardware argument for AI timelines

[[Category:AI timelines arguments]]
[[Category:AI safety]]

The '''hardware argument''' is a common argument structure for estimating when AGI will be created: it bases AI timelines on when human-level compute will become cheaply available.

See:

* https://intelligence.org/files/SoftwareLimited.pdf
* http://www.aleph.se/andart/archives/2010/10/why_early_singularities_are_softer.html
* https://aiimpacts.org/how-ai-timelines-are-estimated/

Something I am confused about: we might not have human-level compute yet, but we should already have, say, chimp-level, dog-level, mouse-level, or ant-level compute. Do we correspondingly have "chimp-level AGI", "dog-level AGI", "mouse-level AGI", or "ant-level AGI"? It seems like we should be able to test how good the hardware argument is by looking at these lesser compute levels and the corresponding lesser general intelligences. For example, suppose dog-level compute became cheap in 1990, and 30 years later we still don't have dog-level AGI. That seems like evidence against the hardware argument (i.e. having access to a comparable-to-nature level of compute did not help in getting a comparable-to-nature level of general intelligence). I haven't actually looked at the history of compute prices and levels of general intelligence, so I can't say what's actually going on, but I find it somewhat odd that I don't remember seeing any discussion of this.
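
As a rough sketch of what this check would look like, here is a minimal Python example that compares the year each compute level became cheap against the year the corresponding level of AGI, if any, was achieved. Every milestone year in it is a made-up placeholder, since, as noted above, the actual history of compute prices and AI capabilities has not been checked here.

<syntaxhighlight lang="python">
# Hypothetical milestone years, purely for illustration -- NOT real estimates.
# The point is the shape of the check, not the numbers.
compute_became_cheap = {"ant": 1975, "mouse": 1985, "dog": 1990, "chimp": 2005}
agi_achieved = {"ant": 1995}  # levels with no AGI yet are simply absent

CURRENT_YEAR = 2020

for level, compute_year in compute_became_cheap.items():
    if level in agi_achieved:
        lag = agi_achieved[level] - compute_year
        print(f"{level}-level: compute {compute_year}, AGI {agi_achieved[level]}, lag {lag} years")
    else:
        # A long wait with no AGI at this level would be evidence against
        # the hardware argument: compute alone did not suffice.
        print(f"{level}-level: compute {compute_year}, no AGI after {CURRENT_YEAR - compute_year}+ years")
</syntaxhighlight>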

Related to the above, here's a method of estimating AI timelines: look at the lag between when a given level of compute becomes cheap and when the corresponding level of general intelligence is achieved. If on average it takes 40 years from the time X-level compute becomes cheap to the time X-level AGI is created, then we can estimate when human-level AGI will arrive by estimating when human-level compute becomes cheap and adding on this lag. If the lag follows a non-constant pattern (e.g. maybe we get better at doing AI research, so the gap shrinks the smarter the AI is), we can handle that case too, by extrapolating the pattern.
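
To make the extrapolation concrete, here is a minimal sketch in Python. It fits a linear trend to the observed lags and applies that trend at the human level; the milestone years, the figure for when human-level compute becomes cheap, and the choice of a linear fit are all assumptions made purely for illustration.

<syntaxhighlight lang="python">
# Minimal sketch: extrapolate the compute-to-AGI lag to the human level.
# All milestone years below are hypothetical placeholders, not real data.

# Pairs of (year X-level compute became cheap, year X-level AGI was created).
observed = [
    (1975, 2000),  # e.g. ant level
    (1985, 2015),  # e.g. mouse level
    (1990, 2025),  # e.g. dog level
]

human_compute_cheap = 2035  # assumed estimate; replace with a real forecast

# Fit lag as a linear function of the compute-arrival year using ordinary
# least squares; a constant average lag is the slope-zero special case.
xs = [c for c, _ in observed]
lags = [a - c for c, a in observed]
n = len(observed)
mean_x = sum(xs) / n
mean_lag = sum(lags) / n
slope = sum((x - mean_x) * (l - mean_lag) for x, l in zip(xs, lags)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_lag - slope * mean_x

predicted_lag = intercept + slope * human_compute_cheap
print(f"Predicted lag at the human level: {predicted_lag:.1f} years")
print(f"Estimated arrival of human-level AGI: {human_compute_cheap + predicted_lag:.0f}")
</syntaxhighlight>

With more milestone levels one could swap in a better-motivated functional form for how the gap changes; the linear fit is just the simplest way to capture a shrinking (or growing) lag.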