Laplace's rule of succession argument for AI timelines

The Laplace's rule of succession argument for AI timelines uses Laplace's rule of succession to estimate when humans will create AGI. The estimate relies only on the number of years humans have spent trying to create AGI (about 60 years) and the fact that humans still haven't created it (i.e., in the formalism of Laplace's rule of succession, each trial so far has been a failure).
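
To make the arithmetic concrete, here is a minimal sketch (treating each calendar year as one trial and using the 60-year figure above; both are assumptions of this page, and the function names below are just for illustration). With 60 failures and no successes, Laplace's rule gives a probability of (0 + 1)/(60 + 2) ≈ 1.6% that AGI is created in the next year, a probability of k/(60 + k + 1) that it is created within the next k years, and therefore a median arrival time of about 61 years from now.

 # Minimal sketch of the Laplace's-rule estimate for AGI timelines.
 # Assumptions (from the paragraph above): one trial per calendar year,
 # 60 years of failed trials so far, and no successes.
 
 def p_next_year(failures, successes=0):
     """Laplace's rule: P(success on the next trial) = (s + 1) / (n + 2)."""
     n = failures + successes
     return (successes + 1) / (n + 2)
 
 def p_within_k_years(k, failures):
     """With n failures and no successes, integrating over the uniform prior on the
     per-year success probability gives P(no success in the next k years) = (n + 1) / (n + k + 1),
     so P(at least one success within k years) = 1 - (n + 1) / (n + k + 1)."""
     n = failures
     return 1 - (n + 1) / (n + k + 1)
 
 years_of_effort = 60  # rough figure used on this page
 
 print(p_next_year(years_of_effort))           # ~0.016, i.e. about 1.6% for the next year
 print(p_within_k_years(30, years_of_effort))  # ~0.33 within 30 years
 print(p_within_k_years(61, years_of_effort))  # 0.5, so the median is ~61 years out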

See https://eaforum.issarice.com/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety#AI_timelines for an example of this argument in use.

Tom Davidson argues against using Laplace's rule in his semi-informative priors report.[1] I don't quite agree with Davidson's reasoning here. If we're trying to estimate how long a building will stay standing, it seems like we should use evidence from all the other buildings we know about. If we had never seen a building before, then the naive reasoning he uses would be wrong but still reasonable (in the sense that, with so little data, every other method would get it wrong as well). There is more discussion in that section of the report.
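
As a toy illustration of the reference-class point (the numbers here are purely hypothetical and not taken from Davidson's report): judged only by its own 50-year track record via Laplace's rule, a building gets roughly a 1/52 ≈ 2% chance of collapsing in the next year, whereas the evidence from all the other buildings we have observed would typically support a much lower figure.

 # Toy comparison (hypothetical numbers): Laplace's rule on a single building's
 # own track record vs. a reference-class estimate drawn from other buildings.
 
 def laplace_p_next_year(failures, successes=0):
     return (successes + 1) / (failures + successes + 2)
 
 building_age_years = 50        # hypothetical building that has stood for 50 years
 p_laplace = laplace_p_next_year(building_age_years)   # 1/52 ~ 0.019 per year
 p_reference_class = 1e-4       # hypothetical annual collapse rate estimated from other buildings
 
 print(p_laplace)            # ~0.019: the "never seen another building" estimate
 print(p_reference_class)    # the estimate that uses the rest of our evidence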

References

  1. Tom Davidson's report on semi-informative priors, Open Philanthropy: https://www.openphilanthropy.org/blog/report-semi-informative-priors (search for "I’m skeptical about simple priors." and "I’m suspicious of Laplace’s rule, an example of an uninformative prior.")