Laplace's rule of succession argument for AI timelines

The Laplace's rule of succession argument for AI timelines uses Laplace's rule of succession to estimate when humans will create AGI. The estimate relies only on the number of years humans have spent trying to create AGI (about 60 years) and the fact that humans still haven't succeeded (i.e. in the formalism of Laplace's rule of succession, each year so far has been a failure).
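
Concretely: with a uniform prior over the per-year chance of success, Laplace's rule says that after n failed years the probability of success in the next year is 1/(n+2), and the probability of no success in the next N years is (n+1)/(n+N+1). Here is a minimal sketch in Python (the 60-year figure is the one used above; the 50-year horizon is an arbitrary illustration):

 # Laplace's rule of succession with zero successes in n trials:
 # the posterior over the per-year success probability is Beta(1, n + 1).
 
 def p_agi_next_year(n_failures):
     """P(success on the next trial) = (s + 1) / (n + 2), with s = 0 successes."""
     return 1 / (n_failures + 2)
 
 def p_agi_within(n_failures, horizon):
     """P(at least one success in the next `horizon` trials)
     = 1 - (n + 1) / (n + horizon + 1)."""
     return 1 - (n_failures + 1) / (n_failures + horizon + 1)
 
 n = 60  # years of unsuccessful AGI effort, per the article
 print(p_agi_next_year(n))   # ~0.016, i.e. about a 1 in 62 chance next year
 print(p_agi_within(n, 50))  # ~0.45 chance of AGI within the next 50 years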

See https://eaforum.issarice.com/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety#AI_timelines for an example of this argument in use.

Tom Davidson argues against using Laplace's rule in his semi-informative priors report.[1] I don't quite agree with Davidson's reasoning here. If we're trying to estimate how long a building will stay standing, it seems like we should use evidence from all the other buildings we know about. If we had never seen a building before, then the naive Laplace-style reasoning in his example would give the wrong answer, but it would still be reasonable (in the sense that, with so little data, any other method would do poorly as well).
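
For a sense of what this objection looks like numerically, here is a hypothetical version of the building example under Laplace's rule (the 100-year figure is an arbitrary illustration, not from Davidson's report):

 # Hypothetical building example under Laplace's rule (illustration only).
 def p_still_standing(n_years_standing, horizon):
     """P(no collapse in the next `horizon` years, given `n_years_standing`
     collapse-free years) = (n + 1) / (n + horizon + 1)."""
     return (n_years_standing + 1) / (n_years_standing + horizon + 1)
 
 print(p_still_standing(100, 1))    # ~0.990: about a 1% chance of collapse next year
 print(p_still_standing(100, 100))  # ~0.502: about a coin flip over the next century

Data about other buildings would of course give much better answers; the point above is that with no such data, numbers like these are not crazy.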

References

  1. Tom Davidson, "Report on Semi-informative Priors", Open Philanthropy. https://www.openphilanthropy.org/blog/report-semi-informative-priors (search for "I’m skeptical about simple priors." and "I’m suspicious of Laplace’s rule, an example of an uninformative prior.")