Laplace's rule of succession argument for AI timelines
The Laplace's rule of succession argument for AI timelines uses Laplace's rule of succession to estimate when humans will create AGI. The estimate relies only on the number of years humans have spent trying to create AGI (about 60 years) and the fact that humans still haven't created it (i.e., in the formalism of Laplace's rule of succession, every outcome observed so far has been a failure).

See https://eaforum.issarice.com/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety#AI_timelines
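To make the arithmetic concrete, here is a minimal sketch in Python of what the rule implies. It assumes, as the argument above does, that each year of AI research counts as one trial and that roughly 60 such trials have all been failures; the exact year count and the choice of "one year = one trial" are judgment calls, not part of the rule itself, and the function names are just for illustration.

# Laplace's rule of succession applied to AGI timelines: a minimal sketch.
# Assumption: each year of AI research is one trial, and all ~60 trials
# so far have been failures.

def p_success_next_trial(failures: int) -> float:
    """Laplace's rule: with s successes in n trials, P(success on the next
    trial) = (s + 1) / (n + 2). Here s = 0 and n = failures."""
    return 1 / (failures + 2)

def p_success_within(failures: int, k: int) -> float:
    """Probability of at least one success in the next k trials.
    Under a uniform prior on the per-trial success probability, the chance
    that the next k trials all fail is (n + 1) / (n + k + 1)."""
    return 1 - (failures + 1) / (failures + k + 1)

if __name__ == "__main__":
    n = 60  # assumed years of trying so far, all failures
    print(f"P(AGI next year)       ~ {p_success_next_trial(n):.3f}")  # about 1/62
    print(f"P(AGI within 10 years) ~ {p_success_within(n, 10):.3f}")
    print(f"P(AGI within 61 years) ~ {p_success_within(n, 61):.3f}")  # 0.5, the median

The median of about n + 1 further years is a generic feature of the rule: after n failures and no successes, it puts even odds on a success occurring within the next n + 1 trials.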
Tom Davidson argues against using Laplace's rule in his semi-informative priors report.[1] I don't quite agree with Davidson's reasoning here. If we're trying to estimate how long a building will stay standing, it seems like we should use evidence from all the other buildings we know about. But if we had never seen a building before, then the naive reasoning he uses would be wrong yet still reasonable (in the sense that, with so little data, any other method would get it wrong as well). There is more in this section of the report: https://www.openphilanthropy.org/blog/report-semi-informative-priors#LinkingTheSunrise
References
1. https://www.openphilanthropy.org/blog/report-semi-informative-priors (search for "I’m skeptical about simple priors." and "I’m suspicious of Laplace’s rule, an example of an uninformative prior.")