Laplace's rule of succession argument for AI timelines
see https://eaforum.issarice.com/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety#AI_timelines
[[Category:AI timelines arguments]]
The Laplace's rule of succession argument for AI timelines uses Laplace's rule of succession to estimate when humans will create AGI. The estimate relies on only two inputs: the number of years humans have spent trying to create AGI (about 60 years) and the fact that they have not yet succeeded (i.e., in the formalism of Laplace's rule of succession, every trial so far has been a failure). Treating each year as an independent trial, the rule assigns probability 1/(n + 2) to success on the next trial after n consecutive failures, so with n = 60 the probability that AGI is created in the coming year is 1/62, or about 1.6%.
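The following is a minimal sketch (not from the original article) of the arithmetic behind the argument, assuming each calendar year counts as one trial and taking the roughly 60 failure-years stated above; the function names are illustrative:

<syntaxhighlight lang="python">
from fractions import Fraction

def p_next_year(n_failures: int) -> Fraction:
    """Laplace's rule of succession with zero successes so far:
    P(success on the next trial) = (0 + 1) / (n + 2)."""
    return Fraction(1, n_failures + 2)

def p_within(k: int, n_failures: int) -> Fraction:
    """P(at least one success in the next k trials). The per-trial
    failure probabilities telescope, giving
    P(no success in k trials) = (n + 1) / (n + k + 1)."""
    return 1 - Fraction(n_failures + 1, n_failures + k + 1)

n = 60  # years of failed attempts at AGI, per the article

print(float(p_next_year(n)))   # ~0.016 (1/62): chance of AGI next year
print(float(p_within(10, n)))  # ~0.141 (10/71): chance within 10 years
print(float(p_within(40, n)))  # ~0.396 (40/101): chance within 40 years

# Median estimate: smallest k with P(success within k years) >= 1/2.
print(next(k for k in range(1, 1000) if p_within(k, n) >= Fraction(1, 2)))
</syntaxhighlight>

Under these assumptions the probability of success within k more years is k/(n + k + 1), so the median estimate works out to k = n + 1 = 61 more years.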