Laplace's rule of succession argument for AI timelines
The '''Laplace's rule of succession argument for AI timelines''' uses [[wikipedia:Rule of succession|Laplace's rule of succession]] to estimate when humans will create AGI. The estimate relies only on the number of years humans have spent trying to create an AGI (about 60 years) and the fact that humans still haven't created an AGI (i.e. in the formalism of Laplace's rule of succession, each outcome so far has been a failure).
See https://eaforum.issarice.com/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety#AI_timelines
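Concretely, if each calendar year is treated as one trial with about ''n'' = 60 observed failures and no successes, the rule gives probability (0 + 1)/(''n'' + 2) = 1/62 ≈ 1.6% of AGI arriving next year, and (updating after each further failure) probability ''k''/(''n'' + ''k'' + 1) of AGI within the next ''k'' years. The following is a minimal sketch of this calculation; the per-year-trial framing follows the article, while the function name and defaults are illustrative assumptions.

<syntaxhighlight lang="python">
from fractions import Fraction

def p_agi_within(k_years: int, n_failure_years: int = 60) -> Fraction:
    """Probability of at least one success in the next k trials, given
    n observed failures and no successes, under Laplace's rule of
    succession (uniform prior, updating after each failed trial).

    P(no success in next k trials)
        = prod_{i=0}^{k-1} (n + i + 1) / (n + i + 2)
        = (n + 1) / (n + k + 1)        # the product telescopes
    """
    n = n_failure_years
    return 1 - Fraction(n + 1, n + k_years + 1)

# With ~60 failure-years so far (the article's assumption):
print(float(p_agi_within(1)))   # next year:      1/62   ~ 0.016
print(float(p_agi_within(61)))  # within 61 years: 61/122 = 0.5
</syntaxhighlight>

Under these assumptions, the median timeline is 61 years: after 60 failure-years, the rule puts 50% probability on AGI within the next 61 years.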