Science argument
The science argument is an argument for expecting a small number of breakthroughs to produce AGI, which in turn supports a hard takeoff. The argument observes that science itself was a general "architectural insight" that gave humans much more control over the world, and holds that we should expect something similar with AI: there is some core insight that would give an AI much greater control over the world, rather than such control arising from accumulated incremental progress. [1] (search for "you look at human civilization and there's this core trick called science") See (5) in [2] for Robin Hanson's response.