Comparison of AI takeoff scenarios
Scenario | Significant changes to the world before a critical AI capability threshold is reached? | Intelligence explosion? | Decisive strategic advantage / unipolar outcome? (i.e. power is not distributed) |
---|---|---|---|
Yudkowskian hard takeoff | No | Yes | Yes |
Paul's slow takeoff | Yes | Yes[notes 1] | No |
Daniel Kokotajlo's scenario | Yes | Yes | Yes |
Hansonian slow takeoff | Yes | No?[notes 2] | No |
Eric Drexler's CAIS | Yes | No? Drexler seems to locate recursive self-improvement at the level of the combined ecosystem of AI services, rather than within any single agent-like AI that improves itself. | No? |
See also
Notes
- ↑ Paul: "Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments." [1]
- ↑ In some places Hanson says things like "You may recall that I did not dispute that an AI based economy would grow faster than does our economy today. The issue is the relative rate of growth of one AI system, across a broad range of tasks, relative to the entire rest of the world at that time." [2] This sounds more like Paul's takeoff scenario, so it is not clear how Paul's and Hanson's scenarios differ.