Comparison of AI takeoff scenarios
Scenario | Significant changes to the world before a critical AI capability threshold is reached? | Intelligence explosion? | Decisive strategic advantage? |
---|---|---|---|
Yudkowskian hard takeoff | No | Yes | Yes |
Paul's slow takeoff | Yes | Yes | No |
Daniel Kokotajlo's scenario | Yes | Yes | Yes |
Hansonian slow takeoff | Yes | No | No |
Eric Drexler's CAIS | Yes | No? Drexler seems to suggest that recursive self-improvement happens at the level of the combined ecosystem of AI services, rather than within any single agent-like AI | No? |