Comparison of AI takeoff scenarios
| Scenario | Significant changes to the world prior to critical AI capability threshold being reached? | Intelligence explosion? | Decisive strategic advantage? / Unipolar outcome? |
|---|---|---|---|
| Yudkowskian hard takeoff | No | Yes | Yes |
| Paul's slow takeoff | Yes | Yes | No |
| Daniel Kokotajlo | Yes | Yes | Yes |
| Hansonian slow takeoff | Yes | No?[notes 1] | No |
| Eric Drexler's CAIS | Yes | No? Drexler seems to describe recursive self-improvement happening at the level of all AI services combined, rather than within any single agent-like AI that self-improves. | No? |
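
To make the "Intelligence explosion?" column concrete, here is a toy growth model (purely illustrative; it is not drawn from any of the authors above). Capability C grows as dC/dt = r·C^α: with α > 1 the solution blows up in finite time, a crude stand-in for an intelligence explosion, while α = 1 gives ordinary exponential growth, closer to a slow takeoff.

```python
# Toy model of takeoff dynamics (illustrative only; not from any cited author).
# Capability C grows as dC/dt = r * C**alpha:
#   alpha > 1  -> super-exponential growth that blows up in finite time
#                 (a crude stand-in for an "intelligence explosion")
#   alpha == 1 -> plain exponential growth (closer to a "slow takeoff")

def simulate(alpha, r=0.1, c0=1.0, dt=0.01, t_max=50.0, cap=1e12):
    """Euler-integrate dC/dt = r * C**alpha until t_max or the capability cap."""
    t, c = 0.0, c0
    while t < t_max and c < cap:
        c += r * c**alpha * dt
        t += dt
    return t, c

for alpha in (1.0, 1.5):
    t_end, c_end = simulate(alpha)
    print(f"alpha={alpha}: capability {c_end:.3g} at t={t_end:.1f}")
```

With these parameters the α = 1 run grows only a few hundredfold over the whole window, while the α = 1.5 run hits the cap partway through, which is the qualitative difference the column is pointing at.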
See also

- Will there be significant changes to the world prior to some critical AI capability threshold being reached?
Notes
- ↑ In some places Hanson says things like "You may recall that I did not dispute that an AI based economy would grow faster than does our economy today. The issue is the relative rate of growth of one AI system, across a broad range of tasks, relative to the entire rest of the world at that time." [1] This sounds more like Paul's takeoff scenario; I'm not clear on how Paul's and Hanson's scenarios differ.
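
The quoted passage is about relative growth rates, which suggests a second toy calculation (again illustrative, and not from Hanson or anyone else cited here): if one AI system and the rest of the world both grow exponentially, the time until the leading system commands a given share of total capability depends only on the growth-rate gap and the initial size ratio.

```python
import math

# Toy illustration (not from Hanson or Christiano): a leading AI system vs.
# the rest of the world, both growing exponentially. A decisive strategic
# advantage plausibly requires the leader's share of total capability to
# become large, which depends on the growth-rate *gap*, not absolute rates.

def years_to_share(share, leader0=1.0, world0=1000.0, g_leader=0.5, g_world=0.3):
    """Years until leader0*e^(g_leader*t) is `share` of total capability."""
    # Solve leader/(leader+world) = share for t:
    #   e^((g_leader - g_world)*t) = share/(1-share) * world0/leader0
    gap = g_leader - g_world
    return math.log(share / (1 - share) * world0 / leader0) / gap

for gap in (0.05, 0.2, 0.5):
    t = years_to_share(0.5, g_leader=0.3 + gap, g_world=0.3)
    print(f"growth-rate gap {gap:.2f}/yr -> leader holds 50% after {t:.0f} years")
```

On these (made-up) numbers a small gap takes over a century to yield a unipolar outcome while a large one takes about a decade, which is one way to read the disagreement in the "Decisive strategic advantage?" column.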