Comparison of AI takeoff scenarios

| Scenario | Significant changes to the world prior to the critical AI capability threshold being reached? | Intelligence explosion? | Decisive strategic advantage? / Unipolar outcome? (i.e. not distributed) / Can a single AI project get massively ahead (either by investing far more effort into building AGI or by converting a small lead into a large lead)? |
|---|---|---|---|
| Yudkowskian hard takeoff | No | Yes | Yes |
| Paul's slow takeoff | Yes | Yes[notes 1] | No |
| Daniel Kokotajlo | Yes | Yes | Yes |
| Hansonian slow takeoff | Yes | No?[notes 2] | No |
| Eric Drexler's CAIS | Yes | No? Drexler seems to describe something like recursive self-improvement happening at the level of all AI services combined, rather than any single agent-like AI improving itself. | No? |

Notes

  1. Paul: "Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments." [1]
  2. In some places Hanson says things like "You may recall that I did not dispute that an AI based economy would grow faster than does our economy today. The issue is the relative rate of growth of one AI system, across a broad range of tasks, relative to the entire rest of the world at that time." [2] This sounds more like Paul's takeoff scenario. I'm not clear on how the Paul and Hanson scenarios differ.