Comparison of AI takeoff scenarios

From Issawiki
Revision as of 09:41, 22 February 2020 by Issa (talk | contribs)
| Scenario | Significant changes to the world prior to critical AI capability threshold being reached? | Intelligence explosion? | Decisive strategic advantage? |
|---|---|---|---|
| Yudkowskian hard takeoff | No | Yes | Yes |
| Paul's slow takeoff | Yes | Yes | No |
| Daniel Kokotajlo | Yes | Yes | Yes |
| Hansonian slow takeoff | Yes | No | No |
| Eric Drexler's CAIS | Yes | ? (Drexler seems to say something unusual: the whole ecosystem of AI services recursively improves AI capabilities together, without any single agent-like AI that self-improves) | No? |