Comparison of AI takeoff scenarios

{| class="wikitable"
! Scenario !! Significant changes to the world prior to critical AI capability threshold being reached? !! Intelligence explosion? !! Decisive strategic advantage?
|-
| Yudkowskian hard takeoff || No || Yes || Yes
|-
| Paul's slow takeoff || Yes || Yes || No
|-
| Daniel Kokotajlo || Yes || Yes || Yes
|-
| [[Hansonian]] slow takeoff || Yes || No || No
|-
| [[Eric Drexler]]'s CAIS || Yes || ? (Drexler seems to envision the collection of AI services recursively self-improving as a whole, without any single agent-like AI that self-improves) || No?
|}