Comparison of AI takeoff scenarios

{| class="wikitable"
|-
! Scenario !! Significant changes to the world prior to critical AI capability threshold being reached? !! Intelligence explosion? !! Decisive strategic advantage? / Unipolar outcome?
|-
| [[Yudkowskian]] hard takeoff || No || Yes || Yes
|-
| Paul's slow takeoff || Yes || Yes || No
|-
| Daniel Kokotajlo || Yes || Yes || Yes
|-
| Hansonian slow takeoff || Yes || No || No
|-
| Eric Drexler's CAIS || Yes || No? Drexler seems to describe recursive improvement happening at the level of all AI services combined, rather than within any single agent-like AI that self-improves. || No?
|}