Comparison of AI takeoff scenarios
| Scenario | Significant changes to the world before a critical AI capability threshold is reached? | Intelligence explosion? | Discontinuity in AI development? (i.e. one project suddenly has much more advanced AI than the rest of the world combined) | Decisive strategic advantage (DSA) / unipolar outcome? (i.e. not distributed: can a single AI project get massively ahead, either by investing far more effort into building AGI or by converting a small lead into a large lead?) |
|---|---|---|---|---|
| Yudkowskian hard takeoff | No | Yes | Yes | Yes |
| Paul Christiano's slow takeoff | Yes | Yes[notes 1] | No | No |
| Daniel Kokotajlo | Yes | Yes | No | Yes |
| Hansonian slow takeoff | Yes | No?[notes 2] | No | No |
| Eric Drexler's CAIS | Yes | No? Drexler seems to describe recursive self-improvement happening at the level of all AI services combined, rather than within any single agent-like AI that improves itself. | No | No? |
A DSA without a discontinuity seems like an unstable situation: if a project achieves a DSA without a large lead in AI capabilities, it can probably convert that advantage into a discontinuity soon afterwards anyway. Perhaps the questions should instead be "DSA?" and "If DSA, was it achieved via a discontinuity?"
See also
Notes
- ↑ Paul: "Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments." [1]
- ↑ In some places Hanson says things like "You may recall that I did not dispute that an AI based economy would grow faster than does our economy today. The issue is the relative rate of growth of one AI system, across a broad range of tasks, relative to the entire rest of the world at that time." [https://www.facebook.com/yudkowsky/posts/10155848910529228?comment_id=10155848990064228&reply_comment_id=10155849018834228] This sounds more like Paul's takeoff scenario. I'm not clear on how the Paul and Hanson scenarios differ. Also see [this comment by Paul](https://www.greaterwrong.com/posts/AfGmsjGPXN97kNp57/arguments-about-fast-takeoff#comment-ov4b6S2igwRZxXB8x): "I agree that some people talking about slow takeoff mean something stronger (e.g. “no singularity ever”), but I think that’s an unusual position inside our crowd (and even an unusual position amongst thoughtful ML researchers), and it’s not e.g. Robin's view (who I take as a central example of a slow takeoff proponent)." So who *are* these slow takeoff proponents who don't believe in an intelligence explosion/singularity?