Comparison of AI takeoff scenarios
Scenario | Significant changes to the world before a critical AI capability threshold is reached? | Intelligence explosion? | Discontinuity in AI development? (i.e. one project suddenly has much more advanced AI than the rest of the world combined) | Decisive strategic advantage (DSA) / unipolar (non-distributed) outcome? Can a single AI project get massively ahead, either by investing far more effort into building AGI or by converting a small lead into a large one? |
---|---|---|---|---|
Yudkowskian hard takeoff | No | Yes | Yes | Yes |
Paul's slow takeoff | Yes | Yes[notes 1] | No | No |
Daniel Kokotajlo | Yes | Yes | No | Yes |
Hansonian slow takeoff | Yes | No?[notes 2] | No | No |
Eric Drexler's CAIS | Yes | No? I think Drexler describes recursive improvement happening at the level of all AI services combined, rather than within any single agent-like AI that self-improves. | No | No? |
DSA without discontinuity seems like an unstable situation to think about: if a project achieves DSA without a large lead in AI capabilities, it can probably use that position to achieve a discontinuity soon afterwards anyway. Maybe the questions should instead be "DSA?" and "If DSA, was it achieved via a discontinuity?"
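To make the "small lead converting into a large lead" dynamic concrete, here is a minimal toy model. It is my own sketch, not drawn from any of the cited posts: the growth laws, constants, and function name are all illustrative assumptions. It contrasts a regime where both projects improve at the same proportional rate (the lead stays constant) with a regime where the growth rate itself scales with capability, a crude stand-in for recursive self-improvement (the lead compounds).

```python
# Toy sketch (illustrative assumptions throughout, not from any cited author):
# how a small capability lead between two projects evolves under two
# stylized growth regimes.

def final_lead_ratio(leader=1.05, follower=1.0, k=0.1, dt=0.01,
                     steps=900, recursive=False):
    """Euler-integrate capability growth for two projects and return
    the leader/follower capability ratio at the end of the horizon.

    continuous: dC/dt = k*C    -- plain exponential growth; both projects
        grow at the same proportional rate, so the ratio stays constant.
    recursive:  dC/dt = k*C^2  -- growth rate rises with capability (a
        crude stand-in for recursive self-improvement); the leader pulls
        away, and the ratio keeps growing as the horizon extends.
    """
    exponent = 2 if recursive else 1
    for _ in range(steps):
        leader += dt * k * leader**exponent
        follower += dt * k * follower**exponent
    return leader / follower

print(f"continuous regime lead ratio: {final_lead_ratio():.2f}")                # stays ~1.05
print(f"recursive regime lead ratio:  {final_lead_ratio(recursive=True):.2f}")  # grows past 1.05
```

The point is only the qualitative contrast: under same-rate exponential growth the initial 5% lead is stable, while under capability-dependent growth it compounds, which is one way of seeing why "DSA without discontinuity" looks unstable.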
This comment suggests more columns for the table: https://www.greaterwrong.com/posts/AfGmsjGPXN97kNp57/arguments-about-fast-takeoff#comment-JEkP5AmXmi4dHHpqo
How are Robin's and Paul's views different? Does Robin's takeoff scenario just have extra/unnecessary parts (like massive modularity, or the claim that architecture changes don't produce large advantages)? Or is there an actual difference in predictions?
See also
Notes
- ↑ Paul: "Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments." [1]
- ↑ In some places Hanson says things like "You may recall that I did not dispute that an AI based economy would grow faster than does our economy today. The issue is the relative rate of growth of one AI system, across a broad range of tasks, relative to the entire rest of the world at that time." [2] This sounds more like Paul's takeoff scenario. I'm not clear on how the Paul and Hanson scenarios differ. Also see this comment by Paul: "I agree that some people talking about slow takeoff mean something stronger (e.g. “no singularity ever”), but I think that’s an unusual position inside our crowd (and even an unusual position amongst thoughtful ML researchers), and it’s not e.g. Robin’s view (who I take as a central example of a slow takeoff proponent)." So who are these slow takeoff proponents who don't believe in an intelligence explosion/singularity?