Comparison of AI takeoff scenarios
| Scenario | Significant changes to the world before a critical AI capability threshold is reached? | Intelligence explosion? | Discontinuity in AI development? (i.e. one project suddenly has much more advanced AI than everyone else in the world combined) | Decisive strategic advantage / unipolar outcome? (i.e. not distributed: can a single AI project get massively ahead, either by investing far more effort into building AGI or by converting a small lead into a large one?) |
|---|---|---|---|---|
| Yudkowskian hard takeoff | No | Yes | Yes | Yes |
| Paul's slow takeoff | Yes | Yes[notes 1] | No | No |
| Daniel Kokotajlo (note: this row needs to be split, since Daniel has multiple scenarios in mind) | Yes | Yes | No | Yes |
| Hansonian slow takeoff | Yes | No?[notes 2] | No | No |
| Eric Drexler's CAIS | Yes | No? Drexler seems to describe something like recursive self-improvement happening at the level of all AI services combined, without any single agent-like AI that self-improves. | No | No? |
DSA without discontinuity is a little strange to think about, and seems like an unstable situation: if a project achieves DSA without a large lead in AI capabilities, it can probably achieve a discontinuity soon afterwards anyway. Maybe the questions should be "DSA?" and "If DSA, was it achieved via a discontinuity?"
This comment suggests more columns for the table: https://www.greaterwrong.com/posts/AfGmsjGPXN97kNp57/arguments-about-fast-takeoff#comment-JEkP5AmXmi4dHHpqo
How are Robin's and Paul's views different? Does Robin's takeoff scenario just have extra/unnecessary parts (like massive modularity, or the claim that architecture changes don't produce large advantages)? Or is there an actual difference in predictions?
"I'm inclined to agree that some aspects of Robin's forecasts haven't fared well over the last few years (e.g. human object-level knowledge doesn't look like it's going to be very valuable), but I don't think anything is challenging the basic economics." [3]
There are some other comparison axes for different views on takeoff (wall clock time, GDP trend extrapolation [https://www.lesswrong.com/posts/YgNYA6pj2hPSDQiTE/distinguishing-definitions-of-takeoff?commentId=zyPxKXsX8ztEe6cuh]), but I don't think this is where the real disagreement is.
See also
Notes
1. Paul: "Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines [https://sideways-view.com/2017/10/04/hyperbolic-growth/]. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics [https://intelligence.org/files/IEM.pdf], I don’t disagree with the central thesis or with most of the arguments." [https://sideways-view.com/2018/02/24/takeoff-speeds/]
2. In some places Hanson says things like "You may recall that I did not dispute that an AI based economy would grow faster than does our economy today. The issue is the relative rate of growth of one AI system, across a broad range of tasks, relative to the entire rest of the world at that time." [https://www.facebook.com/yudkowsky/posts/10155848910529228?comment_id=10155848990064228&reply_comment_id=10155849018834228] This sounds more like Paul's takeoff scenario. I'm not clear on how the Paul and Hanson scenarios differ. Also see this comment by Paul [https://www.greaterwrong.com/posts/AfGmsjGPXN97kNp57/arguments-about-fast-takeoff#comment-ov4b6S2igwRZxXB8x]: "I agree that some people talking about slow takeoff mean something stronger (e.g. “no singularity ever”), but I think that’s an unusual position inside our crowd (and even an unusual position amongst thoughtful ML researchers), and it’s not e.g. Robin’s view (who I take as a central example of a slow takeoff proponent)." So who are these slow takeoff proponents who don't believe in an intelligence explosion/singularity?