Will there be significant changes to the world prior to some critical AI capability threshold being reached?

Will there be significant changes to the world prior to some critical AI capability threshold being reached? This is currently one of the questions I am most confused about (in particular, why MIRI/Eliezer hold the view that there won't be significant changes). Paul's position, that before AGI there will be almost-AGI and therefore before FOOM there will be merely rapid change, strikes me as a reasonable default, and I still don't understand the MIRI position well enough to say there is a strong argument against that default.

Eliezer: "And the relative rate of growth between AI capabilities and human capabilities, and the degree to which single investments in things like Tensor Processing Units or ResNet algorithms apply across a broad range of tasks, are both very relevant to that dispute." [1]

Rob Bensinger: 'But I also think it’s likely to be a discrete research target that doesn’t look like “a par-human surgeon, combined with a par-human chemist, combined with a par-human programmer, …” You just get all the capabilities at once, and on the path to hitting that threshold you might not get many useful precursor or spin-off technologies.' [https://www.greaterwrong.com/posts/D3NspiH2nhKA6B2PE/what-evidence-is-alphago-zero-re-agi-complexity#comment-awsEzHzgD5Rv2YGPo]

https://www.facebook.com/yudkowsky/posts/10154597918309228 -- this post seems to imply that Eliezer thinks significant changes to the world before the critical threshold are a real possibility.

See also

* Soft-hard takeoff

Category: AI safety