Will there be significant changes to the world prior to some critical AI capability threshold being reached?

Will there be significant changes to the world prior to some critical AI capability threshold being reached? This is currently one of the questions I am most confused about (in particular, I don't understand why MIRI/Eliezer hold the view that there won't be significant changes). Paul's position, that before AGI there will be almost-AGI, and therefore before FOOM there will be merely rapid change, seems like a reasonable default, and I don't yet understand the MIRI position well enough to say whether there is a strong argument against that default.

Eliezer: "And the relative rate of growth between AI capabilities and human capabilities, and the degree to which single investments in things like Tensor Processing Units or ResNet algorithms apply across a broad range of tasks, are both very relevant to that dispute." [1]

Rob Bensinger: 'But I also think it’s likely to be a discrete research target that doesn’t look like “a par-human surgeon, combined with a par-human chemist, combined with a par-human programmer, …” You just get all the capabilities at once, and on the path to hitting that threshold you might not get many useful precursor or spin-off technologies.' [2]

https://www.facebook.com/yudkowsky/posts/10154597918309228 -- this post seems to imply that Eliezer thinks significant changes prior to the threshold are a real possibility.
