Science argument

The '''science argument''' is an argument for [[Secret sauce for intelligence|expecting a small number of breakthroughs for AGI]], which in turn supports a [[hard takeoff]]. The argument says that science is a general "[[Architecture|architectural]] insight" that gave humans much more control over the world, and that we should expect something similar with AI: there is some core insight that would give an AI much more control over the world, rather than its capabilities coming from a bunch of incremental progress. [https://docs.google.com/document/pub?id=17yLL7B7yRrhV3J9NuiVuac3hNmjeKTVHnqiEa6UQpJk] (search for "you look at human civilization and there's this core trick called science") See (5) in [http://www.overcomingbias.com/2011/07/debating-yudkowsky.html] for [[Robin Hanson]]'s response.
  
 
[[Category:AI safety]]
 