Science argument

In debates about [[AI takeoff]], the '''science argument''' is an argument for [[Secret sauce for intelligence|expecting a small number of breakthroughs for AGI]], which in turn supports a [[hard takeoff]]. The argument states that the scientific method is a general "[[Architecture|architectural]] insight" that suddenly gave humans much more control over the world, and that we should expect something similar with AI: there is some core insight that allows an AI to suddenly gain much more control over the world, rather than acquiring capability through many incremental advances. [https://docs.google.com/document/pub?id=17yLL7B7yRrhV3J9NuiVuac3hNmjeKTVHnqiEa6UQpJk] (search "you look at human civilization and there's this core trick called science") See (5) in [http://www.overcomingbias.com/2011/07/debating-yudkowsky.html] for [[Robin Hanson]]'s response.
  
 
Latest revision as of 07:09, 15 June 2021


==History==

The earliest known instance of the argument is from [[Eliezer Yudkowsky]], during the 2011 Jane Street debate with [[Robin Hanson]].