Sudden emergence

In [[AI takeoff]], '''sudden emergence''' (also called a '''discontinuity ''to'' AGI''') is the hypothesis that AI development will follow a trajectory of mostly inconsequential systems, followed by a sudden jump to extremely capable/[[Transformative AI|transformative]] systems. In other words, sudden emergence posits a discontinuity ''to'' (rather than ''from'') AGI. The term is used in contrast to the [[explosive aftermath]] hypothesis; when someone says "[[hard takeoff]]", it is often unclear which of the two they mean.
  
 
==History==

The term was coined by [[Ben Garfinkel]] in "[[On Classic Arguments for AI Discontinuities]]".<ref>https://docs.google.com/document/d/1lgcBauWyYk774gBwKn8P_h8_wL9vLLiWBr6JMmEd_-I/edit</ref>
 
==See also==
* [[Will there be significant changes to the world prior to some critical AI capability threshold being reached?]]
  
 
==References==
 
<references/>
 
==External links==
* https://ea.greaterwrong.com/posts/7gxtXrMeqw78ZZeY9/ama-or-discuss-my-80k-podcast-episode-ben-garfinkel-fhi/comment/u5wdMuKWr9DwubCrd
  
 
[[Category:AI safety]]
 