Difference between revisions of "Sudden emergence"
Revision as of 23:22, 25 May 2021
In AI takeoff, sudden emergence (also sometimes called a discontinuity to AGI) is the hypothesis that AI development will follow a trajectory of mostly inconsequential systems, followed by a sudden jump to extremely capable/transformative systems. Another way to phrase sudden emergence is as "a discontinuity to (rather than from) AGI". The term is used in contrast to the explosive aftermath hypothesis, as it is often unclear which of the two someone means when saying "hard takeoff".
History
The term was coined by Ben Garfinkel in "On Classic Arguments for AI Discontinuities".[1]