Sudden emergence
In AI takeoff, sudden emergence is the hypothesis that AI development will follow a trajectory of mostly inconsequential systems, followed by a sudden jump to extremely capable/transformative systems. Another way to phrase sudden emergence is as a discontinuity to (rather than from) AGI. The term is used in contrast to the explosive aftermath hypothesis.
History
The term was coined by Ben Garfinkel in "On Classic Arguments for AI Discontinuities".[1]