Scaling hypothesis

The scaling hypothesis is one of several related claims that say, roughly, that once we find a sufficiently "scalable" architecture, we can get to AGI mainly by throwing more computing power and data at it.
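
One way to make the claim more concrete (an illustrative gloss, not something this page itself states) is via empirical neural scaling laws of the kind reported by Kaplan et al. (2020): when a language model is not bottlenecked by the other factors, its test loss falls as a smooth power law in parameter count N and dataset size D,

  L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D},

where N_c, D_c, \alpha_N, \alpha_D are empirically fitted constants. The scaling hypothesis reads such curves optimistically: if loss keeps falling smoothly as parameters, data, and compute grow, then a sufficiently scaled-up version of an existing architecture might reach AGI-level capability without fundamentally new ideas.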

See also

  • Prosaic AI -- how is the scaling hypothesis different from prosaic AI? The scaling hypothesis seems to imply prosaic AI, but prosaic AI could make use of many different algorithms rather than a single scalable architecture?
  • Resource overhang

External links