Scaling hypothesis
Revision as of 03:30, 24 February 2021
The scaling hypothesis is one of several related claims holding, roughly, that once we find some "scalable" architecture, we can get to AGI simply by throwing more computing power and data at the problem.
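The "more compute, lower loss" intuition behind the hypothesis is often summarized as a smooth power-law relationship between compute and loss. As a minimal sketch (the functional form is the commonly assumed power law, and the constants below are hypothetical placeholders, not fitted values from any real experiment):

```python
# Illustrative power-law scaling curve: L(C) = a * C**(-alpha).
# The constants a and alpha are made-up placeholders for illustration,
# not measured values.
def loss(compute, a=1.0, alpha=0.05):
    """Predicted loss under an assumed power-law scaling relation."""
    return a * compute ** (-alpha)

# Under this assumed curve, loss keeps falling smoothly as compute grows,
# with no plateau -- the quantitative core of the scaling intuition.
for c in (1e6, 1e9, 1e12):
    print(f"compute={c:.0e}  loss={loss(c):.3f}")
```

The point of the sketch is only that, under such a curve, there is no compute level past which extra scale stops helping; whether real architectures follow this curve all the way to AGI is exactly what the hypothesis asserts.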
See also
- Prosaic AI -- how does the scaling hypothesis differ from prosaic AI? The scaling hypothesis seems to imply prosaic AI, but a prosaic AI could make use of many different algorithms.
- Resource overhang
External links
- https://www.gwern.net/newsletter/2020/05#scaling-hypothesis
- https://www.greaterwrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang/comment/jbD8siv7GMWxRro43
- https://www.greaterwrong.com/posts/kpK6854ArgwySuv7D/probability-that-other-architectures-will-scale-as-well-as/answer/PEqsLsDswcRNKRhje