Scaling hypothesis
The scaling hypothesis is one of several related claims to the effect that once we find a sufficiently "scalable" architecture, we can reach AGI simply by throwing more computing power and data at it.
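One common way to make the claim more precise (a sketch drawing on the neural scaling-law literature, e.g. Kaplan et al. 2020, which the Gwern newsletter link below discusses; this specific form is not stated on this page) is that test loss falls smoothly as a power law in compute C, dataset size D, and parameter count N:

L(X) ≈ (X_c / X)^(α_X), for X in {C, D, N}

where X_c and α_X are empirically fitted constants. On the scaling hypothesis, these curves continue to hold at much larger scales, so that more compute and data alone carry a scalable architecture most of the way to AGI.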
See also
- Prosaic AI -- how does the scaling hypothesis differ from prosaic AI? The scaling hypothesis seems to imply prosaic AI, but prosaic AI could make use of many different algorithms.
- Resource overhang
External links
- https://www.gwern.net/newsletter/2020/05#scaling-hypothesis
- https://www.greaterwrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang/comment/jbD8siv7GMWxRro43
- https://www.greaterwrong.com/posts/kpK6854ArgwySuv7D/probability-that-other-architectures-will-scale-as-well-as/answer/PEqsLsDswcRNKRhje
- https://www.lesswrong.com/posts/cxuzALcmucCndYv4a/daniel-kokotajlo-s-shortform?commentId=Xy4oiz6GpFg2hPQxn