List of breakthroughs plausibly needed for AGI

From Issawiki
Revision as of 09:25, 9 June 2020

Looking at things like The MIT Encyclopedia of the Cognitive Sciences and Judea Pearl's work on causality and trying to estimate how many insights are required to build an AGI.[1]

"Intelligence is mostly about architecture, or “knowledge” along the lines of knowing to look for causal structure (Bayes-net type stuff) in the environment; this kind of knowledge will usually be expressed procedurally as well as declaratively. Architecture is mostly about deep insights. This point has not yet been addressed (much) on Overcoming Bias, but Bayes nets can be considered as an archetypal example of “architecture” and “deep insight”. Also, ask yourself how lawful intelligence seemed to you before you started reading this blog, how lawful it seems to you now, then extrapolate outward from that." (Eliezer Yudkowsky, "What I Think, If Not Why": https://www.greaterwrong.com/posts/z3kYdw54htktqt9Jb/what-i-think-if-not-why)

see https://www.lesswrong.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress#The_Conceptual_Arguments

this isn't the same thing, but it asks a similar question, namely "how will AGI be different from current ML systems?": http://www.foldl.me/2018/conceptual-issues-ai-safety-paradigmatic-gap/#potential-paradigmatic-changes-in-ai

References

1. https://docs.google.com/document/pub?id=17yLL7B7yRrhV3J9NuiVuac3hNmjeKTVHnqiEa6UQpJk