Secret sauce for intelligence (also known as one algorithm (https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/#One_algorithm), simple core algorithm, lumpy AI progress, intelligibility of intelligence (https://intelligence.org/files/HowIntelligible.pdf), and many other phrases) is the hypothesis that intelligence is "simple" in some sense: perhaps simple in the sense of low Kolmogorov complexity, or perhaps in the sense that only a small number of discrete insights are required to create an AGI, rather than many small, messy insights.
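One way to make the Kolmogorov-complexity reading precise (a minimal sketch of one possible formalization, not something spelled out in the sources cited here; the notation K_U, U, and a is introduced for illustration): fix a universal machine U and let K_U(x) be the length of the shortest program that makes U output x.

% A minimal sketch (an assumption for illustration, not from the cited sources):
% formalizing "intelligence is simple" via Kolmogorov complexity.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% K_U(x): length of the shortest program p that makes a fixed universal
% machine U output x.
\[
  K_U(x) \;=\; \min \bigl\{\, \lvert p \rvert \;:\; U(p) = x \,\bigr\}
\]
% The "simple core algorithm" reading: some program $a$ implementing general
% intelligence has small $K_U(a)$, i.e.\ it could be reached via a few discrete
% insights rather than only via many accumulated domain-specific components.
\end{document}

On this reading, the secret-sauce hypothesis says roughly that some program a implementing general intelligence has small K_U(a); the opposing "many messy insights" view says that no such short program is findable in practice, and capability comes from accumulating many separate components.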
To what extent is this the same debate as something like realism about rationality? It seems that if intelligence/rationality is simple, then there will be a secret sauce. But if it is not simple, it could go either way: maybe there is a final essential "gear" that needs to be added to make things really work, in which case there is a secret sauce, or maybe everything just gradually improves and there is nothing like a "last gear".
https://sideways-view.com/2018/02/24/takeoff-speeds/
https://jacoblagerros.wordpress.com/2018/03/09/brains-and-backprop-a-key-timeline-crux/
https://web.archive.org/web/20200218080005/https://lw2.issarice.com/posts/4Q5s8qGyCtzfYtCZX/is-there-a-compute-efficient-algorithm-for-agency (I guess this one argues against the thesis)
http://benjaminrosshoffman.com/openai-makes-humanity-less-safe/#comment-128508
https://srconstantin.wordpress.com/2017/02/21/strong-ai-isnt-here-yet/
From https://arbital.com/p/general_intelligence/:
An Artificial General Intelligence would have the same property; it could learn a tremendous variety of domains, including domains it had no inkling of when it was switched on.
More specific hypotheses about how general intelligence operates have been advanced at various points, but any corresponding attempts to define general intelligence that way, would be theory-laden. The pretheoretical phenomenon to be explained is the extraordinary variety of human achievements across many non-instinctual domains, compared to other animals.
[…]
To the extent one credits the existence of 'significantly more general than chimpanzee intelligence', it implies that there are common cognitive subproblems of the huge variety of problems that humans can (learn to) solve, despite the surface-level differences of those domains. Or at least, the way humans solve problems in those domains, the cognitive work we do must have deep commonalities across those domains. These commonalities may not be visible on an immediate surface inspection.
'But in general, the hypothesis of general intelligence seems like it should cash out as some version of: "There's some set of new cognitive algorithms, plus improvements to existing algorithms, plus bigger brains, plus other resources--we don't know how many things like this there are, but there's some set of things like that--which, when added to previously existing primate and hominid capabilities, created the ability to do better on a broad set of deep cognitive subproblems held in common across a very wide variety of humanly-approachable surface-level problems for learning and manipulating domains. And that's why humans do better on a huge variety of domains simultaneously, despite evolution having not preprogrammed us with new instinctual knowledge or algorithms for all those domains separately."' -- this doesn't really tell us which of those things it was that helped the most.
Rob Bensinger makes the same argument here: https://www.greaterwrong.com/posts/D3NspiH2nhKA6B2PE/what-evidence-is-alphago-zero-re-agi-complexity/comment/awsEzHzgD5Rv2YGPo
see also discussion at https://www.facebook.com/yudkowsky/posts/10154018209759228?comment_id=10154018937319228
"Yes, IF there are just one or two insights that can create a very general AGI which is far more capable than previous systems, and if that fact is unanticipated, then it might happen that a small team creates this AGI, and it stays better than other systems for a sufficient time to have a big differential effect. So as I've said our key dispute is about the lumpiness and number of key insights needed to create a general capable AGI." [1]
"You might claim that once we have enough good simple tools, complexity will no longer be required. With enough simple tools (and some data to crunch), a few simple and relatively obvious combinations of those tools will be sufficient to perform most all tasks in the world economy at a human level. And thus the first team to find the last simple general tool needed might “foom” via having an enormous advantage over the entire rest of the world put together. At least if that one last tool were powerful enough. I disagree with this claim, but I agree that neither view can be easily and clearly proven wrong."[3]