Missing gear vs secret sauce
I want to distinguish between the following two framings:
- missing gear/one wrong number problem/step function/understanding is discontinuous/payoff thresholds: "missing gear" doesn't imply that the last piece added is all that significant -- it just says that adding it caused a huge jump in capabilities.
- secret sauce for intelligence/small number of breakthroughs: "small number of breakthroughs" says that the last added piece must have been a significant piece (which is what a breakthrough is).
I'm not sure how different these two actually are. But when thinking about discontinuities, I've noticed that I'm inconsistent: sometimes I conflate the two framings, and sometimes I visualize them as distinct.
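To make the contrast concrete, here is a toy sketch (my own illustration, not from the original page; the function names and numbers are arbitrary assumptions): in the "missing gear" picture, capability jumps when the last ordinary piece is added, even though that piece is no more special than the others; in the "secret sauce" picture, most of the jump is attributable to one identifiable breakthrough.

```python
# Toy sketch of the two framings. All names and numbers are illustrative
# assumptions, not anything from the original article.

def missing_gear_capability(num_ordinary_pieces: int, total_pieces: int = 10) -> float:
    """Capability stays negligible until every ordinary piece is in place,
    then jumps. No single piece is special; the jump comes from the threshold."""
    return 100.0 if num_ordinary_pieces >= total_pieces else 1.0

def secret_sauce_capability(have_breakthrough: bool, num_ordinary_pieces: int,
                            total_pieces: int = 10) -> float:
    """Ordinary pieces give smooth, modest gains; one big breakthrough
    (the 'secret sauce') accounts for most of the eventual capability."""
    base = 10.0 * num_ordinary_pieces / total_pieces
    return base + (90.0 if have_breakthrough else 0.0)

if __name__ == "__main__":
    # Missing gear: the 10th piece is just another gear, yet adding it
    # moves capability from 1 to 100.
    print(missing_gear_capability(9), "->", missing_gear_capability(10))
    # Secret sauce: the jump is attributable to one identifiable breakthrough.
    print(secret_sauce_capability(False, 9), "->", secret_sauce_capability(True, 9))
```

In both sketches the capability curve has a large discontinuity; the difference is only in whether the final added piece is itself a significant breakthrough.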
| Term | Is the final piece a big breakthrough? | Nature of final piece | Explanation |
|---|---|---|---|
| Missing gear | | | |
| Secret sauce | | | |
| One wrong number function | | | |
| Step function | | | |
| Understanding is discontinuous | | | |
| Payoff thresholds | | | |
| One algorithm | | | |
| Lumpy AI progress | | | |
| Intelligibility of intelligence | | | |
| Simple core algorithm | | | |
| Small number of breakthroughs needed for AGI | | | |
| Good consequentialist reasoning has low Kolmogorov complexity[1] | | | |

References

1. https://agentfoundations.org/item?id=1228