Missing gear vs secret sauce
I want to distinguish between the following two framings:
- missing gear/one wrong number function/step function/understanding is discontinuous/payoff thresholds: "missing gear" doesn't imply that the last piece added is all that significant -- it just says that adding it causes a huge jump in capabilities.
- secret sauce for intelligence/small number of breakthroughs: "small number of breakthroughs" says that the last added piece must itself be a significant piece (which is what makes it a breakthrough).
I'm not sure how different these two actually are. But when thinking about discontinuities, I've noticed that I am inconsistent: sometimes I conflate the two, and sometimes I visualize them as distinct.
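To visualize the contrast, here is a toy sketch in Python; the number of pieces, the weights, and the function shapes are arbitrary assumptions chosen only for illustration, not anything claimed in the discussion above.

```python
# Toy illustration of the two framings (purely illustrative numbers).
# "Missing gear": every piece is equally mundane, but capability only
# appears once the final piece is in place -- the jump comes from the
# threshold, not from the final piece itself.
# "Secret sauce": one piece (the breakthrough) carries most of the
# difficulty, and capability jumps because that piece is itself large.

def capability_missing_gear(pieces_in_place: int, pieces_needed: int = 10) -> float:
    """Payoff-threshold / step-function picture: zero until every piece
    (each individually unremarkable) is present, then full capability."""
    return 1.0 if pieces_in_place >= pieces_needed else 0.0

def capability_secret_sauce(have_breakthrough: bool, mundane_pieces: int,
                            pieces_needed: int = 10) -> float:
    """Small-number-of-breakthroughs picture: mundane pieces contribute a
    little, but most of the capability arrives with the breakthrough."""
    base = 0.1 * min(mundane_pieces, pieces_needed) / pieces_needed
    return base + (0.9 if have_breakthrough else 0.0)

if __name__ == "__main__":
    # Both framings produce a large jump at the end, but for different reasons.
    print(capability_missing_gear(9), "->", capability_missing_gear(10))            # 0.0 -> 1.0
    print(capability_secret_sauce(False, 10), "->", capability_secret_sauce(True, 10))  # 0.1 -> 1.0
```

In both toy functions the final addition produces a large jump; the difference is whether that last piece is itself doing most of the work.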
| Term | Is the final piece a big breakthrough? | Nature of final piece | Found by humans or found by AI? | Length of lead time prior to final piece | Number of pieces | Explanation |
|---|---|---|---|---|---|---|
| Missing gear | | | | | | |
| Secret sauce | | | | | | |
| One wrong number function | | | | | | |
| Step function | | | | | | |
| Understanding is discontinuous | | | | | | |
| Payoff thresholds | | | | | | |
| One algorithm | | | | | | |
| Lumpy AI progress | | | | | | |
| Intelligibility of intelligence | | | | | | |
| Simple core algorithm | | | | | | |
| Small number of breakthroughs needed for AGI | | | | | | |
| Good consequentialist reasoning has low Kolmogorov complexity[1] | | | | | | |