Missing gear vs secret sauce
I want to distinguish between the following two framings:
- missing gear/one wrong number problem/step function/understanding is discontinuous/payoff thresholds: the "missing gear" framing doesn't imply that the final piece added is itself all that significant -- it just says that adding it caused a huge jump in capabilities.
- secret sauce for intelligence/small number of breakthroughs: the "small number of breakthroughs" framing says that the final piece added must have been significant in its own right (which is what makes it a breakthrough).
I'm not sure how different these two actually are. But when thinking about discontinuities, I've noticed that I am somewhat inconsistent: sometimes I conflate the two, and sometimes I visualize them as distinct.
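To make the contrast concrete, here is a toy numerical sketch (my own illustration, not something from the framings themselves; the specific numbers are arbitrary). In both pictures capability jumps when the final piece lands, but only in the "secret sauce" picture is that final piece large on its own.

```python
# Toy model of the two framings. All numbers are made up for illustration.

def capability_missing_gear(pieces_in_place: int, total_pieces: int = 10) -> float:
    """'Missing gear' / payoff-threshold picture: every piece is equally small,
    but capability only pays off once all the pieces are in place."""
    return 100.0 if pieces_in_place >= total_pieces else 1.0

def capability_secret_sauce(has_breakthrough: bool, minor_pieces: int = 9) -> float:
    """'Secret sauce' picture: one piece (the breakthrough) carries most of the
    capability by itself; the other pieces contribute only marginally."""
    return minor_pieces * 1.0 + (90.0 if has_breakthrough else 0.0)

if __name__ == "__main__":
    # In both pictures the observed jump when the final piece is added is large...
    print(capability_missing_gear(9), "->", capability_missing_gear(10))        # 1.0 -> 100.0
    print(capability_secret_sauce(False), "->", capability_secret_sauce(True))  # 9.0 -> 99.0
    # ...but only in the second picture is the final piece itself "big".
```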
Term | Is the final piece a big breakthrough? | Nature of final piece | Found by humans or found by AI? | Length of lead time prior to final piece | Number of pieces | Explanation |
---|---|---|---|---|---|---|
Missing gear | Not necessarily. I think this term is somewhat ambiguous about whether the final piece is expected to be big vs small. | | | | | |
Secret sauce | Yes | | | | Small number, possibly one? | |
One wrong number function / Step function | | | | | | |
Understanding is discontinuous | Not necessarily | Restricts the final piece to be about understanding, where the AI goes from "not understanding" to "understanding" something. | | | | |
Payoff thresholds | Not necessarily | Does not specify | | | | |
One algorithm[1] | Yes | | | | Small number, possibly one? | |
Lumpy AI progress | Yes | | | | | |
Intelligibility of intelligence | Yes | | | | | |
Simple core algorithm | Yes | | | | | |
Small number of breakthroughs needed for AGI | Yes | | | | Small number (up to around 10?) | |
Good consequentialist reasoning has low Kolmogorov complexity[2] | Yes | | I think MIRI wants humans to discover this, for the sake of being able to align the AI. But this core of good consequentialist reasoning can also be discovered by a search process (e.g. resulting in a mesa-optimizer). | | Small number? | |