Missing gear vs secret sauce
I want to distinguish between the following two framings:
- missing gear/one wrong number problem/step function/understanding is discontinuous/payoff thresholds: "missing gear" doesn't imply that the last piece added is all that significant -- it just says that adding it caused a huge jump in capabilities.
- secret sauce for intelligence/small number of breakthroughs: "small number of breakthroughs" says that the last added piece must have been a significant piece (which is what a breakthrough is).
I'm not sure how different these two actually are. But when thinking about discontinuities, I've noticed that I'm inconsistent: sometimes I conflate the two, and sometimes I visualize them as distinct. The toy sketch below illustrates the contrast.
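To make the contrast concrete, here is a toy sketch (my own illustration, not from the original note; the function names and the 0-1 capability scale are made up). In the missing-gear model, capability jumps when the conjunction of pieces completes, even though no single piece is special; in the secret-sauce model, the jump comes from one piece that is itself the breakthrough.

```python
# Toy illustration (not from the original note): two ways a final piece
# can produce a discontinuity in capability, on an arbitrary 0-1 scale.

def capability_missing_gear(pieces_present: int, pieces_needed: int = 5) -> float:
    """'Missing gear': capability stays negligible until *all* pieces are
    in place, then jumps. The final piece need not be special; whichever
    piece happens to be added last produces the same jump."""
    return 1.0 if pieces_present >= pieces_needed else 0.0


def capability_secret_sauce(ordinary_pieces: int, has_breakthrough: bool) -> float:
    """'Secret sauce': ordinary pieces give smooth partial progress, but
    most of the capability comes from one big breakthrough."""
    smooth = 0.1 * min(ordinary_pieces, 5) / 5  # ordinary work caps out at 0.1
    return smooth + (0.9 if has_breakthrough else 0.0)


if __name__ == "__main__":
    # Both framings show a jump at the end, but for different reasons.
    for k in range(6):
        print(k, capability_missing_gear(k),
              capability_secret_sauce(k, has_breakthrough=False))
    print("with breakthrough:", capability_secret_sauce(5, has_breakthrough=True))
```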
| Term | Is the final piece a big breakthrough? | Nature of final piece | Found by humans or found by AI? | Length of lead time prior to final piece | Number of pieces | Explanation |
|---|---|---|---|---|---|---|
| Missing gear | Not necessarily | | | | | |
| Secret sauce | Yes | | | | | |
| One wrong number function / Step function | | | | | | |
| Understanding is discontinuous | Not necessarily | | | | | |
| Payoff thresholds | Not necessarily | | | | | |
| One algorithm | Yes | | | | | |
| Lumpy AI progress | Yes | | | | | |
| Intelligibility of intelligence | Yes | | | | | |
| Simple core algorithm | Yes | | | | | |
| Small number of breakthroughs needed for AGI | Yes | | | | | |
| Good consequentialist reasoning has low Kolmogorov complexity[1] | Yes | | | | | |

[1] https://agentfoundations.org/item?id=1228