I want to distinguish between the following two framings:

* missing gear/one wrong number function/step function/understanding is discontinuous/payoff thresholds: "missing gear" doesn't imply that the last piece added is all that significant -- it just says that adding it caused a huge jump in capabilities.
* secret sauce for intelligence/small number of breakthroughs: "small number of breakthroughs" says that the last added piece must have been a significant piece (which is what a breakthrough is).

I'm not sure how different these two framings actually are. But when thinking about discontinuities, I've noticed that I'm inconsistent: sometimes I conflate the two, and sometimes I visualize them as distinct.

{| class="sortable wikitable"
! Term !! Is the final piece a big breakthrough? !! Nature of final piece !! Found by humans or found by AI? !! Length of lead time prior to final piece !! Number of pieces !! Explanation
|-
| Missing gear || Not necessarily. I think this term is somewhat ambiguous about whether the final piece is expected to be big vs small. || || || || ||
|-
| Secret sauce || Yes || || || || Small number, possibly one? ||
|-
| One wrong number function / Step function || || || || || ||
|-
| Understanding is discontinuous || Not necessarily || Restricts the final piece to be about understanding, where the AI goes from "not understanding" to "understanding" something. || || || ||
|-
| Payoff thresholds || Not necessarily || Does not specify || || || ||
|-
| One algorithm<ref>https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/#One_algorithm</ref> || Yes || || || || Small number, possibly one? ||
|-
| Lumpy AI progress || Yes || || || || ||
|-
| Intelligibility of intelligence || Yes || || || || ||
|-
| Simple core algorithm || Yes || || || || ||
|-
| Small number of breakthroughs needed for AGI || Yes || || || || Small number (up to around 10?) ||
|-
| Good consequentialist reasoning has low Kolmogorov complexity<ref>https://agentfoundations.org/item?id=1228</ref> || Yes || || I think MIRI wants humans to discover this, for the sake of being able to align the AI. But this core of good consequentialist reasoning can also be discovered by a search process (e.g. resulting in a mesa-optimizer). || || Small number? ||
|}

==References==

<references/>

[[Category:AI safety]]