Missing gear vs secret sauce

I want to distinguish between the following two framings:
 
* [[Missing gear for intelligence|missing gear]]/[[one wrong number problem]]/step function/understanding is discontinuous/[https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/#Payoff_thresholds payoff thresholds]: "missing gear" doesn't imply that the last piece added is all that significant -- it just says that adding it caused a huge jump in capabilities.
* [[secret sauce for intelligence]]/small number of breakthroughs: "small number of breakthroughs" says that the last added piece must have been a significant piece (which is what a breakthrough is). (A toy sketch contrasting the two framings follows this list.)
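To make the contrast concrete, here is a minimal toy sketch. The function names and the specific numbers below are invented purely for illustration (they are not taken from any source): in the missing-gear framing the final piece is no bigger than any other piece, yet adding it produces the whole jump, while in the secret-sauce framing the jump is attributable to one identifiably large piece.

<syntaxhighlight lang="python">
# Toy sketch: two ways a final piece can produce a discontinuity in capability.
# All numbers here are made up purely for illustration.

def capability_missing_gear(pieces_present, pieces_needed=10):
    """Missing gear / step function: each piece is individually small,
    but the system only works once all of them are present."""
    return 1.0 if pieces_present >= pieces_needed else 0.0

def capability_secret_sauce(has_breakthrough, ordinary_pieces):
    """Secret sauce: one piece (the breakthrough) carries most of the value;
    the ordinary pieces contribute only marginally."""
    return (0.9 if has_breakthrough else 0.0) + 0.01 * ordinary_pieces

# Missing gear: the 10th piece is no bigger than the 1st, yet adding it
# takes capability from 0 to 1.
print([capability_missing_gear(n) for n in range(8, 12)])   # [0.0, 0.0, 1.0, 1.0]

# Secret sauce: the jump is attributable to one identifiably large piece.
print(capability_secret_sauce(False, 5), capability_secret_sauce(True, 5))  # 0.05 0.95
</syntaxhighlight>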
I'm not sure how different these two actually are. But when thinking about discontinuities, I've noticed that I am somewhat inconsistent: sometimes I conflate the two, and sometimes I visualize them as distinct.
{| class="sortable wikitable"
 +
! Term !! Is the final piece a big breakthrough? !! Nature of final piece !! Found by humans or found by AI? !! Length of lead time prior to final piece !! Number of pieces !! Explanation
 +
|-
 +
| Missing gear || Not necessarily. I think this term is somewhat ambiguous about whether the final piece is expected to be big vs small. ||
 +
|-
 +
| Secret sauce || Yes || || || || Small number, possibly one? ||
 +
|-
 +
| One wrong number function / Step function ||
 +
|-
 +
| Understanding is discontinuous || Not necessarily || Restricts the final piece to be about understanding, where the AI goes from "not understanding" to "understanding" something. ||
 +
|-
 +
| Payoff thresholds || Not necessarily || Does not specify ||
 +
|-
 +
| One algorithm<ref>https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/#One_algorithm</ref> || Yes || || || || Small number, possibly one? ||
 +
|-
 +
| Lumpy AI progress || Yes ||
 +
|-
 +
| Intelligibility of intelligence || Yes ||
 +
|-
 +
| Simple core algorithm || Yes ||
 +
|-
 +
| Small number of breakthroughs needed for AGI || Yes || || || || Small number (up to around 10?) ||
 +
|-
 +
| Good consequentialist reasoning has low Kolmogorov complexity<ref>https://agentfoundations.org/item?id=1228</ref> || Yes || || I think MIRI wants humans to discover this, for the sake of being able to align the AI. But this core of good consequentialist reasoning can also be discovered by a search process (e.g. resulting in a mesa-optimizer). || || Small number? ||
 +
|}
 +
 
 +
==References==
 +
 
 +
<references/>
  
 
[[Category:AI safety]]
 