Iterated amplification (also called iterated distillation and amplification, abbreviated IDA) is a technical AI alignment agenda developed by Paul Christiano.
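The name is usually unpacked as a training loop: in each round, the current model is "amplified" by letting an overseer (a human working with copies of the model as assistants) decompose a question and combine the model's answers, and the amplified system is then "distilled" back into a faster model trained to imitate it. The Python sketch below is purely illustrative of that loop, not an actual implementation; the functions base_model, amplify, and distill are hypothetical stand-ins, and real distillation would involve training rather than wrapping.

<pre>
# Minimal, illustrative sketch of the amplify-then-distill loop.
# "Models" here are plain Python functions that answer string questions.

def base_model(question: str) -> str:
    """Weak starting agent: answers every question with a placeholder."""
    return f"best guess for: {question}"

def amplify(model):
    """Return a slower but more capable agent: a (simulated) overseer splits
    the question into subquestions, queries the model on each, and combines
    the answers."""
    def amplified(question: str) -> str:
        subquestions = [f"{question} (subquestion {i})" for i in range(2)]
        subanswers = [model(q) for q in subquestions]
        return " / ".join(subanswers)  # overseer combines the subanswers
    return amplified

def distill(amplified_agent):
    """Produce a fast model imitating the amplified agent. Real IDA would
    train a new model (supervised learning or RL); here we just wrap it."""
    def fast_model(question: str) -> str:
        return amplified_agent(question)
    return fast_model

model = base_model
for step in range(3):              # iterate: amplify, then distill
    model = distill(amplify(model))

print(model("How should the overseer evaluate this plan?"))
</pre>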
Terminology (not all of it specific to IDA, but these are terms frequently used by Paul Christiano):
- informed oversight
- adequate oversight
- overseer
- bandwidth of the overseer, high bandwidth oversight, low bandwidth oversight
- reward engineering
- HCH, Strong HCH, Weak HCH, Humans consulting HCH
- amplification
- capability amplification
- distillation
- factored cognition, factored evaluation, factored generation
- corrigibility
- benign
- aligned
- robustness
- red teaming
- ALBA
- optimization daemons
- act-based agent vs goal-directed agent
- approval-directed agent
- steering problem
- prosaic AI
- bootstrapping
- catastrophe
- reliability amplification
- security amplification
- universality
- narrow value learning vs ambitious value learning
- learning with catastrophes, optimizing worst-case performance