AI safety field consensus
People in AI safety tend to disagree about many things. However, there is also wide agreement within the field on some points (points that people outside the field often dispute):
- Orthogonality thesis
- Instrumental convergence
- Edge instantiation / patch resistance
- Goodhart problems (see the sketch after this list)
- AGI possible in principle
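Of these, Goodhart problems lend themselves to a concrete illustration. Below is a minimal sketch in Python under an assumed toy setting: a hypothetical `true_quality` function that rewards moderate response length, and a proxy metric that simply counts length. The function names, coefficients, and the "length as a proxy for quality" framing are all invented for illustration, not drawn from any particular system.

```python
# Toy Goodhart demo (hypothetical setup): an optimizer sees only a proxy
# metric ("longer responses score higher") that correlates with the true
# objective for modest values but diverges under heavy optimization.

def true_quality(length: int) -> float:
    # Assumed true objective: quality peaks at length 25, then declines.
    return length - 0.02 * length ** 2

def proxy_score(length: int) -> float:
    # Assumed proxy: raw length, which an optimizer can easily maximize.
    return float(length)

# Optimize the proxy under increasingly generous length budgets.
for budget in (10, 25, 100):
    best = max(range(budget + 1), key=proxy_score)
    print(f"budget={budget:3d}  proxy picks length={best:3d}  "
          f"true quality={true_quality(best):+.1f}")
```

Run as-is, the proxy optimizer saturates its budget every time, and the true quality it achieves first rises (+8.0, then +12.5) and then collapses (-100.0) as optimization pressure grows. The same toy also hints at edge instantiation: the proxy optimizer always lands on the boundary of the allowed range.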
One operationalization might be something like: what are the things relevant to AI safety that all of Eliezer Yudkowsky, Paul Christiano, Robin Hanson, Rohin Shah, Dario Amodei, and Wei Dai agree on?