Aligning smart AI using slightly less smart AI

A strategy that some people focused on machine learning safety have cited as a reason for their relative optimism about the difficulty of AI alignment: humans would not need to directly align a superintelligence. Instead, we would only need to align AI systems slightly smarter than ourselves; from there, each "generation" of AI systems would align the slightly smarter systems of the next generation, and so on.
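
As a toy illustration only (a minimal sketch in Python; the capability scale, step size, and per-step success probability below are invented for illustration, not drawn from any source), the chain this strategy envisions can be modeled as a sequence of handoffs, where each aligned generation attempts to align a slightly more capable successor and the scheme reaches superintelligence aligned only if every handoff succeeds:

    import random

    def bootstrap_alignment(start_capability=1.0, target_capability=10.0,
                            step=0.5, p_success_per_step=0.95, seed=0):
        """Toy model: each aligned 'generation' tries to align a slightly
        smarter successor. Returns (capability reached, whole chain aligned).
        All numbers here are illustrative assumptions, not estimates."""
        rng = random.Random(seed)
        capability = start_capability
        while capability < target_capability:
            # The current aligned generation attempts to align the next one.
            if rng.random() > p_success_per_step:
                return capability, False  # a handoff in the chain failed
            capability += step
        return capability, True

    # End-to-end success requires every one of the n handoffs to succeed,
    # so it compounds as p**n: here 18 steps at p = 0.95 gives ~0.40.
    reached, aligned = bootstrap_alignment()
    print(f"reached capability {reached:.1f}, aligned: {aligned}")

The sketch only makes the structure of the strategy concrete: end-to-end success depends on per-handoff reliability compounding across the whole chain, so the strategy's appeal rests on each step being much easier than aligning a superintelligence directly.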
