Aligning smart AI using slightly less smart AI

A strategy that some researchers (particularly those focused on machine learning safety) have cited as a reason for relative optimism about the difficulty of AI alignment: humans would not need to directly align a superintelligence; instead, we would only need to align AI systems slightly smarter than ourselves, and from there each "generation" of AI systems would align the slightly smarter systems that come after it, and so on.

==External links==
* [[Richard Ngo]] brings up this argument in the "Deep vs. shallow problem-solving patterns" section of [https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty#1_1__Deep_vs__shallow_problem_solving_patterns Ngo and Yudkowsky on alignment difficulty]
 
==What links here==
  
{{Special:WhatLinksHere/{{FULLPAGENAME}}}}
  
 
[[Category:AI safety]]
 