Late 2021 MIRI conversations

https://www.lesswrong.com/s/n945eovrA3oDueqtq

{|
! Title !! Summary !! My thoughts !! Further reading/keywords
|-
| [https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/gf9hhmSvpZfyfS34B Ngo's view on alignment difficulty]
| [[Richard Ngo]] puts forth his own case for why he is more optimistic than [[Eliezer Yudkowsky]] about humanity handling the creation of [[AGI]] well. Ngo's case relies on several points: (1) he expects a [[continuous takeoff]] in which more and more tasks are automated (including systems able "only to answer questions", but at a human level) without AGI being achieved; (2) achieving AGI is difficult (he distinguishes between task-based reinforcement learning and open-ended reinforcement learning, says the latter is what leads to AI catastrophe, but also that the latter is much harder because of the slowness of real-world feedback and the difficulty of creating sufficiently rich artificial environments); (3) "The US and China preventing any other country from becoming a leader in AI requires about as much competent power as banning chemical/biological weapons"; (4) there is enough competent power in the world to act at the level of banning chemical/biological weapons; (5) this competent power will be used to halt progress on AI outside a US-China collaboration (?) (the optimism here relies on (1): a continuous takeoff allows compelling cases of misalignment to occur and to convince governments). So his actual case (which is only implicit in the post, so I am reading between the lines here) is something like: the difficulty of AGI buys us some time (2); meanwhile, progress on task-based/narrow AI will continue (1) and will produce compelling cases of AI misalignment, leading the US and China to halt AI progress outside a US-China collaboration (3, 4, 5).
|
| [[ASML]]
|}


