Difference between revisions of "Future planning"

From Issawiki
Revision as of 22:36, 19 February 2020

things to talk about:

* how doomed ML safety approaches are; e.g. see discussion [https://www.greaterwrong.com/posts/suxvE2ddnYMPJN9HD/realism-about-rationality#comment-32dCL2u6p8L8td9BA here]
* can MIRI-type research be done in time to help with AGI? see [https://www.greaterwrong.com/posts/suxvE2ddnYMPJN9HD/realism-about-rationality#comment-Dk5LmWMEL55ufkTB5 this comment]