Future planning

From Issawiki
Revision as of 00:19, 20 February 2020 by Issa (talk | contribs)

Things to talk about:

  • how doomed ML safety approaches are (see the discussion at How doomed are ML safety approaches?)
  • whether MIRI-type research can be done in time to help with AGI (see this comment)
  • what our prior on the difficulty of alignment should be, and ideas like: "if ML-based safety had any shot at working, wouldn't we expect the default approach to AGI (with no EA intervention) to produce basically okay outcomes anyway?"