Future planning

From Issawiki
Revision as of 03:47, 20 February 2020 by Issa

things to talk about:

  • how doomed ML safety approaches are; e.g. see the discussion at How doomed are ML safety approaches?
    • there's the roughly opposite question of how doomed MIRI's approach is: if there turns out to be no simple core algorithm for agency, or if understanding agency better doesn't help us build an AGI, then we might not be in a better place with respect to aligning AI.
  • can MIRI-type research be done in time to help with AGI? see this comment
  • the prior on the difficulty of alignment, and ideas like: "if ML-based safety were to have any shot at working, wouldn't we just go all the way and expect the default (no EA intervention) approach to AGI to produce basically OK outcomes?"