Future planning
Things to talk about:
- How doomed ML safety approaches are (e.g., see the discussion "How doomed are ML safety approaches?")
- Whether MIRI-type research can be done in time to help with AGI (see this comment)
- The prior on the difficulty of alignment, and ideas like: "if ML-based safety were to have any shot at working, wouldn't we just go all the way and expect the default (no EA intervention) approach to AGI to produce basically OK outcomes?"