List of AI safety projects I could work on
(November 2020)
- Write up my opinions
  - an overview of my beliefs about AI safety: if I were explaining things from scratch to someone, what would that sound like?
  - my current take on AI timelines
  - my current take on AI takeoff
  - my current take on MIRI vs Paul
- Research projects
  - continue working out AI takeoff disagreements
  - continue working out MIRI vs Paul
- Exposition
  - Solomonoff induction guide (I think I've already figured out things here that aren't explained anywhere else, so I could write the best guide on it; it's not clear, though, how important this is for people to understand. See the toy sketch after this list.)
  - Pearl belief propagation guide (a minimal message-passing example also follows this list)
  - Summarizing/distilling work that has been done in decision theory
- Ask lots of questions on LW
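
To go with the Solomonoff induction guide idea above, here is a minimal sketch of the core prediction rule. It is only a toy: real Solomonoff induction mixes over all programs for a universal Turing machine and is uncomputable, so this sketch substitutes repeating binary patterns for programs, which is purely my assumption for the sake of a runnable example. It keeps the essential structure: a 2^-length prior, and prediction as a weighted vote among hypotheses consistent with the data. The names pattern_output and predict_next_bit are illustrative, not from any source.

```python
# Toy illustration of a Solomonoff-style prior: shorter "programs" get
# weight 2^-length, and prediction mixes all programs consistent with the
# observations. ASSUMPTION: real Solomonoff induction runs over all programs
# for a universal Turing machine and is uncomputable; here the "programs"
# are just repeating binary patterns, chosen so the example can run.
from itertools import product

def pattern_output(pattern, n):
    """Output of the 'program' that repeats `pattern`, truncated to n bits."""
    reps = -(-n // len(pattern))  # ceiling division
    return (pattern * reps)[:n]

def predict_next_bit(observed, max_len=8):
    """P(next bit = 1) under a 2^-length prior over repeating patterns."""
    weight_1 = weight_total = 0.0
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            pattern = "".join(bits)
            prior = 2.0 ** (-length)
            # Keep only hypotheses consistent with everything seen so far.
            if pattern_output(pattern, len(observed)) == observed:
                weight_total += prior
                if pattern_output(pattern, len(observed) + 1)[-1] == "1":
                    weight_1 += prior
    return weight_1 / weight_total

# After seeing alternation, the shortest consistent pattern "01" dominates
# the mixture, so the predicted probability of a 1 next is small.
print(predict_next_bit("010101"))  # roughly 0.04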
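
For the Pearl belief propagation guide, a similarly small sketch of the sum-product update on a three-variable chain A - B - C, each variable binary. The unary potentials phi and the pairwise potential psi are made-up numbers chosen for illustration; the brute-force enumeration at the end checks that the message-passing answer matches exact marginalization, which is the property a guide would want to demonstrate.

```python
# Minimal sum-product belief propagation on the chain A - B - C, computing
# the marginal of B. Potentials are illustrative numbers, not from any source.
import numpy as np

# Unary potentials (pseudo-evidence) for A, B, C.
phi = {
    "A": np.array([0.7, 0.3]),
    "B": np.array([0.5, 0.5]),
    "C": np.array([0.2, 0.8]),
}
# Pairwise potential shared by edges A-B and B-C: favors agreement.
psi = np.array([[2.0, 1.0],
                [1.0, 2.0]])

# Message from A into B: sum over A's states of phi(A) * psi(A, B).
m_A_to_B = phi["A"] @ psi
# Message from C into B (psi is symmetric, so the same matrix works).
m_C_to_B = phi["C"] @ psi

# Belief at B: local potential times all incoming messages, normalized.
belief_B = phi["B"] * m_A_to_B * m_C_to_B
belief_B /= belief_B.sum()
print("P(B) via belief propagation:", belief_B)

# Brute-force check: enumerate every joint state of (A, B, C).
joint = np.zeros(2)
for a in range(2):
    for b in range(2):
        for c in range(2):
            joint[b] += (phi["A"][a] * phi["B"][b] * phi["C"][c]
                         * psi[a, b] * psi[b, c])
joint /= joint.sum()
print("P(B) via enumeration:      ", joint)
```

On a chain (or any tree) the two answers agree exactly, which is the standard correctness result for belief propagation; on graphs with loops the same updates give only an approximation.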