List of AI safety projects I could work on
(November 2020)

==Writing up my opinions==

* writing some sort of overview of my beliefs regarding AI safety, i.e. if I were explaining things from scratch to someone, what would that sound like?
* my current take on AI timelines
* my current take on AI takeoff
* my current take on MIRI vs Paul

==Research projects==

* continue working out AI takeoff disagreements
* continue working out MIRI vs Paul
* HRAD paper with David Manheim
* concrete plausible scenarios for what could happen when AGI comes around
* deep dive into human evolution to figure out what the heck it might tell us about AI takeoff/AI timelines
* comparison of AI and nukes
* think about AI polarization, using examples like COVID and climate change

==AI safety wiki==

* Writing articles for AI safety wiki

==The Vipul Strategy==

* AI Watch
* Updates to timeline of AI safety and other relevant timelines
* Timeline of Eliezer Yudkowsky publications https://github.com/riceissa/project-ideas/issues/16
* Wikipedia pages for AGI projects https://github.com/riceissa/project-ideas/issues/22

==Increase activity on LessWrong==

* Ask lots of questions on LW, e.g.:
** Applicability of FDT/UDT/TDT to everyday life https://github.com/riceissa/project-ideas/issues/44

==Exposition of technical topics==

* Solomonoff induction guide (I think I've already figured out things here that aren't explained anywhere else, so I could probably write the best guide on it, though it's not clear how important this is for people to understand; see the toy sketch after this list)
* guide to Pearl's belief propagation algorithm
* Summarizing/distilling work that has been done in decision theory
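
To give a feel for what the Solomonoff induction guide would need to convey, here is a minimal sketch of the core idea, not the real (uncomputable) thing: a small, hand-picked hypothesis class stands in for "all programs", each hypothesis gets prior weight 2^(&minus;description length) in the spirit of the universal prior, and an ordinary Bayesian update is done on an observed bit string. The hypothesis names, description lengths, and function names are made up for illustration.

<syntaxhighlight lang="python">
# Toy sketch of the idea behind Solomonoff induction (NOT the real thing, which
# sums over all programs for a universal Turing machine and is uncomputable).
# Here a small hand-picked hypothesis class stands in for the space of programs.
from fractions import Fraction

# (name, assumed description length in bits, P(next bit = 1) under the hypothesis)
HYPOTHESES = [
    ("always-0",   2, Fraction(0)),
    ("always-1",   2, Fraction(1)),
    ("fair-coin",  3, Fraction(1, 2)),
    ("biased-3/4", 5, Fraction(3, 4)),
]

def prior(length_bits):
    # Shorter descriptions get exponentially more weight, mimicking the universal prior.
    return Fraction(1, 2 ** length_bits)

def posterior(bits):
    """Bayesian update: weight each hypothesis by prior * likelihood of the observed bits."""
    weights = {}
    for name, length, p1 in HYPOTHESES:
        likelihood = Fraction(1)
        for b in bits:
            likelihood *= p1 if b == 1 else 1 - p1
        weights[name] = prior(length) * likelihood
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def predict_next_one(bits):
    """Posterior-mixture probability that the next bit is 1."""
    post = posterior(bits)
    return sum(post[name] * p1 for name, _, p1 in HYPOTHESES)

if __name__ == "__main__":
    data = [1, 1, 1, 0, 1, 1]
    for name, w in posterior(data).items():
        print(f"P({name} | data) = {float(w):.3f}")
    print(f"P(next bit = 1 | data) = {float(predict_next_one(data)):.3f}")
</syntaxhighlight>

The real Solomonoff predictor replaces the finite table above with all programs for a universal Turing machine, which is where both its generality and its uncomputability come from; a guide would mostly be about making that step intuitive.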

==Personal reflection==

* reflection post about getting involved with AI safety

==Philosophy==

* human values/deliberation

[[Category:AI safety]]