List of AI safety projects I could work on
(November 2020)
Contents
Writing up my opinions
- write some sort of overview of my beliefs regarding AI safety, i.e. if I were explaining things from scratch to someone, what would that sound like?
- my current take on AI timelines
- my current take on AI takeoff
- my current take on MIRI vs Paul
Research projects
- continue working out AI takeoff disagreements
- continue working out MIRI vs Paul
- HRAD paper with David Manheim
- concrete plausible scenarios for what could happen when AGI comes around
- deep dive into human evolution to figure out what the heck it might tell us about AI takeoff/AI timelines
- comparison of AI and nukes
- think about AI polarization, using examples like COVID and climate change
- write up my understanding of people's views on whole brain emulation
Technical AI safety
- I still don't feel motivated to do this, since I'm not convinced by any existing worldview/visualization that has been put forth (e.g. by MIRI or Paul)
AI safety wiki
- Writing articles for AI safety wiki
The Vipul Strategy
- Expanding AI Watch to cover agendas, documents, more people/orgs with more columns filled in, graphs, automated yearly review of orgs, etc.
- Updates to timeline of AI safety and other relevant timelines
- Timeline of Eliezer Yudkowsky publications https://github.com/riceissa/project-ideas/issues/16
- Wikipedia pages for AGI projects https://github.com/riceissa/project-ideas/issues/22
Increase activity on LessWrong
- Ask lots of questions on LW, e.g.:
- Applicability of FDT/UDT/TDT to everyday life https://github.com/riceissa/project-ideas/issues/44
Exposition of technical topics
- Solomonoff induction guide (I think I've already figured out things here that are not explained anywhere, so I think I could write the best guide on it, but it's not clear how important this is for people to understand)
- Pearl belief propagation guide
- Summarizing/distilling work that has been done in decision theory
- starting something like RAISE; see "There is room for something like RAISE"
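As a sense of what a belief propagation guide might open with: a minimal sketch of the sum-product idea on the smallest possible network, a two-node chain A -> B with binary variables. All the probability numbers below are made-up illustrative values, not from any real model.

```python
# Sum-product belief propagation on a two-node chain A -> B.
# Observing the leaf B sends a message back to A; combining that
# message with A's prior and normalizing gives the posterior P(A | B).

# Prior P(A) and conditional P(B | A) -- illustrative numbers only.
p_a = [0.6, 0.4]            # P(A=0), P(A=1)
p_b_given_a = [
    [0.9, 0.1],             # P(B=0 | A=0), P(B=1 | A=0)
    [0.2, 0.8],             # P(B=0 | A=1), P(B=1 | A=1)
]

def posterior_a_given_b(b):
    """Compute P(A | B=b) via the message from B back to A."""
    # Message m_{B->A}(a) = P(B=b | A=a); multiply by the prior on A.
    unnorm = [p_a[a] * p_b_given_a[a][b] for a in range(2)]
    z = sum(unnorm)                     # normalizing constant P(B=b)
    return [u / z for u in unnorm]

# Observing B=1 shifts belief toward A=1, since A=1 makes B=1 likely.
print(posterior_a_given_b(1))  # -> [0.157..., 0.842...]
```

On a longer chain or a tree, the same pattern repeats: each node multiplies the incoming messages with its local factor, sums out its own variable, and passes the result along; the two-node case is just the base step of that recursion.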
Personal reflection
- reflection post about getting involved with AI safety
Philosophy
- human values/deliberation
Learn stuff
- learn more machine learning so I can better follow some discussions
- learn more economics. I somewhat often feel confused about how to think about various AI strategy questions (e.g. history of growth, how AI systems might cooperate and increase economies of scale, what percentage of GDP might be used for computing costs) and I suspect part of the reason is that I don't know enough economics.
- understand the neuromorphic AI pathways that people like Steve Byrnes and gwern have been talking about
- there are a lot of blog posts that I could catch up on