List of AI safety projects I could work on
(November 2020)
Writing up my opinions
- writing some sort of overview of my beliefs regarding AI safety; like, if I were explaining things from scratch to someone, what would that sound like?
- my current take on AI timelines
- my current take on AI takeoff
- my current take on MIRI vs Paul
Research projects
- continue working out AI takeoff disagreements
- continue working out MIRI vs Paul
- HRAD paper with David Manheim
- concrete plausible scenarios for what could happen when AGI comes around
- deep dive into human evolution to figure out what the heck it might tell us about AI takeoff/AI timelines
- comparison of AI and nukes
- think about AI polarization, using examples like COVID and climate change
AI safety wiki
- Writing articles for AI safety wiki
The Vipul Strategy
- Expanding AI Watch to cover agendas, documents, more people/orgs with more columns filled in, graphs, automated yearly review of orgs (https://aiwatch.issarice.com/compare.php?by=organization&for=2020), etc.
- Updates to timeline of AI safety and other relevant timelines
- Timeline of Eliezer Yudkowsky publications https://github.com/riceissa/project-ideas/issues/16
- Wikipedia pages for AGI projects https://github.com/riceissa/project-ideas/issues/22
Increase activity on LessWrong
- Ask lots of questions on LW, e.g.:
  - Applicability of FDT/UDT/TDT to everyday life https://github.com/riceissa/project-ideas/issues/44
Exposition of technical topics
- Solomonoff induction guide (I think I've already figured out things here that are not explained anywhere, so I think I could write the best guide on it, but it's not clear how important this is for people to understand; see the sketch of the universal prior after this list)
- Pearl belief propagation guide (see the message-passing sketch after this list)
- Summarizing/distilling work that has been done in decision theory
- starting something like RAISE; see There is room for something like RAISE
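For the Solomonoff induction guide, the key object such a guide would have to explain is the universal prior. As a rough sketch (glossing over details like the choice of universal machine and the measure/semimeasure distinction), the prior probability of a finite binary string x is

  M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

where U is a fixed universal prefix Turing machine, the sum ranges over programs p whose output begins with x, and \ell(p) is the length of p in bits, so shorter (simpler) programs dominate the sum.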
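For the belief propagation guide, here is a minimal sketch of the sum-product computation on a three-variable chain X1 - X2 - X3 with binary variables; the potentials are invented purely for illustration, not taken from any real model, and a full guide would cover general tree-structured graphs rather than just a chain.

    # Sum-product belief propagation on a binary chain X1 - X2 - X3.
    # All potentials below are made-up numbers, purely for illustration.
    import numpy as np

    phi = [np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]  # unary potentials
    psi12 = np.array([[0.9, 0.1], [0.1, 0.9]])  # pairwise potential between X1 and X2
    psi23 = np.array([[0.8, 0.2], [0.2, 0.8]])  # pairwise potential between X2 and X3

    # Each message sums out the sending variable over its potential times incoming messages.
    m12 = psi12.T @ phi[0]            # X1 -> X2
    m23 = psi23.T @ (phi[1] * m12)    # X2 -> X3
    m32 = psi23 @ phi[2]              # X3 -> X2
    m21 = psi12 @ (phi[1] * m32)      # X2 -> X1

    def normalize(v):
        return v / v.sum()

    # Marginal of each variable: local potential times all incoming messages, normalized.
    print(normalize(phi[0] * m21))          # P(X1)
    print(normalize(phi[1] * m12 * m32))    # P(X2)
    print(normalize(phi[2] * m23))          # P(X3)

    # Brute-force check over all 8 joint assignments gives the same marginals.
    joint = np.zeros((2, 2, 2))
    for a in range(2):
        for b in range(2):
            for c in range(2):
                joint[a, b, c] = phi[0][a] * phi[1][b] * phi[2][c] * psi12[a, b] * psi23[b, c]
    joint /= joint.sum()
    print(joint.sum(axis=(1, 2)), joint.sum(axis=(0, 2)), joint.sum(axis=(0, 1)))

The point such a guide would build up to is that on a tree this local message passing recovers exact marginals at cost linear in the number of variables, whereas the brute-force sum is exponential.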
Personal reflection
- reflection post about getting involved with AI safety
Philosophy
- human values/deliberation
Learn stuff
- learn more machine learning so I can better follow some discussions