List of AI safety projects I could work on


(November 2020)

Writing up my opinions

  • writing some sort of overview of my beliefs regarding AI safety: if I were explaining things from scratch to someone, what would that sound like?
  • my current take on AI timelines
  • my current take on AI takeoff
  • my current take on MIRI vs Paul

Research projects

  • continue working out AI takeoff disagreements
  • continue working out MIRI vs Paul
  • HRAD paper with David Manheim
  • concrete plausible scenarios for what could happen when AGI comes around
  • deep dive into human evolution to figure out what the heck it might tell us about AI takeoff/AI timelines
  • comparison of AI and nukes
  • think about AI polarization, using examples like COVID and climate change

AI safety wiki

  • Writing articles for AI safety wiki

The Vipul Strategy

  • AI Watch
  • Expanding AI Watch to cover agendas, documents, more people/orgs with more columns filled in, graphs, an automated yearly review of orgs (https://aiwatch.issarice.com/compare.php?by=organization&for=2020), etc.; a fetch sketch follows this list
  • Updates to timeline of AI safety and other relevant timelines
  • Timeline of Eliezer Yudkowsky publications: https://github.com/riceissa/project-ideas/issues/16
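As a minimal sketch of what automating the yearly review could start from: the snippet below just fetches the per-year comparison page. Only the URL pattern comes from the link above; the helper name and everything else are illustrative assumptions, not part of AI Watch itself.

  import urllib.request

  # Hypothetical starting point for an automated yearly review: fetch
  # AI Watch's comparison page for a given year. Only the URL pattern
  # is taken from the page linked above; the rest is an assumption.
  def fetch_yearly_review(year: int) -> str:
      url = ("https://aiwatch.issarice.com/compare.php"
             f"?by=organization&for={year}")
      with urllib.request.urlopen(url) as resp:
          return resp.read().decode("utf-8")

  if __name__ == "__main__":
      html = fetch_yearly_review(2020)
      print(f"fetched {len(html)} bytes for 2020")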

Increase activity on LessWrong

Exposition of technical topics

  • Solomonoff induction guide (I think I've already figured out things here that aren't explained anywhere, so I could probably write the best guide on it, but it's not clear how important this is for people to understand; the central definition is written out after this list)
  • Pearl belief propagation guide (a small sum-product example also follows this list)
  • Summarizing/distilling work that has been done in decision theory
  • starting something like RAISE; see "There is room for something like RAISE"
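For the Solomonoff induction item: the definition such a guide would be organized around, stated in its standard textbook form (nothing here is original to this wiki), is the Solomonoff prior. In LaTeX:

  M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

Here U is a universal prefix Turing machine, the sum ranges over programs p whose output begins with x, and |p| is the length of p in bits. Prediction then works by conditioning: M(xy)/M(x) is the probability that x continues with y.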
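For the Pearl belief propagation item: a minimal sum-product sketch on a three-variable chain A - B - C, assuming binary variables; the potentials psi_AB and psi_BC and their values are made up for illustration and not tied to any particular library.

  import numpy as np

  # Pairwise potentials for the chain A - B - C (values are made up).
  psi_AB = np.array([[1.0, 0.5],
                     [0.5, 1.0]])
  psi_BC = np.array([[1.0, 0.2],
                     [0.2, 1.0]])

  # Forward messages: m_{A->B}(b) = sum_a psi_AB(a, b), then on to C.
  m_A_to_B = psi_AB.sum(axis=0)
  m_B_to_C = (m_A_to_B[:, None] * psi_BC).sum(axis=0)

  # Backward messages: m_{C->B}(b) = sum_c psi_BC(b, c), then on to A.
  m_C_to_B = psi_BC.sum(axis=1)
  m_B_to_A = (psi_AB * m_C_to_B[None, :]).sum(axis=1)

  # On a tree, the product of a node's incoming messages gives its
  # exact (unnormalized) marginal.
  belief_B = m_A_to_B * m_C_to_B
  print("P(A):", m_B_to_A / m_B_to_A.sum())
  print("P(B):", belief_B / belief_B.sum())
  print("P(C):", m_B_to_C / m_B_to_C.sum())

Because the chain is a tree, these two passes already give exact marginals; on loopy graphs the same updates are run iteratively without exactness guarantees.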

Personal reflection

  • reflection post about getting involved with AI safety

Philosophy

  • human values/deliberation

Learn stuff

  • learn more machine learning so I can better follow some discussions