List of AI safety projects I could work on

(November 2020)

Writing up my opinions

  • writing some sort of overview of my beliefs regarding AI safety: if I were explaining things from scratch to someone, what would that sound like?
  • my current take on AI timelines
  • my current take on AI takeoff
  • my current take on MIRI vs Paul

Research projects

  • continue working out AI takeoff disagreements
  • continue working out MIRI vs Paul
  • HRAD paper with David Manheim
  • concrete plausible scenarios for what could happen when AGI comes around
  • deep dive into human evolution to figure out what the heck it might tell us about AI takeoff/AI timelines
  • comparison of AI and nukes

AI safety wiki

  • Writing articles for the AI safety wiki

The Vipul Strategy

Increase activity on LessWrong

Exposition of technical topics

  • Solomonoff induction guide (I think I've already figured out some things here that aren't explained anywhere, so I could probably write the best guide on it, but it's not clear how important this is for people to understand; see the toy sketch after this list)
  • Pearl belief propagation guide (see the small chain example after this list)
  • Summarizing/distilling work that has been done in decision theory
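
For the Solomonoff induction item above: since true Solomonoff induction mixes over all programs and is uncomputable, a guide would presumably build intuition with a computable stand-in. Here is a minimal Python sketch of the mixture idea, assuming a tiny hand-picked hypothesis class with made-up "description lengths" in place of the universal prior; everything in it is illustrative, not from this page.

  # Toy sketch of a Solomonoff-style mixture predictor.
  # Real Solomonoff induction mixes over all programs and is uncomputable;
  # this stand-in uses a small hand-picked hypothesis class, with made-up
  # description lengths playing the role of program lengths.

  HYPOTHESES = {
      # name: (description length in bits, P(next bit = 1 | history))
      "all_zeros": (2, lambda bits: 0.01),
      "all_ones":  (2, lambda bits: 0.99),
      "fair_coin": (3, lambda bits: 0.5),
      "alternate": (4, lambda bits: 0.5 if not bits
                       else (0.99 if bits[-1] == 0 else 0.01)),
  }

  def mixture_predict(bits):
      """Posterior-weighted probability that the next bit is 1."""
      weights, next_p1 = [], []
      for length, predict in HYPOTHESES.values():
          w = 2.0 ** -length          # prior weight ~ 2^-(description length)
          history = []
          for b in bits:              # multiply in likelihood of each bit
              p = predict(history)
              w *= p if b == 1 else (1 - p)
              history.append(b)
          weights.append(w)
          next_p1.append(predict(bits))
      total = sum(weights)
      return sum(w * p for w, p in zip(weights, next_p1)) / total

  # After a few alternating bits, "alternate" dominates the posterior:
  print(mixture_predict([0, 1, 0, 1, 0]))  # ~0.93

The point the sketch illustrates is the one the guide would want to convey: the shortest hypotheses that fit the data come to dominate the mixture's predictions.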
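
For the Pearl belief propagation item above: a minimal sketch of the evidence-passing (lambda-message) half of Pearl's algorithm on a three-node chain A -> B -> C, with made-up CPT numbers and evidence C = 1. A real guide would also cover pi messages and trees; the setup here is illustrative, not from this page.

  # Toy Pearl-style belief propagation on a chain A -> B -> C.
  # All conditional probability tables (CPTs) below are made up.

  P_A = [0.6, 0.4]                  # prior P(A)
  P_B_given_A = [[0.7, 0.3],        # row a: P(B | A = a)
                 [0.2, 0.8]]
  P_C_given_B = [[0.9, 0.1],        # row b: P(C | B = b)
                 [0.3, 0.7]]

  def normalize(v):
      s = sum(v)
      return [x / s for x in v]

  # Evidence C = 1, encoded as an indicator lambda message at C.
  lam_C = [0.0, 1.0]

  # Lambda messages carry evidence against the arrow direction:
  # lam_B(b) = sum_c P(c | b) * lam_C(c)
  lam_B = [sum(P_C_given_B[b][c] * lam_C[c] for c in range(2))
           for b in range(2)]
  # lam_A(a) = sum_b P(b | a) * lam_B(b)
  lam_A = [sum(P_B_given_A[a][b] * lam_B[b] for b in range(2))
           for a in range(2)]

  # Belief at A multiplies the prior (pi part) by the evidence (lambda part).
  belief_A = normalize([P_A[a] * lam_A[a] for a in range(2)])
  print(belief_A)  # ~[0.42, 0.58] = P(A | C = 1)

On a chain this matches brute-force enumeration exactly, which is a good sanity check for any exposition: P(A=0, C=1) = 0.6 * (0.7*0.1 + 0.3*0.7) = 0.168, and 0.168 / 0.4 = 0.42.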

Personal reflection

  • reflection post about getting involved with AI safety