List of AI safety projects I could work on

(November 2020)

Also check my project ideas repo.

==Writing up my opinions==

* writing some sort of overview of my beliefs regarding AI safety. like, if I was explaining things from scratch to someone, what would that sound like?
* my current take on AI timelines
* my current take on AI takeoff
* my current take on MIRI vs Paul

==Research projects==

* continue working out AI takeoff disagreements
* continue working out MIRI vs Paul
* HRAD paper with David Manheim
* concrete plausible scenarios for what could happen when AGI comes around
* deep dive into human evolution to figure out what the heck it might tell us about AI takeoff/AI timelines

==AI safety wiki==

* Writing articles for AI safety wiki

==AI Watch==

* stuff mayn

==Increase activity on LessWrong==

* Ask lots of questions on LW

==Exposition of technical topics==

* Solomonoff induction guide (I think I've already figured out things here that aren't explained anywhere, so I could write the best guide on it, but it's not clear how important this is for people to understand); see the sketch after this list for the core formula
* Pearl belief propagation guide; the code sketch after this list gives the flavor of worked example it could include
* Summarizing/distilling work that has been done in decision theory
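
As a taste of what the first two guides might cover (a sketch only, not settled content for either guide): one standard way to state Solomonoff induction is via the universal prior over strings,

<math>M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},</math>

where <math>U</math> is a universal prefix machine, the sum ranges over programs <math>p</math> whose output begins with <math>x</math>, and <math>\ell(p)</math> is the length of <math>p</math> in bits; prediction then works by conditioning <math>M</math> on the data seen so far. For belief propagation, the kind of worked example the guide could open with is a sum-product pass on a three-node chain of binary variables (the potentials and variable names below are invented for illustration):

<syntaxhighlight lang="python">
import numpy as np

# Unary potentials phi_i(x_i) for three binary variables X1, X2, X3 (one row each).
phi = np.array([[0.7, 0.3],
                [0.5, 0.5],
                [0.2, 0.8]])

# Shared pairwise potential psi(x_i, x_{i+1}) that favors neighbors agreeing.
psi = np.array([[0.9, 0.1],
                [0.1, 0.9]])

n = len(phi)

# Forward pass: m_fwd[i] is the message arriving at node i from the left.
m_fwd = [np.ones(2) for _ in range(n)]
for i in range(1, n):
    m_fwd[i] = psi.T @ (phi[i - 1] * m_fwd[i - 1])

# Backward pass: m_bwd[i] is the message arriving at node i from the right.
m_bwd = [np.ones(2) for _ in range(n)]
for i in range(n - 2, -1, -1):
    m_bwd[i] = psi @ (phi[i + 1] * m_bwd[i + 1])

# Belief at each node = unary potential times both incoming messages, normalized.
for i in range(n):
    b = phi[i] * m_fwd[i] * m_bwd[i]
    print(f"P(X{i + 1}) =", b / b.sum())
</syntaxhighlight>

Each belief comes out proportional to the node's unary potential times its incoming forward and backward messages, which is the chain special case of Pearl's general message-passing rules for trees.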

==Personal reflection==

* reflection post about getting involved with AI safety

[[Category:AI safety]]