List of AI safety projects I could work on

From Issawiki
 
(November 2020)

small chunk = I can make useful/meaningful progress within one hour (at the limit, think of things like [[Duolingo]], where each task can be accomplished in a few seconds)

big chunk = I cannot make useful/meaningful progress within an hour, because it needs more concentrated thinking time
  
 
==Writing up my opinions==

* writing some sort of overview of my beliefs regarding AI safety. Like, if I were explaining things from scratch to someone, what would that sound like? (big chunk)
* my current take on AI timelines (big chunk)
* my current take on AI takeoff (big chunk)
* my current take on MIRI vs Paul (big chunk)
 
==Research projects==

* continue working out [[AI takeoff]] disagreements (big chunk)
* continue working out MIRI vs Paul (big chunk)
* HRAD paper with [[David Manheim]] (big chunk)
* concrete plausible scenarios for what could happen when AGI comes around (big chunk)
* deep dive into human evolution to figure out what it might tell us about AI takeoff/AI timelines (big chunk)
* comparison of AI and nukes (big chunk)
* think about [[AI polarization]], using examples like COVID and climate change (big chunk)
* write up my understanding of people's views on [[whole brain emulation]] (big chunk)
  
 
==Technical AI safety==

* I still don't feel motivated to do this since I don't feel convinced by any existing worldview/visualization that has been put forth (by e.g. MIRI or Paul) (big chunk)
  
 
==AI safety wiki==

* Writing articles for AI safety wiki (big chunk)
  
 
==The Vipul Strategy==

* Expanding [[AI Watch]] to cover agendas, documents, more people/orgs with more columns filled in, graphs, automated [https://aiwatch.issarice.com/compare.php?by=organization&for=2020 yearly review] of orgs, etc. (chunk size depends on task -- adding data is small chunk but bigger strategic additions are big chunk)
* Updates to timeline of AI safety and other relevant timelines (small chunk)
* Timeline of [[Eliezer Yudkowsky]] publications https://github.com/riceissa/project-ideas/issues/16 (small chunk)
* Wikipedia pages for AGI projects https://github.com/riceissa/project-ideas/issues/22 (big chunk)
  
 
==Increase activity on LessWrong==

* Ask lots of questions on LW, e.g.: (chunk size depends on question; I believe I can rush through and ask certain questions within an hour)
** Applicability of FDT/UDT/TDT to everyday life https://github.com/riceissa/project-ideas/issues/44
  
 
==Exposition of technical topics==

* [[Solomonoff induction]] guide (I think I've already figured out things here that are not explained anywhere, so I think I could write the best guide on it, but it's not clear how important this is for people to understand) (big chunk)
* [[Judea Pearl|Pearl]] [[belief propagation]] guide (big chunk)
* Summarizing/distilling work that has been done in [[decision theory]] (big chunk)
* starting something like [[RAISE]]; see [[There is room for something like RAISE]] (big chunk)
  
 
==Personal reflection==

* reflection post about getting involved with AI safety (big chunk)
  
 
==Philosophy==

* human values/[[deliberation]] (big chunk)
  
 
==Learn stuff==

* learn more machine learning so I can better follow some discussions (big chunk)
* learn more economics. I somewhat often feel confused about how to think about various AI strategy questions (e.g. history of growth, how AI systems might cooperate and increase economies of scale, what percentage of GDP might be used for computing costs), and I suspect part of the reason is that I don't know enough economics. (big chunk)
* understand the [[neuromorphic AI]] pathways that people like [[Steve Byrnes]] and [[gwern]] have been talking about (small chunk)
* there are a lot of blog posts that I could catch up on (small chunk)
  
 
==See also==

Revision as of 00:59, 15 November 2020
