List of AI safety projects I could work on

(November 2020)

==Writing up my opinions==

* writing some sort of overview of my beliefs regarding AI safety. Like, if I were explaining things from scratch to someone, what would that sound like?
* my current take on [[AI timelines]]
* my current take on [[AI takeoff]]
* my current take on MIRI vs Paul

==Research projects==

* continue working out [[AI takeoff]] disagreements
* continue working out MIRI vs Paul
* HRAD paper with [[David Manheim]]
* concrete plausible scenarios for what could happen when AGI comes around
* deep dive into human evolution to figure out what the heck it might tell us about AI takeoff/AI timelines
* comparison of AI and nukes
* think about AI polarization, using examples like COVID and climate change

==AI safety wiki==

* Writing articles for AI safety wiki

==The Vipul Strategy==

* Expanding [[AI Watch]] to cover agendas, documents, more people/orgs with more columns filled in, graphs, automated [https://aiwatch.issarice.com/compare.php?by=organization&for=2020 yearly review] of orgs, etc.
* Updates to timeline of AI safety and other relevant timelines
* Timeline of [[Eliezer Yudkowsky]] publications https://github.com/riceissa/project-ideas/issues/16
* Wikipedia pages for AGI projects https://github.com/riceissa/project-ideas/issues/22

==Increase activity on LessWrong==

==Exposition of technical topics==

* [[Solomonoff induction]] guide (I think I've already figured out things here that are not explained anywhere, so I think I could write the best guide on it, but it's not clear how important this is for people to understand; a rough sketch of the basic setup is below this list)
* [[Judea Pearl|Pearl]] [[belief propagation]] guide
* Summarizing/distilling work that has been done in [[decision theory]]
* starting something like [[RAISE]]; see [[There is room for something like RAISE]]
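
As a rough sketch of the standard setup these guides would start from (just the textbook definitions in the usual notation, not the unexplained material alluded to above): the Solomonoff prior weights a binary string <math>x</math> by

:<math>M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},</math>

where <math>U</math> is a universal prefix machine, the sum runs over programs <math>p</math> whose output begins with <math>x</math>, and <math>\ell(p)</math> is the length of <math>p</math> in bits; prediction then uses <math>M(xb)/M(x)</math> as the probability that <math>x</math> continues with bit <math>b</math>. The belief propagation guide would similarly center on the sum-product message update for a pairwise factorization,

:<math>m_{i \to j}(x_j) = \sum_{x_i} \phi_i(x_i)\, \psi_{ij}(x_i, x_j) \prod_{k \in N(i) \setminus \{j\}} m_{k \to i}(x_i),</math>

with node potentials <math>\phi_i</math>, edge potentials <math>\psi_{ij}</math>, and marginals proportional to <math>\phi_i(x_i) \prod_{k \in N(i)} m_{k \to i}(x_i)</math>.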
 
==Personal reflection==

* reflection post about getting involved with AI safety
 
==Philosophy==
  
* human values/[[deliberation]]
 
==Learn stuff==
 
* learn more machine learning so I can better follow some discussions
* learn more economics. I somewhat often feel confused about how to think about various AI strategy questions (e.g. history of growth, how AI systems might cooperate and increase economies of scale, what percentage of GDP might be used for computing costs) and I suspect part of the reason is that I don't know enough economics.

==See also==

* [[How meta should AI safety be?]]

[[Category:AI safety meta]]
