There is room for something like RAISE

Self-studying all of the technical prerequisites for [[technical AI safety research]] is hard. The most that people new to the field get is a list of textbooks. I think there is room for something like what [[RAISE]] was trying to become: some sort of community/detailed resource/support structure/etc. for people studying this stuff.

Reasons for pessimism: if hiring capacity at AI safety orgs is limited and mainstream AI orgs only want to hire ML PhDs, then new people entering the field will basically get all the necessary training in school, leaving little demand for this kind of self-study support.
 
Here are some more concrete ideas:
  
* Detailed solutions for all of the prerequisite math books, e.g. for the ones listed at [https://intelligence.org/research-guide/]. I've started on one example of this at [https://taoanalysis.wordpress.com/] (though I'm writing that blog for other reasons as well). You might wonder: why not just ask on Stack Exchange or Quora? I already do that, but [[online question-answering services are unreliable]], and [[Unreliability of online question-answering services makes it emotionally taxing to write up questions|this unreliability makes it emotionally taxing to write up questions]].
* A network of tutors or people who have already worked through a particular book, whom you can ask questions in a way that is really ''low friction'' and has a ''high probability of getting a response''. A minimal implementation of this is to have a single tutor focusing on training/helping AI safety people, e.g. tutoring to get them quickly up to speed in some undergraduate subfield, or helping them digest "[[Logical Induction]]". This requires a kind of ADHD/"living library" mindset. See also [[Asynchronous support]].
* A grading network for AI safety people self-studying math. It would be great if people could "own" specific problems in specific math textbooks and be the contact person for those problems. Then, if you get stuck on something, you can email or otherwise contact them to get (1) a model solution, (2) grading/checking of your proof, (3) discussion of the problem and of interesting nearby ideas, etc.
* Writing up actually good explanations for things like [[Solomonoff induction]], [[belief propagation]], [[Markov chain Monte Carlo]], etc. Belief propagation in Pearl's book is actually okay (except for the [https://machinelearning.subwiki.org/wiki/User:IssaRice/Type_checking_Pearl%27s_belief_propagation_notation horrible notation]), but as [[Abram]] [https://www.lesswrong.com/posts/tp4rEtQqRshPavZsr/learn-bayes-nets says], it doesn't really tell you all the connections you need to know for rationality/AI safety stuff.
* Redpilling people about [[spaced repetition]] and other effective learning techniques.
* Wiki pages on specific papers might also be useful.
==See also==
* [[My take on RAISE]]
==References==
<references/>
==What links here==
{{Special:WhatLinksHere/{{FULLPAGENAME}} | hideredirs=1}}
  
 
[[Category:AI safety meta]]
[[Category:RAISE]]
[[Category:Claim]]
