There is room for something like RAISE

From Issawiki
Revision as of 22:34, 29 May 2020

Self-studying all of the technical prerequisites for technical AI safety research is hard. The most that people new to the field get is a list of textbooks. I think there is room for something like what RAISE was trying to become: some sort of community/detailed resource/support structure/etc for people studying this stuff.

Here are some more concrete ideas:

* Detailed solutions for all of the prerequisite math books, e.g. for the ones listed at [https://intelligence.org/research-guide/]. I've started on one example of this at [https://taoanalysis.wordpress.com/] (though I'm writing that blog for other reasons as well). You might wonder, why not Stack Exchange or Quora or something? I already do this, but [[online question-answering services are unreliable]], and this [[unreliability of online question-answering services makes it emotionally taxing to write up questions]].

* A network of tutors or people who have already worked through a particular book, where you can ask them questions in a really ''low friction'' and ''high probability of getting a response'' way. A minimal implementation of this is to have a single tutor focusing on training/helping AI safety people, e.g. tutoring to get them quickly up to speed in some undergraduate subfield, or helping them digest "[[Logical Induction]]". This requires a kind of ADHD/"living library" mindset.

* Writing up actually good explanations for things like [[Solomonoff induction]], [[belief propagation]], [[Markov chain Monte Carlo]], etc. Belief propagation in Pearl's book is actually ok (except for the [https://machinelearning.subwiki.org/wiki/User:IssaRice/Type_checking_Pearl%27s_belief_propagation_notation horrible notation]), but as [[Abram]] [https://www.lesswrong.com/posts/tp4rEtQqRshPavZsr/learn-bayes-nets says] it doesn't really tell you all the connections you need to know for rationality/AI safety stuff.
