There is room for something like RAISE

Self-studying all of the technical prerequisites for AI safety research is hard. The most that people new to the field typically get is a list of textbooks to work through. I think there is room for something like what RAISE was trying to become: some sort of community/detailed resource/support structure for people studying this material.

Reason for pessimism: if hiring capacity at AI safety orgs is limited, and mainstream AI orgs only want to hire ML PhDs, then new people entering the field will basically get all the necessary training in school anyway.

Here are some more concrete ideas:

  • Detailed solutions for all of the prerequisite math books, e.g. the ones listed at [1]. I've started on one example of this at [2] (though I'm writing that blog for other reasons as well). You might wonder: why not use Stack Exchange or Quora or something? I already do this, but online question-answering services are unreliable, and that unreliability makes it emotionally taxing to write up questions.
  • A network of tutors or people who have already worked through a particular book, whom you can ask questions in a way that is low-friction and has a high probability of getting a response. A minimal implementation of this is to have a single tutor focusing on training/helping AI safety people, e.g. tutoring them to get quickly up to speed in some undergraduate subfield, or helping them digest "Logical Induction". This requires a kind of ADHD/"living library" mindset. See also Asynchronous support.
  • A grading network for AI safety people self-studying math. It would be great if people could "own" specific problems in specific math textbooks and be the contact person for those problems. Then, if you get stuck on something, you can email or otherwise contact them to get (1) a model solution, (2) grading/checking of your proof, and (3) discussion of the problem and of interesting nearby ideas.
  • Writing up actually good explanations for things like Solomonoff induction, belief propagation, Markov chain Monte Carlo, etc.; a sketch of the kind of concrete code example such an explanation might include appears after this list. The treatment of belief propagation in Pearl's book is actually okay (except for the horrible notation), but as Abram says, it doesn't really tell you all the connections you need to know for rationality/AI safety stuff.
  • Redpilling people about spaced repetition and other effective learning techniques.
  • Wiki pages on specific papers might also be useful.
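As an illustration of the kind of concrete, runnable example an explanation of Markov chain Monte Carlo could include (a minimal sketch of my own, not part of any existing write-up; the function names are made up for this example), here is a random-walk Metropolis sampler in Python targeting a standard normal distribution:

  import math
  import random

  def log_density(x):
      # Unnormalized log-density of the target distribution: a standard normal.
      return -0.5 * x * x

  def metropolis(n_samples, step_size=1.0, x0=0.0, seed=0):
      # Random-walk Metropolis: propose x' = x + noise, accept with
      # probability min(1, p(x') / p(x)); otherwise keep the current x.
      rng = random.Random(seed)
      x = x0
      samples = []
      for _ in range(n_samples):
          proposal = x + rng.gauss(0.0, step_size)
          log_accept = log_density(proposal) - log_density(x)
          if rng.random() < math.exp(min(0.0, log_accept)):
              x = proposal
          samples.append(x)
      return samples

  if __name__ == "__main__":
      draws = metropolis(50000)
      mean = sum(draws) / len(draws)
      var = sum((d - mean) ** 2 for d in draws) / len(draws)
      # For a standard normal target these should be close to 0 and 1.
      print("sample mean:", round(mean, 3), "sample variance:", round(var, 3))

Even a toy like this makes the accept/reject step concrete in a way that purely verbal explanations of MCMC often leave vague.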
