The AI safety technical pipeline does not teach people how to start having novel thoughts
Currently, the AI safety community does not have an explicit mechanism for teaching new people how to start having novel thoughts. The implicit approach seems to be something like "read these books, read these posts, participate in discussion, then hope you can think real good by osmosis".
As far as I can tell, this is the most that's been done:
- [Alignment Research Field Guide](https://www.greaterwrong.com/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide)
- [The Zettelkasten Method](https://www.greaterwrong.com/posts/NfdHG6oHBJ8Qxc26s/the-zettelkasten-method-1)
- [What makes people intellectually active?](https://www.greaterwrong.com/posts/XYYyzgyuRH5rFN64K/what-makes-people-intellectually-active)
I think it's not a fluke that all three have been written by Abram Demski; he seems to be literally the only person thinking about this stuff and writing about it!
Part of the problem might be that newcomers can't distinguish crackpots from geniuses.
I think there's way too little support structure in place for a new person to come along and actually do something useful. Even if they stick around, any agent foundations-type work they do will probably be useless.
What can be done about this? Some ideas:
- I think a list of easy open problems would be good, analogous to GitHub's "good first issue" label, or to the kinds of problems PhD advisors give their 1st- or 2nd-year graduate students.
- something something mentorship (I have no solutions here)