Mixed messaging regarding independent thinking
I think the AI safety community, and effective altruism in general, have some mixed messaging going on about whether it's good to be an "independent thinker". On one hand, you'll see top/elite people saying things like "we want super agenty people who can think on their own and do stuff on their own without needing guidance". On the other hand, the people who actually get hired at organizations, land internships, and so on often seem like extreme conformists who have never had an original thought in their lives. If you want agenty people, you have to actually reward agenty behavior instead of conformist behavior.
I think the recent (as of 2019-2020) talk about the unilateralist's curse can be read as pushing in the opposite direction, with elites now effectively saying "actually, you should be conformists after all".