Simple core of consequentialist reasoning
"the MIRI intuition that there is a small core of good consequentialist reasoning that is important for AI capabilities and that can be discovered through theoretical research." https://agentfoundations.org/item?id=1220
See also Nate's comment: https://agentfoundations.org/item?id=1228
I guess this is about the same sort of thing (though maybe not exactly the same): https://www.greaterwrong.com/posts/suxvE2ddnYMPJN9HD/realism-about-rationality