Simple core of consequentialist reasoning
"the MIRI intuition that there is a small core of good consequentialist reasoning that is important for AI capabilities and that can be discovered through theoretical research." https://agentfoundations.org/item?id=1220
See also Nate Soares's comment: https://agentfoundations.org/item?id=1228