Simple core of consequentialist reasoning

From Issawiki
Revision as of 00:26, 19 February 2020 by Issa (talk | contribs)

"the MIRI intuition that there is a small core of good consequentialist reasoning that is important for AI capabilities and that can be discovered through theoretical research." https://agentfoundations.org/item?id=1220

See also Nate Soares's comment at https://agentfoundations.org/item?id=1228.

See also