Simple core of consequentialist reasoning


"the MIRI intuition that there is a small core of good consequentialist reasoning that is important for AI capabilities and that can be discovered through theoretical research." https://agentfoundations.org/item?id=1220

and Nate's comment https://agentfoundations.org/item?id=1228

This seems to be about the same sort of thing as (though maybe not exactly the same as) "realism about rationality": https://www.greaterwrong.com/posts/suxvE2ddnYMPJN9HD/realism-about-rationality

Saying there is a simple core of agency/consequentialist reasoning/rationality is not the same as saying that AGI as a whole will be simple (rather than messy). An AGI could have modules (e.g. for image recognition or language) that are very messy, while the core agency module remains simple.
