Simple core of consequentialist reasoning


"the MIRI intuition that there is a small core of good consequentialist reasoning that is important for AI capabilities and that can be discovered through theoretical research." https://agentfoundations.org/item?id=1220

See also Nate Soares's comment: https://agentfoundations.org/item?id=1228

A related idea (though maybe not exactly the same) is realism about rationality: https://www.greaterwrong.com/posts/suxvE2ddnYMPJN9HD/realism-about-rationality

Saying that there is a simple core of agency/consequentialist reasoning/rationality is not the same as saying that AGI will be simple (rather than messy). An AGI could have modules (e.g. for image recognition or language) that are very messy, while its core agency module is still simple.

See also the discussion at https://arbital.com/p/general_intelligence/ about the spectrum between "everything will need highly specialized algorithms" and "there is such a thing as 'general intelligence', which once you have it lets you do many things you weren't specifically programmed for". That is not exactly the same question, though: it is possible to have general intelligence where the general intelligence comes from a bunch of messy stuff rather than from a simple core algorithm.

See also