Simple core of consequentialist reasoning

"the MIRI intuition that there is a small core of good consequentialist reasoning that is important for AI capabilities and that can be discovered through theoretical research." https://agentfoundations.org/item?id=1220

Nate's comment at https://agentfoundations.org/item?id=1228 is also relevant.

"Realism about rationality" seems to be about the same sort of thing (though maybe not exactly the same): https://www.greaterwrong.com/posts/suxvE2ddnYMPJN9HD/realism-about-rationality

Saying there is a simple core of agency/consequentialist reasoning/rationality is not the same thing as saying that AGI will be simple (rather than messy). An AGI could have very messy modules (e.g. for image recognition or language) while its core agency module is still simple.
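
As a toy illustration of that separation (my own sketch, not something from any of the sources cited here; all names are hypothetical): the "agency core" below is just an argmax over expected utility, a few lines of code, while the perception and world-model modules it calls are opaque function parameters that could be arbitrarily messy inside.

<syntaxhighlight lang="python">
# Toy sketch of a "simple core" of consequentialist reasoning.
# Illustrative only; not taken from the cited sources.
from typing import Callable, Iterable, List, Tuple, TypeVar

Obs = TypeVar("Obs")      # whatever the perception module emits
State = TypeVar("State")  # whatever the world model predicts
Action = TypeVar("Action")

def choose_action(
    observe: Callable[[], Obs],
    # World model: maps (observation, action) to a distribution over
    # predicted outcomes, as (probability, state) pairs. This could be a
    # huge messy learned model; the core doesn't care.
    world_model: Callable[[Obs, Action], Iterable[Tuple[float, State]]],
    utility: Callable[[State], float],
    actions: List[Action],
) -> Action:
    """The entire 'agency core': pick the action with highest expected utility."""
    obs = observe()

    def expected_utility(action: Action) -> float:
        return sum(p * utility(state) for p, state in world_model(obs, action))

    return max(actions, key=expected_utility)
</syntaxhighlight>

The point is purely structural: the argmax loop stays tiny no matter how complicated observe and world_model get inside.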

See also the discussion at https://arbital.com/p/general_intelligence/ of the spectrum between "everything will need super-specialized algorithms" and "there is such a thing as 'general intelligence', such that once you have it, you can do a bunch of things you weren't specifically programmed to do".

==See also==