Simple core of consequentialist reasoning

"the MIRI intuition that there is a small core of good consequentialist reasoning that is important for AI capabilities and that can be discovered through theoretical research." https://agentfoundations.org/item?id=1220

See also Nate's comment: https://agentfoundations.org/item?id=1228

"Realism about rationality" seems to be about the same sort of thing (though maybe not exactly the same): https://www.greaterwrong.com/posts/suxvE2ddnYMPJN9HD/realism-about-rationality

Saying that there is a simple core of agency/consequentialist reasoning/rationality is not the same as saying that AGI itself will be simple (rather than messy). For example, an AGI could have very messy modules (e.g. for image recognition or language) while its core agency module remains simple.

==See also==

* [[Hardware-driven vs software-driven progress]]