Different senses of claims about AGI

When making claims about AGI, such as "how much compute will AGI use?" or "will AGI be clean or messy?", there are several senses/scenarios of AGI we could be talking about:

  • claims about the first AGI that will probably appear
  • claims about the most likely AGI after it has existed for a while
  • claims about an ideal aligned AGI
  • claims about an AGI built using machine learning
  • claims about a theoretically possible "optimal" AGI
  • claims about an AGI that is likely to appear under some given assumptions

An example is this comment by Nate (https://agentfoundations.org/item?id=1228): "Indeed, if I thought one had to understand good consequentialist reasoning in order to design a highly capable AI system, I’d be less worried by a decent margin." This reflects the general MIRI view that you can get to the first AGI without really understanding anything, whereas to get an aligned AGI you do need to understand things.

Another (possibly the same) example: will understanding the rationality of "ideal" agents help us build AI systems that we understand well? (This is question (2) in https://www.greaterwrong.com/posts/suxvE2ddnYMPJN9HD/realism-about-rationality#comment-Dk5LmWMEL55ufkTB5.) One could believe this for aligned AI systems but not for unaligned/arbitrary AI systems.

Category: AI safety