Different senses of claims about AGI


When making claims about AGI, such as "how much compute will AGI use?" or "will AGI be clean or messy?", there are several distinct senses/scenarios of AGI we could be talking about:

  • claims about the first AGI that will actually be built
  • claims about an ideal aligned AGI
  • claims about a theoretically possible "optimal" AGI

An example is this comment by Nate Soares: "Indeed, if I thought one had to understand good consequentialist reasoning in order to design a highly capable AI system, I’d be less worried by a decent margin." This reflects the general MIRI view that you can get to the first AGI without really understanding anything, whereas to get an aligned AGI you do need to understand things.