Search results

  • # a highly intelligent AI would see things humans cannot see, can arrive at unanticipated solutions
    5 KB (765 words) - 02:32, 28 March 2021
  • ...an "Oh, it looks like surprisingly more progress was made toward generally intelligent algorithms than we thought."] ||
    5 KB (672 words) - 20:19, 11 August 2021
  • [[Eliezer]] has lots of terms to describe how intelligent/powerful/whatever an agent is. These were mostly introduced publicly in his
    782 bytes (108 words) - 20:56, 26 March 2021
  • ...t to play cooperate in one-shot PD, doesn't it imply that all sufficiently intelligent and reflective agents across all possible worlds should do a global trade a
    7 KB (1,148 words) - 06:18, 21 May 2020
  • truth-seeking seems like a convergent value for highly intelligent/advanced organisms. however, it doesn't seem to work completely, e.g. see h ...o produce intelligence that don't involve evolution? e.g. if we found some intelligent alien species, would they ''have'' to have arisen from evolutionary processe
    8 KB (1,248 words) - 17:50, 9 April 2021
  • ...or metric spaces, and this is often helpful. But there is no feedback, no intelligent process that takes your work and says "you failed" or "looks good to me". T
    2 KB (414 words) - 00:51, 17 July 2021
  • ...es argument that sufficiently intelligent AI systems could become the most intelligent species, in which case humans could lose the ability to create a valuable a
    334 bytes (42 words) - 04:22, 22 October 2020
  • * [[Objective morality argument against AI safety]]: All sufficiently intelligent beings converge to some objective morality (either because [[moral realism]
    8 KB (1,245 words) - 00:29, 24 July 2022