Search results
  • ...I safety. Resolving this is important for thinking about the shape of [[AI takeoff]]. ...mpy]]", i.e. coming in a small number of chunks that contribute greatly to AI capabilities; there are a small number of discrete insights required to cre
    13 KB (1,917 words) - 23:45, 19 May 2021
  • ...-hard takeoff" is a pretty horrible name; maybe something like "continuous takeoff + FOOM/locality" is better) ...telligence, and then once it crosses some threshold, a stereotypical "hard takeoff" happens.
    8 KB (1,206 words) - 01:43, 2 March 2021
  • ...disagreements in AI safety''' which collects the list of things people in AI safety seem to most frequently and deeply disagree about. ...d/1wI21XP-lRa6mi5h0dq_USooz0LpysdhS/view Clarifying some key hypotheses in AI alignment].</ref> (there are more posts like this, i think? find them)
    21 KB (3,254 words) - 11:00, 26 February 2022
  • ...ation of takeoff shape has more to do with the inside view details of what AI will look like, and doesn't have anything to do with whether or not the Ind ...rong.com/posts/aFaKhG86tTrKvtAnT/against-gdp-as-a-metric-for-timelines-and-takeoff-speeds
    2 KB (282 words) - 00:07, 2 March 2021
  • ...n or gradual, how quickly economic activity will accelerate after advanced AI systems appear, and so on). * [[List of disagreements in AI safety#Takeoff dynamics]]
    734 bytes (101 words) - 01:01, 5 March 2021