Search results

  • [[Category:AI safety]]
    815 bytes (133 words) - 19:18, 27 February 2021
  • [[Category:AI safety]]
    212 bytes (21 words) - 19:10, 27 February 2021
  • [[Category:AI safety]]
    14 KB (2,432 words) - 09:13, 8 January 2023
  • ...ctually better? if AGI is developed in 200 years, what does this say about ai xrisk? this could happen for several reasons: ...l be much harder than building agi", then this might push you to think "ai safety is basically impossible for humans without intelligence enhancement to solv
    1 KB (248 words) - 21:33, 4 April 2021
  • '''AI prepping''' refers to selfish actions one can take in order to survive when ... It's not clear whether any really good actions for AI prepping exist. Some reasons for optimism are:
    6 KB (968 words) - 04:20, 26 November 2022
  • ...ften used by [[Robin Hanson]] to describe things like innovation, secrets, AI progress, citations. ...ot lumpy. Aren't power laws lumpy? actually maybe he's only saying that if AI progress is lumpy, then its citation patterns should be even lumpier than u
    4 KB (648 words) - 06:53, 3 June 2020
  • The '''Laplace's rule of succession argument for AI timelines''' uses [[wikipedia:Rule of succession|Laplace's rule of successi ....issarice.com/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety#AI_timelines
    1 KB (218 words) - 02:04, 5 April 2021
  • ...analysis of experts' AI timelines to come up with some overall estimate of AI timelines. It punts the question of "but where did the experts get their op ..."AI researchers attending X conference", "AI researchers in general", "AI safety researchers").
    641 bytes (101 words) - 05:10, 9 April 2021
  • I think [[Eliezer]]'s point is that when there's more hardware behind an AI project, the Kasparov window is narrower. ...series of blog posts from [[AI Impacts]] https://aiimpacts.org/?s=time+for+ai+to+cross
    320 bytes (55 words) - 22:02, 4 January 2021
  • ....com/posts/6skeZgctugzBBEBw3/ai-alignment-podcast-an-overview-of-technical-ai-alignment] [[Category:AI safety]]
    746 bytes (115 words) - 20:41, 12 April 2021
  • ...(all of our other problems are so pressing that we're willing to gamble on AI working out by default). I don't think this argument makes much sense. ...at’s one of the reasons why I’m focusing on AI safety, rather than bio-safety.
    1 KB (261 words) - 20:02, 23 June 2021
  • ...f>[https://forum.effectivealtruism.org/users/richard_ngo richard_ngo]. "AI safety research engineer at DeepMind (all opinions my own, not theirs). I'm from N ... [[Category:AI safety]]
    584 bytes (88 words) - 19:11, 27 February 2021
  • Incremental reading provides feeling of emotional safety (which is something that [[Anki]] does in general, but where I think increm ... feeling like i should maybe ankify some of my ai safety reading from LW, but it's been hard to think of what to even put in. some t
    4 KB (687 words) - 01:10, 17 July 2021
  • ...eal. This is possible with math, but i'm not sure how to do this with [[AI safety]] (it's not like there's problems i can solve).
    8 KB (1,497 words) - 00:01, 2 August 2021
  • ...architecture''' is used to mean ... something like the basic design of the AI system (like what kind of machine learning is being used in what way, what ..."mental architecture", "cognitive architecture", the "architecture of the AI"
    7 KB (1,128 words) - 23:18, 23 June 2020
  • [[Category:AI safety]]
    1 KB (170 words) - 06:54, 3 June 2020
  • * [[Counterfactual of dropping a seed AI into a world without other capable AI]] [[Category:AI safety]]
    1 KB (180 words) - 09:49, 6 May 2020
  • ...alism about rationality''' is a topic of debate among people working on AI safety. The "something like" refers to the fact that the very topic of ''what the ...uct to achieve an agreed-upon aim, namely helping to detect/fix/ensure the safety of AGI systems.)
    7 KB (1,110 words) - 20:24, 26 June 2020
  • ...nd its ability to solve alignment problems (i.e. design better ''aligned'' AI systems). ...r is imagining some big leap/going from just humans to suddenly superhuman AI, whereas paul is imagining a more smooth transition that powers his optimis
    3 KB (477 words) - 00:01, 30 May 2020
  • ...t [[missing gear]]'/'one wrong number' dynamic, AND each insight makes the AI a little better), then you can't specialize in "intelligence". [[Category:AI safety]]
    577 bytes (96 words) - 23:01, 6 July 2020
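
One of the results above points to the Laplace's rule of succession argument for AI timelines. As a minimal sketch of the underlying calculation (assuming the argument treats each year without AGI as one failed trial and taking 1956 as a start year; both the framing and the start year are illustrative assumptions, not taken from that page):

<pre>
# Sketch of a Laplace's-rule-of-succession timeline estimate. The start year
# (1956) and the "one failed trial per year without AGI" framing are assumed
# here for illustration only.

def rule_of_succession(successes, trials):
    # Laplace's rule of succession: P(next trial succeeds) = (s + 1) / (n + 2)
    return (successes + 1) / (trials + 2)

years_elapsed = 2021 - 1956  # failed "trials" so far, under the assumed framing
p_next_year = rule_of_succession(0, years_elapsed)
print(round(p_next_year, 3))  # ~0.015, i.e. roughly a 1.5% chance in the next year
</pre>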
