Search results

  • ...(all of our other problems are so pressing that we're willing to gamble on AI working out by default). I don't think this argument makes much sense. ...at’s one of the reasons why I’m focusing on AI safety, rather than bio-safety.
    1 KB (261 words) - 20:02, 23 June 2021
  • ...f>[https://forum.effectivealtruism.org/users/richard_ngo richard_ngo]. "AI safety research engineer at DeepMind (all opinions my own, not theirs). I'm from N [[Category:AI safety]]
    584 bytes (88 words) - 19:11, 27 February 2021
  • Incremental reading provides feeling of emotional safety (which is something that [[Anki]] does in general, but where I think increm feeling like i should maybe ankify some of my ai safety reading from LW, but it's been hard to think of what to even put in. some t
    4 KB (687 words) - 01:10, 17 July 2021
  • ...eal. This is possible with math, but i'm not sure how to do this with [[AI safety]] (it's not like there's problems i can solve).
    8 KB (1,497 words) - 00:01, 2 August 2021
  • ...architecture''' is used to mean ... something like the basic design of the AI system (like what kind of machine learning is being used in what way, what ..."mental architecture", "cognitive architecture", the "architecture of the AI"
    7 KB (1,128 words) - 23:18, 23 June 2020
  • [[Category:AI safety]]
    1 KB (170 words) - 06:54, 3 June 2020
  • * [[Counterfactual of dropping a seed AI into a world without other capable AI]] [[Category:AI safety]]
    1 KB (180 words) - 09:49, 6 May 2020
  • ...alism about rationality''' is a topic of debate among people working on AI safety. The "something like" refers to the fact that the very topic of ''what the ...uct to achieve an agreed-upon aim, namely helping to detect/fix/ensure the safety of AGI systems.)
    7 KB (1,110 words) - 20:24, 26 June 2020
  • ...nd its ability to solve alignment problems (i.e. design better ''aligned'' AI systems). ...r is imagining some big leap/going from just humans to suddenly superhuman AI, whereas paul is imagining a more smooth transition that powers his optimis
    3 KB (477 words) - 00:01, 30 May 2020
  • ...t [[missing gear]]'/'one wrong number' dynamic, AND each insight makes the AI a little better), then you can't specialize in "intelligence". [[Category:AI safety]]
    577 bytes (96 words) - 23:01, 6 July 2020
  • ...the AI will not hit the human timescale keyhole." From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM. Whe ...challenge time in advance, rather than challenging at a point where their AI seemed just barely good enough, it was improbable that they'd make *exactly
    4 KB (728 words) - 16:56, 24 June 2020
  • [[Category:AI safety]]
    1 KB (130 words) - 19:55, 31 May 2021
  • [[Category:AI safety]]
    61 bytes (8 words) - 01:10, 18 May 2020
  • Self-studying all of the technical prerequisites for [[technical AI safety research]] is hard. The most that people new to the field get is a list of ...pessimism: If hiring capacity is limited at AI safety orgs and mainstream AI orgs only want to hire ML PhDs then new people entering the field will basi
    3 KB (447 words) - 18:34, 18 July 2021
  • * [[AI safety technical pipeline does not teach how to start having novel thoughts]] * [[AI safety is not a community]]
    931 bytes (138 words) - 01:30, 20 May 2020
  • Currently, the [[AI safety community]] does not have an explicit mechanism for teaching new people how [[Category:AI safety meta]]
    1 KB (225 words) - 02:28, 28 March 2021
  • ...lly, I think I've been in communities before, and being a part of the [[AI safety community]] does not feel like that. * AI safety has left the "hobbyist stage". People can actually now get paid to think ab
    894 bytes (159 words) - 02:28, 28 March 2021
  • I think the [[AI safety community]] and [[effective altruism]] in general has some mixed messaging [[Category:AI safety meta]]
    866 bytes (142 words) - 20:38, 18 May 2020
  • AI safety has a weird dynamic going on where: * There are discussions of things like AI timelines, assumptions of various technical agendas, etc., which reveals th
    3 KB (435 words) - 02:38, 28 March 2021
  • [[Category:AI safety meta]]
    1 KB (198 words) - 02:28, 28 March 2021
  • [[Category:AI safety meta]]
    283 bytes (42 words) - 21:27, 18 May 2020
  • * I'm not sure about the value of explaining things better in AI safety in general: it seems like this would significantly lower the bar to entry ( ...ing for "this is the most complete and coherent curriculum of technical AI safety learning in the world". I actually think it isn't too hard to just cobble t
    1 KB (218 words) - 21:33, 18 May 2020
  • ...nice thank you letter from a different person. In contrast, working on AI safety feels like .... there's absolutely no feedback on whether I'm doing anythin When you've been at AI safety for too long, you're so used to just "[staring] at a blank sheet of paper u
    1 KB (190 words) - 02:28, 28 March 2021
  • ...g else to work on AI safety''. Cue all the rationalists who "believe in AI safety" but don't do anything about it. ..., and the people who are actually spending their full-time attention on AI safety? I think this is a very important question, and I don't think anybody under
    2 KB (268 words) - 02:35, 28 March 2021
  • [[Category:AI safety meta]]
    1 KB (180 words) - 02:32, 28 March 2021
  • ...tty hard to find people openly complaining about how to get involved in AI safety. You can find some random comments, and there are occasional Facebook threa # For people who want to do technical AI safety research, how are they deciding between MIRI vs Paul vs other technical age
    1 KB (195 words) - 02:35, 28 March 2021
  • [[Category:AI safety meta]]
    188 bytes (31 words) - 02:34, 28 March 2021
  • ...ities that make a person better: has spent a lot of time thinking about AI safety (or is willing to spend a lot of time to catch up), not afraid to dig into ...e there is no consensus about what is "good"; both of these are true in AI safety
    602 bytes (108 words) - 02:34, 28 March 2021
  • I think one of the reasons that [[AI safety is not a community]] is that it's difficult to find these deep connections. [[Category:AI safety meta]]
    1 KB (195 words) - 02:35, 28 March 2021
  • ...rough existing evidence and getting better evidence (e.g. from progress in AI research), rather than due to less virtuous reasons like groupthink/prestig [[Category:AI safety meta]]
    910 bytes (138 words) - 02:35, 28 March 2021
  • ...[[Counterfactual of dropping a seed AI into a world without other capable AI]] [[Category:AI safety]]
    448 bytes (72 words) - 07:27, 20 May 2020
  • * content sharing rarely happens in AI [[Category:AI safety]]
    229 bytes (33 words) - 07:59, 20 May 2020
  • ...en with AI as well: that there is some sort of core insight that allows an AI to suddenly have much more control over the world, rather than gaining capa [[Category:AI safety]]
    1 KB (166 words) - 07:09, 15 June 2021
  • ...is also important for understanding Eliezer's view about what progress in AI looks like. see https://www.lesswrong.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress#The_Conceptual_Arguments
    2 KB (269 words) - 07:11, 17 June 2020
  • ==Resource overhang and AI takeoff== ...so.<ref>https://www.greaterwrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang</ref> See [[scaling hypothesis]].
    867 bytes (132 words) - 03:19, 24 February 2021
  • [[Category:AI safety]]
    329 bytes (40 words) - 20:58, 27 July 2020
  • '''Corrigibility''' is a term used in AI safety with multiple/unclear meanings. I think the term was originally used by [[MIRI]] to mean something like an AI that allowed human programmers to shut it off.
    773 bytes (110 words) - 23:29, 8 November 2021
  • ...about how "complete axiomatic descriptions" haven't been useful so far in AI, and how they aren't used to describe machine learning systems ...et at an easier spot by MIRI: "Techniques you can actually adapt in a safe AI, come the day, will probably have very simple cores — the sort of core co
    3 KB (480 words) - 20:17, 26 June 2020
  • [[List of disagreements in AI safety#Highly reliable agent designs]] [[Category:AI safety]]
    342 bytes (56 words) - 06:21, 27 May 2020
  • ...ts.stackexchange.com/users/273265/riceissa?tab=questions</ref><ref>https://ai.stackexchange.com/users/33930/riceissa?tab=questions</ref><ref>https://biol [[Category:AI safety meta]]
    1 KB (159 words) - 02:36, 28 March 2021
  • [[Category:AI safety meta]]
    428 bytes (69 words) - 02:39, 28 March 2021
  • '''AI timelines''' refers to the question of when we will see advanced AI technology. For now, see [[List of disagreements in AI safety#AI timelines]]
    691 bytes (93 words) - 01:36, 5 April 2021
  • * early advanced AI systems will be understandable in terms of HRAD's formalisms [https://eafor * helps AGI programmers fix problems in early advanced AI systems
    4 KB (521 words) - 20:18, 26 June 2020
  • ...big breakthrough? !! Nature of final piece !! Found by humans or found by AI? !! Length of lead time prior to final piece !! Number of pieces !! Explana ...essarily || Restricts the final piece to be about understanding, where the AI goes from "not understanding" to "understanding" something. ||
    2 KB (329 words) - 21:16, 9 June 2020
  • ...gence]], the missing gear argument does not require that the final part of AI development be a huge conceptual breakthrough; instead, the final piece is ...lly helpful (because it can finally automate some particular part of doing AI research). Then the first project that gets to that point can suddenly grow
    8 KB (1,275 words) - 21:42, 30 June 2020
  • ...icization of AI''' is a hypothetical concern that discussions about AI and AI risk will become politicized/politically polarized, similar to how discussi ...://www.greaterwrong.com/posts/x4tyb9di28b4n9EE2/trying-for-five-minutes-on-ai-strategy/comment/aPqz3GBypvreaMqnT
    1 KB (194 words) - 22:06, 11 July 2021
  • ...the '''extrapolation argument for AI timelines''' takes existing trends in AI progress as well as some cutoff level for AGI to extrapolate when we will g "How capable are the best AI systems today compared to AGI? At the current rate of progress, how long wi
    933 bytes (143 words) - 00:19, 12 July 2021
  • ...adual: before there is an AI that is good at self-improvement, there is an AI that is somewhat good at self-improvement, and so on. ...e moving fast already."<ref>https://meteuphoric.com/2009/10/16/how-far-can-ai-jump/</ref>
    3 KB (471 words) - 17:15, 24 June 2020
  • [[Category:AI safety]]
    1 KB (203 words) - 21:30, 30 June 2020
  • ...o all kinds of incremental technological precursors to AlphaGo in terms of AI technology, but they wouldn't be smooth precursors on a graph of Go-playing ...ugh before they make it out to the world? this question matters mostly for AI systems that produce a large qualitative shift in how good they are (like a
    3 KB (456 words) - 23:35, 2 August 2022
