Search results

  • ...in hindsight. The logic here is that the user always knows better than the AI." [https://docs.google.com/document/d/11QGpURtFF-JFnWjkdIybW9Q-mI3fiKJjhxc0 [[Category:AI safety]]
    890 bytes (143 words) - 23:00, 26 August 2020
  • [[Category:AI safety]]
    240 bytes (29 words) - 19:02, 23 September 2020
  • ...ation of takeoff shape has more to do with the inside view details of what AI will look like, and doesn't have anything to do with whether or not the Ind ...t: This is one reason why we shouldn’t use GDP extrapolations to predict AI timelines. It’s like extrapolating global mean temperature trends into th
    2 KB (282 words) - 00:07, 2 March 2021
  • "the second species argument that sufficiently intelligent AI systems could become the most intelligent species, in which case humans cou [[Category:AI safety]]
    334 bytes (42 words) - 04:22, 22 October 2020
  • ...l systems, followed by a sudden jump to extremely capable/[[Transformative AI|transformative]] systems. Another way to phrase sudden emergence is as "a d The term was coined by [[Ben Garfinkel]] in "[[On Classic Arguments for AI Discontinuities]]".<ref>https://docs.google.com/document/d/1lgcBauWyYk774gB
    1 KB (151 words) - 23:23, 25 May 2021
  • ...n or gradual, how quickly economic activity will accelerate after advanced AI systems appear, and so on). * [[List of disagreements in AI safety#Takeoff dynamics]]
    734 bytes (101 words) - 01:01, 5 March 2021
  • This is a '''list of arguments against working on AI safety'''. Personally I think the only one that's not totally weak is opportunity ...out how to affect the long-term future. See also [[Pascal's mugging and AI safety]].
    8 KB (1,245 words) - 00:29, 24 July 2022
  • * writing some sort of overview of my beliefs regarding AI safety. like, if i was explaining things from scratch to someone, what would that * my current take on [[AI timelines]] (vacation tier)
    6 KB (927 words) - 14:25, 4 February 2022
  • I often go back and forth between the following two approaches to AI safety: ...s/conferences. Trust that making a better community will lead to better AI safety work being done.
    873 bytes (144 words) - 02:33, 28 March 2021
  • ...to [[Pascal's mugging]]. The critic of AI safety argues that working on AI safety has a very small probability of a very big payoff, which sounds suspicious. * Argue that reducing x-risk from AI safety is more like a 1% chance than like an astronomically small chance.
    1 KB (147 words) - 22:16, 17 November 2020
  • #redirect [[Pascal's mugging and AI safety]]
    44 bytes (6 words) - 23:24, 12 November 2020
  • ...er Yudkowsky]]'s [[FOOM]] scenario, arguing that the transition from early AI systems to superintelligent systems will not be so immediate, but these vie ...y; even in a [[hard takeoff]] (i.e. "discontinuous takeoff") scenario, the AI system's capability can be modeled as a continuous function.
    773 bytes (106 words) - 22:16, 1 March 2021
  • [[Category:AI safety]] [[Category:AI safety organizations]]
    105 bytes (14 words) - 19:54, 22 March 2021
  • [[Category:AI safety]]
    3 KB (415 words) - 21:51, 12 March 2021
  • ...t thing is to be able to have separate topics, like my default inbox vs ai safety vs day job.
    2 KB (305 words) - 01:17, 17 July 2021
  • ...ct, it seems difficult to tell whether we've "won" or not. For example, an AI might convincingly explain to us that things are going well even when they I think a big part of why I am more pessimistic than most people in the [[AI safety community]] is that others think detecting an "[[existential win]]" will be
    691 bytes (115 words) - 02:40, 28 March 2021
  • [[Category:AI safety]]
    305 bytes (48 words) - 01:02, 1 December 2020
  • ...as to cause the creation of DeepMind and OpenAI, and to accelerate overall AI progress. I’m not saying that he’s necessarily right, and I’m not say * [[List of arguments against working on AI safety]]
    2 KB (353 words) - 21:23, 6 November 2021
  • ...ers will always be optimizing for something like the scientific ability of AI systems. [[Category:AI safety]]
    1 KB (178 words) - 04:17, 30 August 2021
  • ...osaic AI? I think the scaling hypothesis implies prosaic AI, but a prosaic AI can make use of lots of different algorithms? * https://www.greaterwrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang/comment/jbD8siv7GMWxRro43
    886 bytes (118 words) - 00:45, 12 March 2021
  • ...this operationalization has been cited by many others in discussions of [[AI takeoff]]. [[Category:AI safety]]
    1,018 bytes (118 words) - 23:48, 25 February 2021
  • '''Paul Christiano''' is an AI safety researcher, previously at [[OpenAI]] and currently for some undisclosed pro [[Category:AI safety]]
    234 bytes (27 words) - 23:00, 25 February 2021
  • [[Category:AI safety]]
    416 bytes (61 words) - 23:03, 25 February 2021
  • ...of knowing about AI takeoff''' is about the "so what?" of knowing which [[AI takeoff]] scenario will happen. How will our actions change if we expect a ...ent) or short-term consumption. In contrast, with more continuous takeoff, AI prepping becomes relatively more important.
    1 KB (181 words) - 02:13, 5 March 2021
  • ...useful isn't to shift MIRI or paul; it's so that new people coming into AI safety will pick the "correct" agenda to work on with higher probability. [[Category:AI safety]]
    274 bytes (44 words) - 02:30, 28 March 2021
  • * [[Emotional difficulties of AI safety research]]
    1 KB (160 words) - 18:21, 18 July 2021
  • * [[Is AI safety no longer a scenius?]] [[Category:AI safety meta]]
    557 bytes (79 words) - 12:44, 7 February 2022
  • ...rofessionalized and prestigious. As Nielsen says (abstractly, not about AI safety in particular): "A field that is fun and stimulating when 50 people are inv ...as scenius? Or try to work on [[mechanism design]] so that the larger [[AI safety community]] is more functional than existing "eternal september" type event
    2 KB (337 words) - 02:34, 28 March 2021
  • ...ties arising from the subject matter itself, without reference to the [[AI safety community]] * [[AI safety has many prerequisites]]
    950 bytes (132 words) - 18:25, 18 July 2021
  • * a point originally made by [[Wei Dai]] is that if an AI is corrigible to its human operators, then it may have to forgo certain kin [[Category:AI safety]]
    271 bytes (45 words) - 02:30, 28 March 2021
  • ...able mathematical conjectures" to get an [[outside view]] probability of [[AI timelines]]. The [[Open Philanthropy]] report on semi-informative priors is [[Category:AI safety]]
    824 bytes (111 words) - 00:14, 12 July 2021
  • ...ld the AI in less than a year (i.e. not including any of the data that the AI will use to learn from)? [[Category:AI safety]]
    523 bytes (98 words) - 23:32, 18 June 2021
  • ...esis that once the first highly capable AI system is developed, thereafter AI systems will extremely rapidly improve to the level of [[superintelligence] The term was coined by [[Ben Garfinkel]] in "[[On Classic Arguments for AI Discontinuities]]".<ref>https://docs.google.com/document/d/1lgcBauWyYk774gB
    1 KB (150 words) - 23:22, 25 May 2021
  • ...n''' is an economist who has also written a lot about the future including AI stuff. [[Category:AI safety]]
    122 bytes (20 words) - 23:33, 18 June 2021
  • [[Category:AI safety]]
    500 bytes (68 words) - 20:18, 11 August 2021
  • [[Category:AI safety]]
    358 bytes (48 words) - 20:19, 11 August 2021
  • ...be convinced to give high approval to basically ''every'' action. Once the AI becomes good enough at persuasion, it wouldn't necessarily be malicious, bu Maybe the answer is something like "But you gradually train the AI, so at every point it's seeking the approval of a smarter amplified system,
    846 bytes (132 words) - 17:24, 15 October 2021
  • [[Category:AI safety]]
    242 bytes (35 words) - 23:37, 8 November 2021
  • [[Category:AI safety]]
    184 bytes (23 words) - 23:38, 8 November 2021
  • * [[Human safety problem]] * [[Difficulty of AI alignment]]
    243 bytes (33 words) - 10:59, 26 February 2022
  • [[Aligning smart AI using slightly less smart AI]] [[Category:AI safety]]
    141 bytes (17 words) - 11:10, 26 February 2022
  • ...tems slightly smarter than ourselves, and from there, each "generation" of AI systems will align slightly smarter systems, and so on. [[Category:AI safety]]
    846 bytes (111 words) - 11:21, 26 February 2022
  • ...ct is an important part of the plan for preventing [[existential doom from AI]]. * Make progress on the full (i.e. not restricted to a limited AI system like present-day systems or [[minimal AGI]]) alignment problem faste
    2 KB (218 words) - 15:15, 26 February 2022
  • [[Category:AI safety]]
    244 bytes (26 words) - 14:42, 26 February 2022
  • [[Category:AI safety]]
    172 bytes (19 words) - 20:55, 18 March 2022
  • [[Category:AI safety]]
    144 bytes (25 words) - 22:25, 12 April 2022
  • ...lesswrong.com/s/n945eovrA3oDueqtq/p/hwxj4gieR7FWNwYfa Ngo and Yudkowsky on AI capability gains] ...hether there will be a period of rapid economic progress from "pre-scary" AI before "scary" cognition appears (Eliezer doesn't think this is likely, but
    6 KB (948 words) - 21:27, 1 August 2022
