Search results

Page title matches

  • ...sagreements in AI safety''' which collects the list of things people in AI safety seem to most frequently and deeply disagree about. ...d/1wI21XP-lRa6mi5h0dq_USooz0LpysdhS/view Clarifying some key hypotheses in AI alignment].</ref> (there are more posts like this, i think? find them)
    21 KB (3,254 words) - 11:00, 26 February 2022
  • People in [[AI safety]] tend to [[List of disagreements in AI safety|disagree about many things]]. However, there is also wide agreement about s * advanced AI will have a huge impact on the world
    2 KB (272 words) - 01:33, 13 May 2020
  • ...(all of our other problems are so pressing that we're willing to gamble on AI working out by default). I don't think this argument makes much sense. ...at’s one of the reasons why I’m focusing on AI safety, rather than bio-safety.</p>
    1 KB (261 words) - 20:02, 23 June 2021
  • * [[AI safety technical pipeline does not teach how to start having novel thoughts]] * [[AI safety is not a community]]
    931 bytes (138 words) - 01:30, 20 May 2020
  • Currently, the [[AI safety community]] does not have an explicit mechanism for teaching new people how [[Category:AI safety meta]]
    1 KB (225 words) - 02:28, 28 March 2021
  • ...lly, I think I've been in communities before, and being a part of the [[AI safety community]] does not feel like that. * AI safety has left the "hobbyist stage". People can actually now get paid to think ab
    894 bytes (159 words) - 02:28, 28 March 2021
  • [[Category:AI safety meta]]
    1 KB (198 words) - 02:28, 28 March 2021
  • [[Category:AI safety meta]]
    283 bytes (42 words) - 21:27, 18 May 2020
  • ...nice thank you letter from a different person. In contrast, working on AI safety feels like .... there's absolutely no feedback on whether I'm doing anythin When you've been at AI safety for too long, you're so used to just "[staring] at a blank sheet of paper u
    1 KB (190 words) - 02:28, 28 March 2021
  • ...g else to work on AI safety''. Cue all the rationalists who "believe in AI safety" but don't do anything about it. ..., and the people who are actually spending their full-time attention on AI safety? I think this is a very important question, and I don't think anybody under
    2 KB (268 words) - 02:35, 28 March 2021
  • ...tty hard to find people openly complaining about how to get involved in AI safety. You can find some random comments, and there are occasional Facebook threa # For people who want to do technical AI safety research, how are they deciding between MIRI vs Paul vs other technical age
    1 KB (195 words) - 02:35, 28 March 2021
  • ...rough existing evidence and getting better evidence (e.g. from progress in AI research), rather than due to less virtuous reasons like groupthink/prestig [[Category:AI safety meta]]
    910 bytes (138 words) - 02:35, 28 March 2021
  • ...[[Counterfactual of dropping a seed AI into a world without other capable AI]] [[Category:AI safety]]
    448 bytes (72 words) - 07:27, 20 May 2020
  • This is a '''list of arguments against working on AI safety'''. Personally I think the only one that's not totally weak is opportunity ...out how to affect the long-term future. See also [[Pascal's mugging and AI safety]].
    8 KB (1,245 words) - 00:29, 24 July 2022
  • * writing some sort of overview of my beliefs regarding AI safety. like, if i was explaining things from scratch to someone, what would that * my current take on [[AI timelines]] (vacation tier)
    6 KB (927 words) - 14:25, 4 February 2022
  • I often go back and forth between the following two approaches to AI safety: ...s/conferences. Trust that making a better community will lead to better AI safety work being done.
    873 bytes (144 words) - 02:33, 28 March 2021
  • ...to [[Pascal's mugging]]. The critic of AI safety argues that working on AI safety has a very small probability of a very big payoff, which sounds suspicious. * Argue that reducing x-risk from AI safety is more like a 1% chance than like an astronomically small chance.
    1 KB (147 words) - 22:16, 17 November 2020
  • #redirect [[Pascal's mugging and AI safety]]
    44 bytes (6 words) - 23:24, 12 November 2020
  • ...as to cause the creation of DeepMind and OpenAI, and to accelerate overall AI progress. I’m not saying that he’s necessarily right, and I’m not say * [[List of arguments against working on AI safety]]
    2 KB (353 words) - 21:23, 6 November 2021
  • ...rofessionalized and prestigious. As Nielsen says (abstractly, not about AI safety in particular): "A field that is fun and stimulating when 50 people are inv ...as scenius? Or try to work on [[mechanism design]] so that the larger [[AI safety community]] is more functional than existing "eternal september" type event
    2 KB (337 words) - 02:34, 28 March 2021

Page text matches

  • * [[:Category:AI safety]] -- notes on AI safety strategy
    572 bytes (84 words) - 21:22, 19 March 2021
  • ...ent of the insights), what does that mean, in terms of what to do about AI safety? ...lw2.issarice.com/posts/mJ5oNYnkYrd4sD5uE/clarifying-some-key-hypotheses-in-ai-alignment] captures some of these, but i don't think this is a minimal set
    3 KB (528 words) - 20:58, 26 March 2021
  • [[Category:AI safety]]
    315 bytes (43 words) - 21:52, 28 November 2020
  • ...standing is that MIRI people/other smart people have prioritized technical AI alignment over WBEs because while WBEs would be safer if they came first, p * is there anything else relevant to AI strategy that i should know about?
    6 KB (850 words) - 19:16, 27 February 2021
  • ...n AI safety. Resolving this is important for thinking about the shape of [[AI takeoff]]. ...mpy]]", i.e. coming in a small number of chunks that contribute greatly to AI capabilities; there are a small number of discrete insights required to cre
    13 KB (1,917 words) - 23:45, 19 May 2021
  • ...rstand good consequentialist reasoning in order to design a highly capable AI system, I’d be less worried by a decent margin." the general MIRI view th ...ve this for aligned AI systems, but not believe it for unaligned/arbitrary AI systems.
    1 KB (212 words) - 22:14, 28 April 2020
  • * Safety [https://80000hours.org/podcast/episodes/danny-hernandez-forecasting-ai-progress/] ...nking that enable humans to meaningfully understand, supervise and control AI systems." [http://webcache.googleusercontent.com/search?q=cache:WxgzREJyPTk
    6 KB (882 words) - 05:20, 7 October 2020
  • https://aiimpacts.org/how-ai-timelines-are-estimated/ ...strategy is the best approach." [https://www.technologyreview.com/s/615181/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/
    649 bytes (76 words) - 03:59, 26 April 2020
  • ...e there is a stereotypical "soft takeoff" until around the point where the AI has somewhat-infra-human level general intelligence, and then once it cross ...the prior emergence of potentially-strategically-decisive AI — that is, AI capabilities that are potentially decisive when employed by some group of i
    8 KB (1,206 words) - 01:43, 2 March 2021
  • ...e is a small core of good consequentialist reasoning that is important for AI capabilities and that can be discovered through theoretical research." http [[Category:AI safety]]
    1 KB (217 words) - 19:11, 27 February 2021
  • ...tions about disagreements, particularly disagreements in ai safety about [[AI timelines]], [[takeoff speed]], [[simple core algorithm of agency]], and so ...t strong arguments on multiple sides.) Given this theory, it feels like AI safety should be a one-sided debate; it's a simple matter of fact, so we shouldn't
    7 KB (1,087 words) - 22:52, 8 February 2021
  • In the context of [[AI timelines]], the '''hardware argument''' is a common argument structure for ...0/10/why_early_singularities_are_softer.html and https://aiimpacts.org/how-ai-timelines-are-estimated/
    5 KB (740 words) - 00:24, 12 July 2021
  • ** AI safety vs something else? right now AI safety seems like the best candidate for the biggest/soonest change, but i want to ** if AI safety, then what technical agenda seems best? this matters for (1) deciding what
    2 KB (285 words) - 18:53, 7 September 2020
  • I want to understand better the MIRI case for thinking that ML-based safety approaches (like [[Paul Christiano]]'s agenda) are so hopeless as to not be # a highly intelligent AI would see things humans cannot see, can arrive at unanticipated solutions,
    5 KB (765 words) - 02:32, 28 March 2021
  • ...ned/doing things that are "good for Hugh" in some sense; (2) the resulting AI is competitive; (3) Hugh doesn't have a clue what is going on. Many explana ...arial training, verification, transparency, and other measures to keep the AI aligned fit into the scheme. This is a separate confusion I (and [https://w
    9 KB (1,597 words) - 00:27, 6 October 2020
  • [[Category:AI safety]]
    770 bytes (115 words) - 19:10, 27 February 2021
  • ...self-improvement." [https://lw2.issarice.com/posts/5WECpYABCT62TJrhY/will-ai-undergo-discontinuous-progress] "When we build AGI we will be optimizing the chimp-equivalent-AI for usefulness, and it will look nothing like an actual chimp (in fact it w
    3 KB (426 words) - 20:51, 15 March 2021
  • ...rategic advantage? / Unipolar outcome? (i.e. not distributed) Can a single AI project get massively ahead (either by investing way more effort into build ...an does our economy today. The issue is the relative rate of growth of one AI system, across a broad range of tasks, relative to the entire rest of the w
    4 KB (635 words) - 00:50, 5 March 2021
  • ...d its successor '''AlphaGo Zero''' are used to make various points in [[AI safety]]. * a single architecture / basic AI technique working for many different games ([[single-architecture generalit
    5 KB (672 words) - 20:19, 11 August 2021
  • * [[prosaic AI]] [[Category:AI safety]]
    1 KB (125 words) - 03:58, 26 April 2020
