User contributions
- 00:18, 6 June 2020 (diff | hist) . . (+215) . . N Make new cards when you get stuck (Created page with "When doing spaced proof review or even during normal Anki reviews, if you get stuck on a particular thing, that's a good thing to turn into a new (small) An...")
- 06:53, 3 June 2020 (diff | hist) . . (+191) . . N Missing gear vs secret sauce (Created page with "I want to distinguish between the following two framings: * missing gear/one wrong number function: * secret sauce for intelligence/small number of breakthroughs: Cate...")
- 21:11, 1 June 2020 (diff | hist) . . (+129) . . N List of success criteria for HRAD work (Created page with " ==See also== * Something like realism about rationality * Goalpost for usefulness of HRAD work Category:AI safety")
- 01:03, 30 May 2020 (diff | hist) . . (+88) . . N AI timelines (Created page with "For now, see List of disagreements in AI safety#AI timelines Category:AI safety")
- 22:34, 29 May 2020 (diff | hist) . . (+409) . . N Unreliability of online question-answering services makes it emotionally taxing to write up questions (Created page with "Online question-answering services are unreliable, and this unreliability makes it emotionally taxing to write up questions because you aren't sure if the time you spend w...")
- 22:30, 29 May 2020 (diff | hist) . . (+554) . . N Online question-answering services are unreliable (Created page with "In my experience,<ref>https://math.stackexchange.com/users/35525/riceissa?tab=questions</ref><ref>https://stats.stackexchange.com/users/273265/riceissa?tab=questions</ref><ref...")
- 08:59, 27 May 2020 (diff | hist) . . (+526) . . N Ignore Anki add-ons to focus on fundamentals (Created page with "There are many add-ons available for Anki. So far I've ignored all of them; never in the past two years have I installed an add-on for Anki. It took me over a year I think...")
- 04:22, 27 May 2020 (diff | hist) . . (+1,828) . . N Competence gap (Created page with "to what extent paul's approach looks like humans trying to align arbitrarily large black boxes ("corralling hostile superintelligences") vs humans+pretty smart aligned AIs try...")
- 01:07, 27 May 2020 (diff | hist) . . (+464) . . N Corrigibility (Created page with "'''Corrigibility''' is a term used in AI safety with multiple/unclear meanings. I think the term was originally used by MIRI to mean something like an AI that allowed hum...")
- 00:48, 27 May 2020 (diff | hist) . . (+209) . . N Highly reliable agent designs (Created page with "'''Highly reliable agent designs''' is the kind of pure-math research related to agency, etc. that is done at MIRI. List of disagreements in AI safety#Highly reliable agent...")
- 00:47, 27 May 2020 (diff | hist) . . (+43) . . N HRAD (Redirected page to Highly reliable agent designs) (current) (Tag: New redirect)
- 00:47, 27 May 2020 (diff | hist) . . (+472) . . N Goalpost for usefulness of HRAD work (Created page with "There's a pattern I see where: * people advocating HRAD research bring up historical cases like Turing, Shannon, etc. where formalization worked well * people arguing aga...")
- 05:55, 26 May 2020 (diff | hist) . . (+61) . . N Hardware overhang (Created page with "==See also== * Resource overhang Category:AI safety")
- 02:45, 26 May 2020 (diff | hist) . . (+461) . . N Resource overhang (Created page with " ==Resource overhang and AI takeoff== Whether we are already in hardware overhang / other "resource bonanza". So whether we are in overhang depends on whether future earl...")
- 20:25, 23 May 2020 (diff | hist) . . (+43) . . N Quick review (Redirected page to Bury cards to speed up review) (current) (Tag: New redirect)
- 03:51, 22 May 2020 (diff | hist) . . (+63) . . N Soft takeoff keyhole (Redirected page to Narrow window argument against continuous takeoff) (current) (Tag: New redirect)
- 03:50, 22 May 2020 (diff | hist) . . (+63) . . N Continuous takeoff keyhole (Redirected page to Narrow window argument against continuous takeoff) (current) (Tag: New redirect)
- 03:44, 22 May 2020 (diff | hist) . . (+1,031) . . N Narrow window argument against continuous takeoff (Created page with " "When you fold a whole chain of differential equations in on itself like this, it should either peter out rapidly as improvements fail to yield further improvements, or else...")
- 03:27, 22 May 2020 (diff | hist) . . (+116) . . N Anki deck options (Created page with "* Deck options for proof cards * Incremental reading in Anki#My deck options Category:Spaced repetition")
- 01:34, 22 May 2020 (diff | hist) . . (+65) . . N Incremental reading (Created page with "* Incremental reading in Anki Category:Spaced repetition")
- 01:34, 22 May 2020 (diff | hist) . . (+506) . . N Spaced repetition allows graceful deprecation of experiments (Created page with "e.g. my cloze deletion "read only" cards weren't so useful and i eventually switched to a dedicated incremental reading deck. but i can still keep reviewing my old cloze c...")
- 21:55, 21 May 2020 (diff | hist) . . (+327) . . N List of breakthroughs plausibly needed for AGI (Created page with "* Looking at things like ''The MIT Encyclopedia of the Cognitive Sciences'' and Judea Pearl's work on causality and trying to estimate how many insights are required to build...")
- 00:41, 21 May 2020 (diff | hist) . . (+288) . . N Intra-personal comparison test (Created page with "The '''intra-personal comparison test''' is a test of people watching skill. The test asks: given two pieces of work of varying quality produced by a single individual, ca...") (current)
- 00:38, 21 May 2020 (diff | hist) . . (+273) . . N Inter-personal comparison test (Created page with "The '''inter-personal comparison test''' is a test of people watching skill. Given two people who are superficially similar but in fact very different in quality, can some...")
- 00:31, 21 May 2020 (diff | hist) . . (+42) . . N People are bad (Created page with "people are bad, mayne Category:Truths") (current)
- 09:41, 20 May 2020 (diff | hist) . . (+573) . . N Secret sauce for intelligence vs specialization in intelligence (Created page with "what is the relationship between the "you can't specialize in 'intelligence'" argument and "there are a small number of insights for AGI" (a.k.a. secret sauce for intelligen...")
- 09:27, 20 May 2020 (diff | hist) . . (+29) . . N Kasparov Window (Issa moved page Kasparov Window to Kasparov window over redirect) (current) (Tag: New redirect)
- 09:17, 20 May 2020 (diff | hist) . . (+696) . . N Science argument (Created page with "The '''science argument''' says that science is this general "architectural insight" which allowed humans to have much more control over the world, and that w...")
- 07:59, 20 May 2020 (diff | hist) . . (+229) . . N Content sharing between AIs (Created page with "Part of the discussion about content vs architecture. some common points that get brought up: * content sharing is common in software engineering in general * conten...") (current)
- 07:27, 20 May 2020 (diff | hist) . . (+448) . . N List of thought experiments in AI safety (Created page with "list of thought experiments: * dropping seed AGI into world without AGI; see Counterfactual of dropping a seed AI into a world without other capable AI * the analogy of t...") (current)
- 01:33, 20 May 2020 (diff | hist) . . (+620) . . N Mass shift to technical AI safety research is suspicious (Created page with "Back in 2008-2011 when people were talking about AI safety on LessWrong, there were multiple "singularity strategies" that were proposed, of which technical AI safety was...")
- 02:10, 19 May 2020 (diff | hist) . . (+87) . . N Generic advice is difficult to give but also important (Issa moved page Generic advice is difficult to give but also important to Giving advice in response to generic questions is difficult but important) (current) (Tag: New redirect)
- 00:29, 19 May 2020 (diff | hist) . . (+275) . . N Ongoing friendship and collaboration is important (Created page with "Ongoing friendship and collaboration is to be contrasted with one-off replies you might get from people if you post something. I think one of the reasons that AI safety is n...")
- 00:16, 19 May 2020 (diff | hist) . . (+29) . . N It is difficult to get feedback on published work (Created page with " Category:AI safety meta")
- 23:30, 18 May 2020 (diff | hist) . . (+98) . . N It is difficult to find people to bounce ideas off of (Created page with "The people who can give the most useful feedback tend to be very busy Category:AI safety meta")
- 23:24, 18 May 2020 (diff | hist) . . (+1,177) . . N Newcomers in AI safety are silent about their struggles (Created page with "It's actually pretty hard to find people openly complaining about how to get involved in AI safety. You can find some random comments, and there are occasional Facebook thread...")
- 23:05, 18 May 2020 (diff | hist) . . (+825) . . N Giving advice in response to generic questions is difficult but important (Created page with "I've seen a few times people saying things like "Don't contact me with generic questions like 'What should I work on?' because I can't help you. I can answer more straightforw...")
- 21:51, 18 May 2020 (diff | hist) . . (+1,008) . . N Nobody understands what makes people snap into AI safety (Created page with "Getting even slightly interested in AI safety is hard. You need to have a mind that can understand other important things like cryonics/anti-aging, Tegmark multiverse, con...")
- 21:41, 18 May 2020 (diff | hist) . . (+464) . . N AI safety is harder than most things (Created page with "I recently got a visceral sense that '''AI safety is harder than most things''' when I started writing my [https://taoanalysis.wordpress.com/ Tao Analysis Solutions] blog. Aft...")
- 21:32, 18 May 2020 (diff | hist) . . (+1,219) . . N My take on RAISE (Created page with "From October 2018: * I'm generally optimistic about making things easier to understand/distilling things. So I like this general area that RAISE is working in. * I'm not sure...")
- 21:21, 18 May 2020 (diff | hist) . . (+96) . . N Timeline of my involvement in AI safety (Created page with "{| class="sortable wikitable" |- ! Year !! Month !! Event |- | a |} Category:AI safety meta")
- 21:10, 18 May 2020 (diff | hist) . . (+904) . . N AI safety lacks a space to ask stupid or ballsy questions (Created page with "(this page assumes that asking stupid or ballsy questions is important for learning/making intellectual progress) I think the way that voting and crossposting work on LessW...")
- 20:59, 18 May 2020 (diff | hist) . . (+2,098) . . N There is pressure to rush into a technical agenda (Created page with "AI safety has a weird dynamic going on where: * Most likely, only a single technical agenda will actually be useful. The others will have been good in expectation (in like a...")
- 20:38, 18 May 2020 (diff | hist) . . (+866) . . N Mixed messaging regarding independent thinking (Created page with "I think the AI safety community and effective altruism in general has some mixed messaging going on regarding whether it's good to be an "independent thinker". On one...") (current)
- 20:30, 18 May 2020 (diff | hist) . . (+321) . . N AI safety is not a community (Created page with "When I am feeling especially cynical and upset, it feels like '''AI safety is not a community'''. What do I mean by this? Basically, I think I've been in communities before, a...")
- 20:23, 18 May 2020 (diff | hist) . . (+319) . . N AI safety technical pipeline does not teach how to start having novel thoughts (Created page with "Currently, the AI safety community does not have an explicit mechanism for teaching new people how to start having novel thoughts. The implicit hope seems to be something...")
- 20:11, 18 May 2020 (diff | hist) . . (+71) . . N Category:AI safety meta (Created page with "For thoughts about the AI safety community and doing work in AI safety.") (current)
- 20:06, 18 May 2020 (diff | hist) . . (+73) . . N My current thoughts on the technical AI safety pipeline (outside academia) (Created page with "* There is room for something like RAISE Category:AI safety meta")
- 20:05, 18 May 2020 (diff | hist) . . (+365) . . N There is room for something like RAISE (Created page with "Self-studying all of the technical prerequisites for technical AI safety research is hard. The most that people new to the field get are a list of textbooks. I think there...")
- 01:10, 18 May 2020 (diff | hist) . . (+61) . . N Intelligence amplification (Created page with "https://nickbostrom.com/cognitive.pdf Category:AI safety") (current)