Oldest pages
Showing below up to 100 results in range #21 to #120.
- Competence gap (00:01, 30 May 2020)
- Lumpiness (06:53, 3 June 2020)
- One wrong number problem (06:54, 3 June 2020)
- Why ain'tcha better at math (08:51, 9 June 2020)
- Missing gear vs secret sauce (21:16, 9 June 2020)
- Personhood API vs therapy axis of interpersonal interactions (22:46, 9 June 2020)
- List of breakthroughs plausibly needed for AGI (07:11, 17 June 2020)
- Architecture (23:18, 23 June 2020)
- Narrow window argument against continuous takeoff (16:56, 24 June 2020)
- Progress in self-improvement (17:15, 24 June 2020)
- Goalpost for usefulness of HRAD work (20:17, 26 June 2020)
- List of success criteria for HRAD work (20:18, 26 June 2020)
- Something like realism about rationality (20:24, 26 June 2020)
- Website to aggregate solutions to textbook exercises (01:19, 30 June 2020)
- Kanzi (21:30, 30 June 2020)
- Missing gear for intelligence (21:42, 30 June 2020)
- Secret sauce for intelligence vs specialization in intelligence (23:01, 6 July 2020)
- Hardware overhang (20:58, 27 July 2020)
- Paperclip maximizer (23:29, 27 July 2020)
- Spoiler test of depth (22:18, 3 August 2020)
- Short-term preferences-on-reflection (23:00, 26 August 2020)
- Comparison of sexually transmitted diseases (19:33, 30 August 2020)
- Future planning (18:53, 7 September 2020)
- Meta-execution (19:02, 23 September 2020)
- My understanding of how IDA works (00:27, 6 October 2020)
- List of teams at OpenAI (05:20, 7 October 2020)
- Second species argument (04:22, 22 October 2020)
- Text to speech software (20:15, 9 November 2020)
- Summary of my beliefs (20:21, 11 November 2020)
- Pascal's mugging and AI safety (22:16, 17 November 2020)
- Popularity symbiosis (23:44, 25 November 2020)
- Carl Shulman (21:52, 28 November 2020)
- Existential win (01:02, 1 December 2020)
- Tao Analysis Solutions (01:39, 1 December 2020)
- Quotability vs ankifiability (21:10, 13 December 2020)
- Using spaced repetition to improve public discourse (02:30, 16 December 2020)
- Kasparov window (22:02, 4 January 2021)
- Analyzing disagreements (22:52, 8 February 2021)
- Resource overhang (03:19, 24 February 2021)
- Paul Christiano (23:00, 25 February 2021)
- HCH (23:03, 25 February 2021)
- Christiano's operationalization of slow takeoff (23:48, 25 February 2021)
- Agent foundations (19:08, 27 February 2021)
- Coherence and goal-directed agency discussion (19:09, 27 February 2021)
- Comparison of terms related to agency (19:09, 27 February 2021)
- Jessica Taylor (19:10, 27 February 2021)
- List of big discussions in AI alignment (19:10, 27 February 2021)
- Minimal AGI vs task AGI (19:10, 27 February 2021)
- Prosaic AI (19:10, 27 February 2021)
- Richard Ngo (19:11, 27 February 2021)
- Simple core of consequentialist reasoning (19:11, 27 February 2021)
- The Uncertain Future (19:12, 27 February 2021)
- Test (19:15, 27 February 2021)
- Whole brain emulation (19:16, 27 February 2021)
- Rapid capability gain vs AGI progress (19:17, 27 February 2021)
- Selection effect for who builds AGI (19:18, 27 February 2021)
- Deconfusion (19:18, 27 February 2021)
- Continuous takeoff (22:16, 1 March 2021)
- Hyperbolic growth (00:07, 2 March 2021)
- Soft-hard takeoff (01:43, 2 March 2021)
- Comparison of AI takeoff scenarios (00:50, 5 March 2021)
- AI takeoff (01:01, 5 March 2021)
- Importance of knowing about AI takeoff (02:13, 5 March 2021)
- Scaling hypothesis (00:45, 12 March 2021)
- Asymmetric institution (21:51, 12 March 2021)
- Counterfactual of dropping a seed AI into a world without other capable AI (20:51, 15 March 2021)
- Main Page (21:22, 19 March 2021)
- OpenAI (19:54, 22 March 2021)
- One-sentence summary card (21:01, 23 March 2021)
- Central node trick for remembering equivalent properties (21:04, 23 March 2021)
- Steam game buying algorithm (23:07, 25 March 2021)
- List of timelines for futuristic technologies (01:01, 26 March 2021)
- List of terms used to describe the intelligence of an agent (20:56, 26 March 2021)
- Stupid questions (20:58, 26 March 2021)
- Spaced proof review as a way to understand key insights in a proof (23:54, 26 March 2021)
- Different mental representations of mathematical objects is a blocker for an exploratory medium of math (02:27, 28 March 2021)
- AI safety is harder than most things (02:28, 28 March 2021)
- AI safety is not a community (02:28, 28 March 2021)
- AI safety lacks a space to ask stupid or ballsy questions (02:28, 28 March 2021)
- AI safety technical pipeline does not teach how to start having novel thoughts (02:28, 28 March 2021)
- Add the complete proof on proof cards to reduce friction when reviewing (02:29, 28 March 2021)
- Corrigibility may be undesirable (02:30, 28 March 2021)
- Debates shift bystanders' beliefs (02:30, 28 March 2021)
- Depictions of learning in The Blue Lagoon are awful (02:30, 28 March 2021)
- Discursive texts are difficult to ankify (02:31, 28 March 2021)
- Flag things to fix during review (02:32, 28 March 2021)
- Giving advice in response to generic questions is difficult but important (02:32, 28 March 2021)
- How doomed are ML safety approaches? (02:32, 28 March 2021)
- How meta should AI safety be? (02:33, 28 March 2021)
- Ignore Anki add-ons to focus on fundamentals (02:33, 28 March 2021)
- Is AI safety no longer a scenius? (02:34, 28 March 2021)
- It is difficult to find people to bounce ideas off of (02:34, 28 March 2021)
- It is difficult to get feedback on published work (02:34, 28 March 2021)
- Make Anki cards based on feedback you receive (02:34, 28 March 2021)
- Mass shift to technical AI safety research is suspicious (02:35, 28 March 2021)
- Newcomers in AI safety are silent about their struggles (02:35, 28 March 2021)
- Nobody understands what makes people snap into AI safety (02:35, 28 March 2021)
- Ongoing friendship and collaboration is important (02:35, 28 March 2021)
- Online question-answering services are unreliable (02:36, 28 March 2021)
- Spaced repetition prevents unrecalled unrecallables (02:37, 28 March 2021)