Oldest pages
Showing below up to 50 results in range #71 to #120.
- Simple core of consequentialist reasoning (19:11, 27 February 2021)
- The Uncertain Future (19:12, 27 February 2021)
- Test (19:15, 27 February 2021)
- Whole brain emulation (19:16, 27 February 2021)
- Rapid capability gain vs AGI progress (19:17, 27 February 2021)
- Selection effect for who builds AGI (19:18, 27 February 2021)
- Deconfusion (19:18, 27 February 2021)
- Continuous takeoff (22:16, 1 March 2021)
- Hyperbolic growth (00:07, 2 March 2021)
- Soft-hard takeoff (01:43, 2 March 2021)
- Comparison of AI takeoff scenarios (00:50, 5 March 2021)
- AI takeoff (01:01, 5 March 2021)
- Importance of knowing about AI takeoff (02:13, 5 March 2021)
- Scaling hypothesis (00:45, 12 March 2021)
- Asymmetric institution (21:51, 12 March 2021)
- Counterfactual of dropping a seed AI into a world without other capable AI (20:51, 15 March 2021)
- Main Page (21:22, 19 March 2021)
- OpenAI (19:54, 22 March 2021)
- One-sentence summary card (21:01, 23 March 2021)
- Central node trick for remembering equivalent properties (21:04, 23 March 2021)
- Steam game buying algorithm (23:07, 25 March 2021)
- List of timelines for futuristic technologies (01:01, 26 March 2021)
- List of terms used to describe the intelligence of an agent (20:56, 26 March 2021)
- Stupid questions (20:58, 26 March 2021)
- Spaced proof review as a way to understand key insights in a proof (23:54, 26 March 2021)
- Different mental representations of mathematical objects is a blocker for an exploratory medium of math (02:27, 28 March 2021)
- AI safety is harder than most things (02:28, 28 March 2021)
- AI safety is not a community (02:28, 28 March 2021)
- AI safety lacks a space to ask stupid or ballsy questions (02:28, 28 March 2021)
- AI safety technical pipeline does not teach how to start having novel thoughts (02:28, 28 March 2021)
- Add the complete proof on proof cards to reduce friction when reviewing (02:29, 28 March 2021)
- Corrigibility may be undesirable (02:30, 28 March 2021)
- Debates shift bystanders' beliefs (02:30, 28 March 2021)
- Depictions of learning in The Blue Lagoon are awful (02:30, 28 March 2021)
- Discursive texts are difficult to ankify (02:31, 28 March 2021)
- Flag things to fix during review (02:32, 28 March 2021)
- Giving advice in response to generic questions is difficult but important (02:32, 28 March 2021)
- How doomed are ML safety approaches? (02:32, 28 March 2021)
- How meta should AI safety be? (02:33, 28 March 2021)
- Ignore Anki add-ons to focus on fundamentals (02:33, 28 March 2021)
- Is AI safety no longer a scenius? (02:34, 28 March 2021)
- It is difficult to find people to bounce ideas off of (02:34, 28 March 2021)
- It is difficult to get feedback on published work (02:34, 28 March 2021)
- Make Anki cards based on feedback you receive (02:34, 28 March 2021)
- Mass shift to technical AI safety research is suspicious (02:35, 28 March 2021)
- Newcomers in AI safety are silent about their struggles (02:35, 28 March 2021)
- Nobody understands what makes people snap into AI safety (02:35, 28 March 2021)
- Ongoing friendship and collaboration is important (02:35, 28 March 2021)
- Online question-answering services are unreliable (02:36, 28 March 2021)
- Spaced repetition prevents unrecalled unrecallables (02:37, 28 March 2021)