Oldest pages
Showing below up to 50 results in range #91 to #140.
- List of timelines for futuristic technologies (01:01, 26 March 2021)
- List of terms used to describe the intelligence of an agent (20:56, 26 March 2021)
- Stupid questions (20:58, 26 March 2021)
- Spaced proof review as a way to understand key insights in a proof (23:54, 26 March 2021)
- Different mental representations of mathematical objects is a blocker for an exploratory medium of math (02:27, 28 March 2021)
- AI safety is harder than most things (02:28, 28 March 2021)
- AI safety is not a community (02:28, 28 March 2021)
- AI safety lacks a space to ask stupid or ballsy questions (02:28, 28 March 2021)
- AI safety technical pipeline does not teach how to start having novel thoughts (02:28, 28 March 2021)
- Add the complete proof on proof cards to reduce friction when reviewing (02:29, 28 March 2021)
- Corrigibility may be undesirable (02:30, 28 March 2021)
- Debates shift bystanders' beliefs (02:30, 28 March 2021)
- Depictions of learning in The Blue Lagoon are awful (02:30, 28 March 2021)
- Discursive texts are difficult to ankify (02:31, 28 March 2021)
- Flag things to fix during review (02:32, 28 March 2021)
- Giving advice in response to generic questions is difficult but important (02:32, 28 March 2021)
- How doomed are ML safety approaches? (02:32, 28 March 2021)
- How meta should AI safety be? (02:33, 28 March 2021)
- Ignore Anki add-ons to focus on fundamentals (02:33, 28 March 2021)
- Is AI safety no longer a scenius? (02:34, 28 March 2021)
- It is difficult to find people to bounce ideas off of (02:34, 28 March 2021)
- It is difficult to get feedback on published work (02:34, 28 March 2021)
- Make Anki cards based on feedback you receive (02:34, 28 March 2021)
- Mass shift to technical AI safety research is suspicious (02:35, 28 March 2021)
- Newcomers in AI safety are silent about their struggles (02:35, 28 March 2021)
- Nobody understands what makes people snap into AI safety (02:35, 28 March 2021)
- Ongoing friendship and collaboration is important (02:35, 28 March 2021)
- Online question-answering services are unreliable (02:36, 28 March 2021)
- Spaced repetition prevents unrecalled unrecallables (02:37, 28 March 2021)
- Stream of low effort questions helps with popularity (02:38, 28 March 2021)
- There is pressure to rush into a technical agenda (02:38, 28 March 2021)
- Unreliability of online question-answering services makes it emotionally taxing to write up questions (02:39, 28 March 2021)
- Use paper during spaced repetition reviews (02:39, 28 March 2021)
- Use temporary separate Anki decks to learn new cards based on priority (02:39, 28 March 2021)
- Will it be possible for humans to detect an existential win? (02:40, 28 March 2021)
- Will there be significant changes to the world prior to some critical AI capability threshold being reached? (02:40, 28 March 2021)
- Existing implementations of card sharing have nontrivial overhead (17:21, 29 March 2021)
- Value learning (04:53, 30 March 2021)
- Combinatorial explosion in math (20:31, 30 March 2021)
- List of technical AI alignment agendas (21:29, 2 April 2021)
- Simple core (21:32, 2 April 2021)
- Late singularity (21:33, 4 April 2021)
- AI timelines (01:36, 5 April 2021)
- Laplace's rule of succession argument for AI timelines (02:04, 5 April 2021)
- Statistical analysis of expert timelines argument for AI timelines (05:10, 9 April 2021)
- Convergent evolution of values (17:50, 9 April 2021)
- Interacting with copies of myself (20:40, 12 April 2021)
- Selection effect for successful formalizations (20:41, 12 April 2021)
- Setting up Windows (06:34, 20 April 2021)
- The Secret of Psalm 46 outline (21:01, 23 April 2021)