Pages without language links

The following pages do not link to other language versions.

Showing below up to 329 results in range #1 to #329.

  1. 3Blue1Brown
  2. AI prepping
  3. AI safety field consensus
  4. AI safety is harder than most things
  5. AI safety is not a community
  6. AI safety lacks a space to ask stupid or ballsy questions
  7. AI safety technical pipeline does not teach how to start having novel thoughts
  8. AI takeoff
  9. AI timelines
  10. AI will solve everything argument against AI safety
  11. Add all permutations of a card to prevent pattern-matching
  12. Add easy problems as cards with large graduating interval
  13. Add the complete proof on proof cards to reduce friction when reviewing
  14. Agent foundations
  15. Aligning smart AI using slightly less smart AI
  16. AlphaGo
  17. AlphaGo as evidence of discontinuous takeoff
  18. Analyzing disagreements
  19. Andy Matuschak
  20. Anki
  21. Anki deck options
  22. Anki deck philosophy
  23. Anki reviews are more fun on mobile
  24. Application of functional updateless timeless decision theory to everyday life
  25. Architecture
  26. Are due counts harmful?
  27. Asymmetric institution
  28. Asynchronous support
  29. Big card
  30. Big cards can be good for mathematical discovery
  31. Booster card
  32. Braid
  33. Braid for math
  34. Bury effortful cards to speed up review
  35. Busy life periods and spaced inbox
  36. Can spaced repetition interfere with internal sense of relevance?
  37. Can the behavior of approval-direction be undefined or random?
  38. Card sharing
  39. Card sharing allows less valuable cards to be created
  40. Cards created by oneself can be scheduled more aggressively
  41. Carl Shulman
  42. Central node trick for remembering equivalent properties
  43. Changing selection pressures argument
  44. Choosing problems for spaced proof review
  45. Christiano's operationalization of slow takeoff
  46. Cognitive biases that are opposites of each other
  47. Coherence and goal-directed agency discussion
  48. Combinatorial explosion in math
  49. Comparison of AI takeoff scenarios
  50. Comparison of pedagogical scenes
  51. Comparison of sexually transmitted diseases
  52. Comparison of terms related to agency
  53. Competence gap
  54. Content sharing between AIs
  55. Continually make new cards
  56. Continuous takeoff
  57. Convergent evolution of values
  58. Corrigibility
  59. Corrigibility may be undesirable
  60. Counterfactual of dropping a seed AI into a world without other capable AI
  61. Creative forgetting
  62. Credit card research 2021
  63. Dealing with bad problems in spaced proof review
  64. Debates shift bystanders' beliefs
  65. Deck options for proof cards
  66. Deck options for small cards
  67. Deconfusion
  68. Definitions last
  69. Deliberate practice for learning proof-based math
  70. Depictions of learning in The Blue Lagoon are awful
  71. Desiderata for dissolving the question
  72. Different mental representations of mathematical objects is a blocker for an exploratory medium of math
  73. Different senses of claims about AGI
  74. Difficulty of AI alignment
  75. Discontinuities in usefulness of whole brain emulation technology
  76. Discovery fiction
  77. Discursive texts are difficult to ankify
  78. Distillation is not enough
  79. Do an empty review of proof cards immediately after adding to prevent backlog
  80. Doomer argument against AI safety
  81. Dual ratings for spaced inbox
  82. Duolingo
  83. Duolingo does repetition at the lesson level
  84. Duolingo for math
  85. Emotional difficulties of AI safety research
  86. Emotional difficulties of spaced repetition
  87. Empty review
  88. Encoding dependence problem
  89. Equivalence classes of prompts
  90. Evolution
  91. Exhaustive quizzing allows impatient learners to skip the reading
  92. Existential win
  93. Existing implementations of card sharing have nontrivial overhead
  94. Expert response heuristic for prompt writing
  95. Explanation science
  96. Explorable explanation
  97. Explosive aftermath
  98. Fake motivation
  99. Feeling like a perpetual student in a subject due to spaced repetition
  100. Feynman technique fails when existing explanations are bad
  101. Finding the right primitives for spaced repetition responses
  102. Finiteness assumption in explorable media
  103. Flag things to fix during review
  104. Fractally misfit
  105. Fractional progress argument for AI timelines
  106. Future planning
  107. Giving advice in response to generic questions is difficult but important
  108. Goalpost for usefulness of HRAD work
  109. HCH
  110. Hardware-driven vs software-driven progress
  111. Hardware argument for AI timelines
  112. Hardware overhang
  113. Highly reliable agent designs
  114. Hnous927
  115. How doomed are ML safety approaches?
  116. How meta should AI safety be?
  117. How similar are human brains to chimpanzee brains?
  118. Human safety problem
  119. Hyperbolic growth
  120. If you want to succeed in the video games industry
  121. Ignore Anki add-ons to focus on fundamentals
  122. Importance of knowing about AI takeoff
  123. Improvement curve for good people
  124. Incremental reading
  125. Incremental reading in Anki
  126. Instruction manuals vs giving the answers
  127. Integration card
  128. Intelligence amplification
  129. Inter-personal comparison test
  130. Interacting with copies of myself
  131. Interaction reversal between knowledge-to-be-memorized and ideas-to-be-developed
  132. Intra-personal comparison test
  133. Is AI safety no longer a scenius?
  134. It is difficult to find people to bounce ideas off of
  135. It is difficult to get feedback on published work
  136. Iterated amplification
  137. Iteration cadence for spaced repetition experiments
  138. Jelly no Puzzle
  139. Jessica Taylor
  140. Jonathan Blow
  141. Kanzi
  142. Kasparov window
  143. Laplace's rule of succession argument for AI timelines
  144. Large graduating interval as a way to prevent pattern-matching
  145. Large graduating interval as substitute for putting effort into making atomic cards
  146. Late 2021 MIRI conversations
  147. Late singularity
  148. Learning-complete
  149. Linked list proof card
  150. List of AI safety projects I could work on
  151. List of arguments against working on AI safety
  152. List of big discussions in AI alignment
  153. List of breakthroughs plausibly needed for AGI
  154. List of critiques of iterated amplification
  155. List of disagreements in AI safety
  156. List of experiments with Anki
  157. List of interesting search engines
  158. List of men by number of sons, daughters, and wives
  159. List of people who have thought a lot about spaced repetition
  160. List of reasons something isn't popular or successful
  161. List of success criteria for HRAD work
  162. List of teams at OpenAI
  163. List of technical AI alignment agendas
  164. List of techniques for making small cards
  165. List of techniques for managing working memory in explanations
  166. List of terms used to describe the intelligence of an agent
  167. List of thought experiments in AI safety
  168. List of timelines for futuristic technologies
  169. Live math video
  170. Lumpiness
  171. MIRI vs Paul research agenda hypotheses
  172. Main Page
  173. Maintaining habits is hard, and spaced repetition is a habit
  174. Make Anki cards based on feedback you receive
  175. Make new cards when you get stuck
  176. Managing micro-movements in learning
  177. Mapping mental motions to parts of a spaced repetition algorithm
  178. Mass shift to technical AI safety research is suspicious
  179. Medium that reveals flaws
  180. Meta-execution
  181. Michael Nielsen
  182. Minimal AGI
  183. Minimal AGI vs task AGI
  184. Missing gear for intelligence
  185. Missing gear vs secret sauce
  186. Mixed messaging regarding independent thinking
  187. My beginner incremental reading questions
  188. My current thoughts on the technical AI safety pipeline (outside academia)
  189. My take on RAISE
  190. My understanding of how IDA works
  191. Narrow vs broad cognitive augmentation
  192. Narrow window argument against continuous takeoff
  193. Newcomers in AI safety are silent about their struggles
  194. Nobody understands what makes people snap into AI safety
  195. Number of relevant actors around the time of creation of AGI
  196. One-sentence summary card
  197. One wrong number problem
  198. Ongoing friendship and collaboration is important
  199. Online question-answering services are unreliable
  200. Open-ended questions are common in real life
  201. OpenAI
  202. Optimal unlocking mechanism for booster cards is unclear
  203. Page template
  204. Paperclip maximizer
  205. Pascal's mugging and AI safety
  206. Paul Christiano
  207. People are bad
  208. People watching
  209. Personhood API vs therapy axis of interpersonal interactions
  210. Philosophical difficulty
  211. Physical vs digital clutter
  212. Piotr Wozniak
  213. Pivotal act
  214. Politicization of AI
  215. Popularity symbiosis
  216. Potpourri hypothesis for math education
  217. Probability and statistics as fields with an exploratory medium
  218. Progress in self-improvement
  219. Proof card
  220. Prosaic AI
  221. Quotability vs ankifiability
  222. Rapid capability gain vs AGI progress
  223. Reference class forecasting on human achievements argument for AI timelines
  224. Repetition granularity
  225. Representing impossibilities
  226. Resource overhang
  227. Reverse side card for everything
  228. Richard Ngo
  229. Robin Hanson
  230. Scaling hypothesis
  231. Scenius
  232. Science argument
  233. Second species argument
  234. Secret sauce for intelligence
  235. Secret sauce for intelligence vs specialization in intelligence
  236. Selection effect for successful formalizations
  237. Selection effect for who builds AGI
  238. Self-graded prompts made for others must provide guidance for grading
  239. Setting up Windows
  240. Short-term preferences-on-reflection
  241. Should booster cards be marked as new?
  242. Simple core
  243. Simple core of consequentialist reasoning
  244. Single-architecture generality
  245. Single-model generality
  246. Small card
  247. Snoozing epicycle
  248. Soft-hard takeoff
  249. Something like realism about rationality
  250. Soren Bjornstad
  251. Spaced everything
  252. Spaced inbox ideas
  253. Spaced inbox review should not be completionist or obligatory
  254. Spaced proof review
  255. Spaced proof review as a way to invent novel proofs
  256. Spaced proof review as a way to understand key insights in a proof
  257. Spaced proof review is not about memorizing proofs
  258. Spaced proof review routine
  259. Spaced repetition
  260. Spaced repetition allows graceful deprecation of experiments
  261. Spaced repetition and cleaning one's room
  262. Spaced repetition as generator of questions
  263. Spaced repetition as soft alarm clock
  264. Spaced repetition constantly reminds one of inadequacies
  265. Spaced repetition is not about memorization
  266. Spaced repetition is useful because most knowledge is sparsely applicable
  267. Spaced repetition prevents unrecalled unrecallables
  268. Spaced repetition response as chat message or chat reaction
  269. Spaced repetition world
  270. Spaced writing inbox
  271. Spoiler test of depth
  272. Statistical analysis of expert timelines argument for AI timelines
  273. Steam game buying algorithm
  274. Stream of low effort questions helps with popularity
  275. Stupid questions
  276. Sudden emergence
  277. Summary of my beliefs
  278. SuperMemo
  279. SuperMemo shortcuts
  280. Switching costs of various kinds of software
  281. Tao Analysis Flashcards
  282. Tao Analysis I exercise count
  283. Tao Analysis Solutions
  284. Task-dependent diversity
  285. Test
  286. Text to speech software
  287. Textbook test for AI theory
  288. The Hour I First Believed
  289. The Precipice notes
  290. The Secret of Psalm 46
  291. The Secret of Psalm 46 outline
  292. The Sequences vs evergreen notes
  293. The Uncertain Future
  294. The Witness
  295. The mathematics community has no clear standards for what a mathematician should know
  296. There is pressure to rush into a technical agenda
  297. There is room for something like RAISE
  298. Thinking Mathematics
  299. Timeline of my involvement in AI safety
  300. Tinkering in math requires loading the situation into working memory
  301. Tips for reviving a spaced repetition practice
  302. Tricky examples in basic probability
  303. Tutoring heuristic for prompt writing
  304. UDASSA
  305. Unbounded working memory assumption in explanations
  306. Uninsightful articles can seem insightful due to unintentional spaced repetition
  307. Unintended consequences of AI safety advocacy argument against AI safety
  308. Unreliability of online question-answering services makes it emotionally taxing to write up questions
  309. Use paper during spaced repetition reviews
  310. Use temporary separate Anki decks to learn new cards based on priority
  311. Using Anki for math
  312. Using spaced repetition to improve public discourse
  313. Value learning
  314. Video games allow immediate exploration
  315. Video games comparison to math
  316. Vow of silence
  317. We still don't know how to systematically write great word explanations
  318. Website to aggregate solutions to textbook exercises
  319. Wei Dai
  320. Weird recursion
  321. What counts as good motivation?
  322. What makes a word explanation good?
  323. What would a vow of silence look like for math?
  324. Whole brain emulation
  325. Why ain'tcha better at math
  326. Will it be possible for humans to detect an existential win?
  327. Will there be significant changes to the world prior to some critical AI capability threshold being reached?
  328. Word explanations are already great
  329. You don't need to eat your own dogfood in explanation science