Learning-complete

From Issawiki
Latest revision as of 05:02, 16 July 2021

An idea I've been toying with, in analogy with terms like "NP-complete" from complexity theory: some learning/teaching tasks seem to have a universality to them, such that once you've figured out how to solve that kind of teaching problem, you can automatically teach any specific thing.

More weakly, you can just think of these as some "models for learning" or ways you can think about learning.

Possible examples:

  • teaching method: "binary search" the student's mind by asking questions that keep getting harder; this tells you roughly how much they understand, and then you can ask more questions/give "bite-sized nuggets" of lessons/info to "feed" the student. I think this is close to the funnest way to learn (i.e. it minimizes boredom no matter what level of knowledge you start out with, because the lesson adjusts to your level). Related: exhaustive quizzing allows impatient learners to skip the reading.
  • So imagine we have a very accurate model that can return answers to questions like: "give me the most interesting problem I can solve in 1 hour". It seems plausible to me that you could learn a lot of math/econ/physics/whatever just by asking this question repeatedly and solving these problems. Especially if the model can take into account when you solved problems, so it will know when to re-ask questions or ask similar questions. This is basically like a generalization of spaced repetition systems (where SRS is like the giant lookup table version of this model).
  • as an oracle: you can think of the ideal teaching mechanism as like a Q&A oracle where you keep asking the thing that's on your mind until you get all the answers.
  • as a recreation of Bloom's two-sigma effect in a way that scales.
  • as an ML-like prediction problem: the teaching method is to just predict the next best problem you should solve, or the fact you are most likely to be about to forget.
  • learning-complete: a method/function you can call that can teach you arbitrary info in a way better than just reading/watching lectures. So I think "easiest problem you can't solve" is one, "question oracle" is another, etc.
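
The "binary search the student's mind" idea above can be sketched in code. This is a minimal illustrative sketch, not any real tutoring system's API: `locate_level` and `can_answer` are hypothetical names, and `can_answer` stands in for actually asking the student a question and grading the response.

```python
def locate_level(questions, can_answer):
    """Binary-search for the student's level.

    questions: list of questions sorted easiest -> hardest.
    can_answer(q) -> bool, standing in for asking the student q
    and checking their answer.

    Returns the index of the hardest question the student answered
    correctly, or -1 if they couldn't answer even the easiest one.
    Uses O(log n) questions instead of asking all n.
    """
    lo, hi = 0, len(questions) - 1
    hardest_solved = -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if can_answer(questions[mid]):
            hardest_solved = mid
            lo = mid + 1   # student handled this one; probe harder questions
        else:
            hi = mid - 1   # too hard; probe easier questions
    return hardest_solved

# Toy usage: questions of difficulty 0..9, and a student who can
# handle anything of difficulty <= 6.
questions = list(range(10))
level = locate_level(questions, lambda q: q <= 6)
```

Once the level is located, the teacher can start "feeding" bite-sized lessons just past index `level`, re-running the search as the student improves.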

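The "giant lookup table" version of the accurate-model bullet can also be sketched: a scheduler that picks the problem whose recall has decayed the most, subject to a time budget. Everything here is illustrative, not a real SRS implementation; the exponential forgetting curve and the record fields (`last_solved`, `half_life`, `est_minutes`) are assumptions made for the sketch.

```python
def next_problem(records, now, budget_minutes=60):
    """Pick the next problem to re-ask, generalizing spaced repetition.

    records: list of dicts with keys:
      'name'        -- problem identifier
      'last_solved' -- timestamp (seconds) when last solved
      'half_life'   -- seconds for estimated recall to halve (toy model)
      'est_minutes' -- estimated time to solve

    Returns the record that fits the time budget and has the lowest
    estimated recall under a toy exponential forgetting curve, or None.
    """
    candidates = [r for r in records if r["est_minutes"] <= budget_minutes]
    if not candidates:
        return None

    def recall(r):
        elapsed = now - r["last_solved"]
        return 0.5 ** (elapsed / r["half_life"])  # exponential decay

    return min(candidates, key=recall)
```

A richer model could replace the lookup-table records with learned predictions of interestingness and solve time, which is the generalization the bullet points at.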
Non-examples:

  • books -- I think it's quite hard to actually learn lots of things via books. Even people who are good at learning struggle a lot when it comes to learning things via books. Like, in some technical sense books are "complete" in that you can just write down arbitrary messages for the learner, but for practical human purposes it seems insufficient.
  • lectures -- same reason.

Reductions/X-complete analogy for learning: what problems are explanations equivalent to? E.g. the idea of "asking questions repeatedly".
