Desiderata for dissolving the question
- I think when most people think about philosophy topics, they just stop thinking about them once they no longer feel confused? Like, they don't start out with a bunch of desiderata for what counts as "solving the problem"; instead, they just go along reading/thinking until it feels like the confusion they had is cleared up.
- I think there are three things a cognitive reduction must do:
- 1. Provide the correct solution/mechanism, i.e. how the brain actually works. For free will, this would be explaining how the world is deterministic but our brains build a model of it on which we can run various hypothetical interventions, and how without all this machinery we would have output very different actions.
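- A minimal Python sketch of what this choice-making machinery might look like (all names and numbers here are hypothetical illustration, not any standard model):

```python
# Toy sketch: a fully deterministic agent that "chooses" by running
# hypothetical interventions on its internal world model.

def world_model(state, action):
    """The agent's deterministic model of how actions change the world."""
    return state + {"left": -1, "stay": 0, "right": 1}[action]

def utility(state):
    """How much the agent likes a given world state (it wants to be at 3)."""
    return -abs(state - 3)

def choose(state):
    """Evaluate each hypothetical action inside the model, output the best.
    Nothing here is indeterministic, yet the computation genuinely ranges
    over alternatives -- and without this simulation step the agent would
    have output a very different action."""
    return max(["left", "stay", "right"],
               key=lambda a: utility(world_model(state, a)))

print(choose(0))  # -> "right"
```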
- 2. Explain the cognitive cause of the original error. For free will, this would be Eliezer's diagram of how the "me" node and "physics" node compete to influence the "future" node. This part should explain why a mind that was built that way would have the same confusion about the phenomenon that humans have. This is like Dennett's idea of heterophenomenology: for consciousness, this would have to explain human remarks such as "but why does it feel like anything?"
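- The two causal pictures in that diagram can be written down as edge lists (a hypothetical toy rendering, following the diagram as I remember it):

```python
# The confused picture: "me" is modeled as a separate root cause that
# competes with the "physics" node to determine the "future" node.
confused_model = {
    "me":      ["future"],
    "physics": ["future"],
}

# The corrected picture: physics determines everything, and "me" is one
# of its subprocesses (so "me" influencing the future just is physics at work).
embedded_model = {
    "physics": ["me", "future"],
    "me":      ["future"],
}
```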
- There are actually several different resolutions at which you can do this:
- toy model that exhibits the same cognition
- a slightly more realistic model, based on e.g. results from neuroscience, that exhibits the cognition
- a complete "gears level" understanding, i.e. something that would let you build a human (or that component of a human) from scratch
- 3. Explain the evolutionary cause. This should explain why a mind that generates the error is adaptive, or else why evolution spat out such minds despite the error not being so great. How did the intuition get there in the first place?
- For (3), you could also distinguish between explaining the evolution of the error and explaining the evolution of the mechanism from (1). So e.g. for free will, (3) could mean explaining the evolution of the choice-making machinery of (1) (obviously useful!), or it could mean explaining the evolution of the cognitive error of (2) (not obviously useful; requires more explanation).
- In the case of free will, there seems to be a confusion between different levels (model vs reality), what Stanovich calls decoupling. It seems possible that this error results from a lack of computation, or from sloppiness at distinguishing the levels? But why does it happen so often when thinking about cognitive reductions and not, say, about recursive programs? More generally, we seem to think of ourselves as Cartesian-by-default rather than embedded: as if we are acting on a world that is separate from us, as if we are "bigger than the universe" in the sense of being able to contain all of it in our minds. But why does this error happen? One reason might just be that embedded agency is too hard to think about (cf. how basically all non-MIRI work on agency is Cartesian).
- I guess Eliezer does discuss some of this, when he talks about the mind projection fallacy. For some reason, the human mind, when it tries to explain why something is the way it is, "projects" the confusion onto the object, instead of saying "the object is non-mysterious, but I am confused as to how it works, and the confusion is a property of my mental state". But I don't think Eliezer quite explains why the human mind works this way -- why evolution stumbled onto such minds.
- I think Drescher talks about how when you resolve a paradox, you need to explain both the correct solution (so (1) above) and also why the wrong solution is wrong (which is like (2) above). But I'm saying you have to go one step further and explain (3) as well. I think Eliezer's free will stuff does (1) and (2) but not (3).
- Looking at Good and Real, Drescher calls these countering and invalidating. However, it looks like invalidating is slightly broader than (2): one possible way to invalidate is to explain how the brain produces the illusion, but you might also invalidate by going through an opponent's argument point by point to locate the first error.
- A further complication for consciousness: when talking about why consciousness evolved, people sometimes confuse access consciousness with phenomenal consciousness. The evolution of the former is easy to explain but the latter is harder.
- Why does consciousness feel different from other cognitive reductions?
- I think the reason is that in the case of consciousness, even if we roughly implemented a program that does things the way the human brain supposedly does them (e.g. Drescher's Cartesian Camcorder), it would not correspond to our intuitive feeling of consciousness. Whereas with e.g. free will, we can see that even a simple chess-playing program has free will in the relevant sense.
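- To make the asymmetry concrete, here is a very loose Python gesture at a Camcorder-style recorder (my own hypothetical toy, not Drescher's actual proposal): it gets something like access consciousness, in the sense of being able to report on its own noticing, but nothing in it answers "why does it feel like anything?"

```python
# Hypothetical toy: a system that records its own perceptual events and
# can re-present them to itself, so it can report on its own noticing.

class Recorder:
    def __init__(self):
        self.tape = []

    def perceive(self, stimulus):
        self.tape.append(f"saw {stimulus}")  # record the perception itself

    def introspect(self):
        # Play the tape back to further processing: the system can now
        # say "I noticed that I saw red" -- access consciousness.
        return [f"noticed that I {event}" for event in self.tape]

r = Recorder()
r.perceive("red")
print(r.introspect())  # -> ['noticed that I saw red']
# Nothing in this program addresses "but why does it feel like anything?"
```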
- Another breakdown:
- Understand what the machine is doing
- Understand why evolution would build such a machine
- Understand how the machine makes the error
- Understand why evolution would build a machine that makes such an error