(See all posts in this series here.)
I conclude with Chapters 6 and 7 of the book, which apply the theory to reasoning and to introspecting consciousness. Investigating these as forms of attending, as mental actions, proves illuminating.
Chp. 6 examines deducing a conclusion from the premises that entail it. Given that every action involves attention, I show how deduction involves a narrowing of cognitive attention, capturing the idea that deduction is nonampliative. Drawing on mental model theory, a semantic account influential in psychology, I treat entertaining the premises as a type of cognitive attention: in drawing the conclusion, one narrows attention within the models entertained (Chp. 6.2). Mental model theory is one possible implementation of deduction as a solution to a Selection Problem; whatever your preferred model, it must navigate that problem.
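To make the narrowing idea concrete, here is a toy sketch of my own (an illustration, not an algorithm from the book or from mental model theory proper): entertaining the premises fixes a set of candidate models, deduction narrows that set, and the conclusion is read off what survives.

```python
from itertools import product

def surviving_models(atoms, premises):
    """Enumerate all truth-value assignments ('mental models') over the
    atoms, keeping only those consistent with every premise."""
    candidates = [dict(zip(atoms, values))
                  for values in product([True, False], repeat=len(atoms))]
    return [m for m in candidates if all(premise(m) for premise in premises)]

# Premises: "A or B" and "not A". Entertaining them narrows the
# candidate models; the conclusion is whatever holds in all survivors.
models = surviving_models(["A", "B"],
                          [lambda m: m["A"] or m["B"],  # A or B
                           lambda m: not m["A"]])       # not A
print(models)  # [{'A': False, 'B': True}] -- B follows
# The deduction is nonampliative: nothing is added; attention is only
# narrowed within the models already entertained.
```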
Investigating how we learn proofs in symbolic logic, I argue that practice shapes attention and leads to the acquisition of a type of knowledge captured by the construct of a schema (Chp. 6.3). Given my current work on attunement to targets, the agent’s propensity to attend to them, I would characterize schemata as explaining the agent’s acquired sensitivity to various target kinds. In symbolic deduction, expert logicians’ schemata bias them, making them sensitive to logical form and enabling them to notice avenues for valid proof. Novices are less sensitive, so what the student aims to acquire is this logical sensitivity. Once acquired, it makes them more likely to attend to the appropriate logical form.
I also discuss the worry about applying a logical rule. How do I execute modus ponens as opposed to merely entertaining a sequence of thoughts that happens to conform to modus ponens (Chp. 6.1)? If the former is distinctive in involving rule application, then it seems we must represent the rule in order to apply it. Yet as Lewis Carroll showed us long ago, treating the rule as a further premise leads to a vicious regress. In the psychological case, I would seem to need infinite capacity or time to do what I manifestly can do in reasoning according to modus ponens.
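To see the regress vividly, here is the Carroll structure in schematic form (my rendering of the familiar point, not a passage from the book):

```latex
\begin{align*}
&(1)\;\; P \\
&(2)\;\; P \to Q \\
&(3)\;\; \bigl(P \land (P \to Q)\bigr) \to Q
  &&\text{the rule, now treated as a premise} \\
&(4)\;\; \Bigl(P \land (P \to Q) \land \bigl((P \land (P \to Q)) \to Q\bigr)\Bigr) \to Q
  &&\text{needed to apply (3)} \\
&\;\;\vdots &&\text{and so on, without end}
\end{align*}
```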
My theory distinguishes inputs from biases. The regress gets a grip not because we represent rules but because rules are treated as further inputs, namely premises. Yet the theory reveals another function for rules: they bias. In doing so, they set the appropriate cognitive attention needed to reason (see above). The rule could be part of learned schemata. In this way, once we see the structure of action clearly, other functional roles open up and the regress can be stopped. Readers can evaluate this proposal in Chp. 6.4. The point is that reasoning is an action, and a theory of action must play a vital role in understanding what we do when we reason.
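A crude programming analogy (mine, not the book’s machinery) may help: if the rule must appear among the inputs, it is one more premise and the regress looms; if it functions as a bias, it lives in the control structure that does the selecting, and no further premise is needed to apply it.

```python
def modus_ponens_as_bias(premises):
    """The rule is not among the inputs; it is the control structure
    itself, a bias that selects which inputs to attend to and how to
    respond. No extra premise is needed to apply it, so no regress."""
    for p in premises:
        if "->" in p:
            antecedent, consequent = (s.strip() for s in p.split("->", 1))
            if antecedent in premises:
                return consequent
    return None

# The inputs are just the two premises; the rule does its work as a bias.
print(modus_ponens_as_bias(["P", "P -> Q"]))  # Q
```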
Finally, introspection. We too frequently give introspection a free pass, drawing on it without question and assuming that it gives an unadulterated view of consciousness. This assumption is demonstrably problematic. Elsewhere, I have argued that the empirical argument for an unconscious dorsal visual stream (zombie action) draws on introspection that gives irrelevant information, and that in the rubber hand illusion, introspective probes are so ambiguous that they are likely useless.
What is notably missing is a plausible psychological model of introspection. In MoM, I present two models of introspective attention: simple and complex.
The “simple” idea is that when introspecting perceptual consciousness, the agent uses the same capacities they use to make an observational judgment, say that there is a red square. The twist is that at the output, they deploy a different conceptual response, say that they see the red square (this was Gareth Evans’ proposal). Introspective attention is just visual attention where selection is for introspective rather than perceptual report. This requires only a shift at the output: deploying a concept of experience (SEEING-red) rather than an observational concept (RED).
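As a toy illustration (mine, not Evans’ or the book’s formalism): one and the same attentional selection feeds two different conceptual outputs.

```python
def perceptual_report(selected):
    """Observational judgment: deploy the observational concept (RED)."""
    return f"There is a {selected} square."

def introspective_report(selected):
    """Same visual selection, different output concept (SEEING-red):
    the shift happens only at the response, not in the attention."""
    return f"I see a {selected} square."

glimpse = "red"  # one and the same attentional selection
print(perceptual_report(glimpse))     # There is a red square.
print(introspective_report(glimpse))  # I see a red square.
```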
Introspective reliability piggybacks on perceptual reliability, and introspective failure on perceptual failure (Chp. 7.6 discusses introspecting hallucination). For example, in inattentional blindness, theorists have puzzled about what people see outside of visual attention. Yet introspection outside of attention will be effectively guessing. Since inattentional blindness requires that attention be engaged elsewhere than the critical stimulus (that being the point of the experiment), we should expect subjects to be reduced to guessing outside of what they were tasked to attend to. If they notice or remember the gorilla, then attention was automatically engaged. Limits on attention limit introspection, so drawing any conclusion about consciousness outside of attention is perilous.
Introspection is typically not simple but complex. Where simple introspection relies on just one attentional channel (perception), complex introspection relies on multiple channels. There’s the rub. Theory-ladenness, for example, amounts to automatically drawing on beliefs in one’s hypothesis to bias reports. Multiple channels also complicate questions about accuracy.
Here’s a question whose answers have been used to adjudicate the metaphysics of consciousness:
Is your peripheral vision blurry?
Philosophers who have published on this uniformly assert that peripheral vision is not blurry. Weirdly, I think that it is. In my informal polling of audiences, respondents are split; there is notable disagreement. What is going on?
Well, how does introspection work here (Chp. 7.5)? Not as a uniform internal spotlight of attention. I propose a different hypothesis, informed by a proposal about how we acquire the concept of blur. We identify blur by contrasting clear with blurry vision. Consider a refraction exam to correct myopia. The optometrist has us view a letter under two different lenses. Introspecting the contrast between the cases, we recognize one as blurrier than the other. This is a working memory task: we hold one glimpse in mind and compare it with the present glimpse. So we have complex introspection, drawing on a perceptual and a memory channel. It is arguably reliable, since optometric interventions are (glasses work). Here’s a diagram.
Figure: Complex introspection where subjects compare memory to present perception to make judgment about blur. © Wayne Wu.
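In the spirit of the figure, here is a toy sketch (mine, with made-up channel names and values, not a model from the book) of how a working memory channel and a perceptual channel combine into a blur judgment, with a third channel marking where background belief can intrude.

```python
def complex_blur_introspection(current_glimpse, remembered_glimpse,
                               background_belief=None):
    """Toy sketch of complex introspection: compare a held glimpse
    (working memory channel) with the present glimpse (perceptual
    channel) to yield a judgment about blur."""
    blurrier_now = current_glimpse["sharpness"] < remembered_glimpse["sharpness"]
    # A further channel, background belief, is one place where
    # theory-ladenness can enter and shift the judgment.
    if background_belief == "peripheral vision is clear":
        return "clear"
    return "blurry" if blurrier_now else "clear"

# Lens A is held in memory; lens B is currently viewed:
print(complex_blur_introspection({"sharpness": 0.4}, {"sharpness": 0.9}))  # blurry
```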
Now consider disagreements about blur. If introspection were an internal beam of attention, should we say that some use it badly and others use it well? Or that peripheral visual experience varies across the population? Perhaps. But an alternative is that if introspection is complex, different individuals will bring different assumptions (channels) into the mix, and this can alter judgments.
In experimental psychology, we control for unwanted variability in different ways. If our introspective data is so noisy, we cannot assume that we can take a sample of consciousness from the armchair and then construct a theory of that phenomenology. We must be as rigorous in introspection as scientists are when doing psychophysics. Arguably, we are not, and this casts a shadow over much introspection in the theory of consciousness.
We need realistic models of the psychology of introspection. Here, as in any domain of human agency, we need a theory of agency and a theory of attention…and I know where you can find one.