I am thrilled to introduce our first symposium in a series on articles from Neuroscience of Consciousness, this one on Nicholas Shea and Chris Frith’s “Dual-process theories and consciousness: the case for ‘Type Zero’ cognition.” We have three excellent commentaries on the paper, by Jacob Berger, Nick Byrd, and Elizabeth Schechter, along with a response by the authors, all of which are linked below.
***
According to some, consciousness primarily interrupts well-tuned subconscious processing, leading to decreased performance on tasks that are more effectively done without conscious deliberation. Others argue that conscious reflection helps us overcome biases in our thinking, and even that some types of tasks require consciousness. This second camp is often populated by “dual systems” theorists, who argue that cognition comprises two systems: a fast, heuristic, non-conscious one; and a slow, deliberate, conscious one.
Shea and Frith argue that the dual systems perspective fails to carve cognitive processing into the right types, and that a better taxonomy can dispel the air of paradox surrounding the “consciousness hurts” versus “consciousness helps” results. They make two key moves toward this end. The first is to pull apart two distinctions that are run together in the dual systems position: conscious versus non-conscious representations on the one hand, and automatic versus deliberate processes on the other. The second is to deny that we should be looking for tasks that necessitate consciousness, and instead to think of consciousness as facilitating certain kinds of mental functions.
Shea and Frith’s taxonomy explicitly posits three types of cognition. “Type 0,” on their view, consists in automatic processes operating over non-conscious representations. These processes tend to have small domains of operation and to be extremely well-tuned to those domains. Examples include learning probabilistic action-outcome contingencies, as well as motor control, for which consciousness can indeed be a hindrance. If one is a skilled typist, for instance, consciously attending to one’s movements tends to make performance worse.
“Type 1” cognition, which is the type not recognized by dual systems theorists, consists in automatic processing of conscious representations. Conscious representation, according to Shea and Frith, allows for interactions between different domains of information. These connections are facilitative in, for instance, recognizing patterns and coordinating behavior, but come with a cost. Without defined domains, problem spaces are poorly constrained, and this promotes heuristic reasoning. As is well-known from the heuristics and biases literature, heuristics can be tricked into producing wrong answers. Shea and Frith agree with dual systems theorists that “Type 2” cognition, which is deliberate processing of conscious representations, can play a role in overcoming such biases.
Hence, consciousness can both help and hurt, and which occurs depends on the context of reasoning and the problem being solved. One interesting corollary of Shea and Frith’s proposal is that we may be able to think of deliberative Type 2 reasoning processes as consciously controlled sequences of Type 1 processes. A second is that there is another possible type of cognition, involving deliberate processing over non-conscious representations. Shea and Frith suggest that this type may in fact never be instantiated, and if that turns out to be the case, we will have learned something interesting about the relationship between consciousness and deliberation—namely, that deliberative reasoning requires the representations it manipulates to be conscious.
***
Thanks very much to the authors and commentators for participating. Thanks also to Jakob Hohwy and the other editors of Neuroscience of Consciousness, and to Oxford University Press. Please feel free to comment in the discussion board below!
Jacob Berger, Commentary on “Dual-Process Theories and Consciousness: The Case for ‘Type Zero’ Cognition”
Elizabeth Schechter, Consciously representing and deliberate processing: Comment on Shea and Frith’s “Dual-Process Theories and Consciousness”
Nicholas Shea and Chris Frith, Reply to Commentaries
Thanks, Daniel/Brains Blog, and thank you, Shea and Frith, for your reply. I’ve been puzzling over the Soto et al. (2011), Sklar et al. (2012), and Sackur and Dehaene (2009) studies cited in the authors’ reply (the first of these, I realize, was also cited in the original article). These are really fascinating studies.
I have about 20 questions, but I wonder if I could start by asking a single simple clarification question. (And I’d welcome a reply from anyone who has a thought on how to answer!)
Shea and Frith write in their reply that an “… overall type 2 process … is not automatic if it consists of a series of automatic steps between intermediates that are conscious.” Now, as noted in the original article, the inputs and outputs of a type 1 process are conscious. So is the difference between type 1 and type 2 processes that the intermediate representations of a type 2 process are also conscious, or rather that there *are* intermediate representations? I believe it is the former. But then does type X cognition differ from type 0 in terms of its involving intermediate representations, only non-conscious ones this time? (I could see why the authors might not want to commit themselves to this, and why they prefer to lean on the notion of load sensitivity instead.)
Thanks everyone for this interesting discussion. One question I had throughout concerned the proper characterization of a ‘deliberate’ process, and accordingly the proper way to understand a deliberate/automatic distinction (I take it many of y’all share this question).
In connection with that question, can I ask how (if at all) this distinction might map onto a distinction between inferential (or broadly rational) processes and merely associative processes? The connection between type-2, deliberate, and model-based reasoning, and between type-0, automatic, and model-free reasoning suggests some connection, but I’m not clear on its contours.
So, e.g., I’m not clear on how type-1 fits in to the inferential/associative contrast, nor on whether type-0 inferential processing – regarding which, Mandelbaum’s Nous paper ‘Attitude, Inference, Association’ seems relevant – would be said by the current discussants to exist, or to qualify as ‘deliberate.’
Hi Josh,
I’ve got a similar question from the other direction. I’m still not sure I fully understand what ‘inference’ is in papers like Mandelbaum’s. Here’s an attempt to answer both my question and your question.
What do we mean by ‘inference’?
We don’t get a “By ‘inference’, I mean…” clause from the Mandelbaum paper, but we do get the following implicit hints:
“logical, propositional processes—inferences and not just associative transitions—working over propositional structures.” (12)
“inferences are sensitive to the strength of arguments and weights of evidence.” (13)
So inferences involve both processes and structure. The processes are logical and propositional. The structure is also propositional. And — although I don’t understand how — something about inference is supposed to allow for weighing the strength of arguments and evidence. Maybe someone can fill in the missing parts of this account of inference.
Are Type-2, Type-1, Type-0 associative or inferential?
We might need to know more about ‘inference’ to answer this question.
Must inference involve conscious inputs/outputs? If not, then inference could be Type-2, Type-1, or Type-0 (as far as I can tell).
But must inference involve deliberate processing? If not, then (again) inference could be Type-2, Type-1, or Type-0.
If the answer to either of these questions is supposed to be ‘yes’ (that is, if inference must involve conscious inputs/outputs or deliberate processing), then I suppose that I would want to know why. After all, it seems obvious to me that I make logical/propositional inferences without consciously representing the relevant inputs or outputs and without deliberately processing anything. I imagine that Mandelbaum would agree, given that he thinks phenomena like implicit biases (which often don’t seem to involve consciously representing all of the relevant inputs/outputs or deliberate processing) are inferential. And surely I also draw inferences in the conscious-and-deliberate sense denoted by Type-2 cognition.
This, of course, wouldn’t mean that all Type-2, Type-1, and Type-0 cognition is inferential. It would mean only that it can be. After all, evidence suggests that Type-1 and Type-0 cognition can be associative, since associative interventions sometimes make a much larger (or more reliable) difference to such cognition than logical or propositional interventions do (see Devine, Forscher, Austin, and Cox 2012).
And if that is right, then one and the same Type-2, Type-1, or Type-0 cognition could (in principle) involve both associations and inference. This might fly in the face of one of Mandelbaum’s (suppressed?) premises — i.e., if a behavior seems to involve inference, then the behavior cannot also involve association — but I think that this upshot can make sense of the otherwise puzzling debiasing findings that inspire papers like Mandelbaum’s. But that is the topic of a different paper.
And here’s a weaker upshot: one and the same *behavior* (or change in behavior) can involve both associations and inference.
This weaker upshot should have the same explanatory power for puzzling debiasing findings that I have in mind.
I’m slowly working through these interesting issues and haven’t read all the comments, but since S&F (Nick and Chris) are focusing in their response on Type X processes, here are two thoughts and an aside, which I hope are on point with the current drift of the discussion:
1. With Josh and Nick in the comments above, I’d like to press on the deliberative process issue. Why can’t we have a more mechanistic conception of such processes? Cognitive load effects are not, I take it, being offered as grounding a sufficient condition but only as indicative of Type 2.
One simple model would be that one type of deliberative process involves a top-down effect in which goal representations (e.g. encoded in one’s intention) play a causal role in the unfolding of the process in question. I mention this because S&F allow that endogenous attention (often called goal-directed or voluntary attention) is an example of a type 2 process, and what makes attention endogenous is that it is set by goals/task sets or other equivalent states (i.e. such attention is not stimulus driven). Mechanistic models of endogenous attention seem to me, when interpreted correctly (hah), to emphasize this influence. I can imagine that such a model will ultimately explain why cognitive load effects are often found in Type 2 processes, even though such a correlation is not necessary. In the end, endogenous attention might lie at the border of the set of type 2 processes, the more typical of which might involve more complicated processing at the cognitive level (e.g. explicit reasoning); for endogenous attention, just being in a certain cognitive state (an intention) is what secures a “deliberative” influence.
2. On Type X (or 0.5) processes. Once you allow that endogenous attention is deliberative, and you accept the common view that the dorsal visual stream is unconscious (which I’m not sure I believe anymore, despite having argued for it in Zombie Action), then arguably visual attention to motor-relevant visual properties in reaching movements, as extensively studied by many, involves goal-influenced operations over unconscious visual representations that must be selected to guide motor movement. You don’t process all dorsal stream information during intentional action; you selectively process only the task-relevant information.
The deliberative aspect is secured, as it presumably must be in the endogenous attention cases S&F have in mind, by the idea that goals play a crucial role in setting and regulating dorsal stream selection, and over the course of an action such selection remains sensitive to goal and environment (cf. also the programming of saccades during action, which shifts with task demands and involves SC, FEF, and parietal areas; we have clear evidence that saccadic patterns are goal-responsive, yet most of us are barely aware of the number of saccades we make, let alone their pattern). You might say this processing over unconscious representations is clearest in patient DF, but if DF is meant to provide a model for normal zombie action (dorsal-mediated action), then normal subjects will have type X processes involved every time they perform motor actions, again understood as goal-regulated processing of unconscious visual information relevant to task performance.
3. Aside: we should stop talking about controlled/deliberative versus automatic processes as if they were discrete kinds. As S&F note, there is a continuum, and I think there are definitions of these notions that can capture this.
First off, I want to thank The Brains Blog for hosting this wonderful symposium, and to thank Shea and Frith again for their paper and thoughtful reply to our commentaries. There are a lot of deep and interesting issues cropping up here already, so I don’t mean to pile on, but I did want to follow up a bit on S&F’s response to the possibility, raised in my commentary, that Unconscious Thought Theory may be evidence of Type 0.5/X cognition.
S&F grant that there could be cases where distracted participants perform better on certain tasks than participants that have an opportunity to consciously reason, but they urge that this could be because consciousness entails a reduction in information about target stimuli, not because of unconscious deliberative reasoning.
I agree that this is a possible explanation of the difference between the conscious-reasoning and distraction (the supposed UTT) conditions, but, unless I’m mistaken, it can’t be the whole story. In typical UTT studies, there is a third condition, wherein participants render quick decisions after consciously processing the stimuli, and it is often found that participants in the distraction condition perform as well as, or better than, those participants too. So even if consciousness or conscious reasoning sometimes has a reductive effect, I’m not seeing how that could explain the discrepancy between the distraction and snap-judgment conditions.
Naturally, the advantage in the former could be, as S&F suggest, due to additional helpful unconscious, but automatic, processing, but I’m not so sure. (Incidentally, Dijksterhuis & Strick (2016) offer another possibility: that distraction provides an opportunity for irrelevant information to be forgotten. But they maintain that they can control for this with a “mere distraction” condition; see pp. 123–124.)
I agree that whatever processing is taking place in the distraction condition is not step-by-step reasoning in the sense that different stimulus dimensions are consciously presented sequentially and integrated. But it still seems open that participants are engaging in unconscious step-by-step inferential processes during the distraction period. I take it the question for S&F, then, is whether or not such processing is sensitive to load. Clearly, such processing is not particularly sensitive to the load placed on it by the distraction tasks, but such load is (always, I think) task-irrelevant, by design, to avoid interfering with the unconscious thought. I would guess, however, that task-relevant distractors might have an effect, but, as far as I know, that’s something that has yet to be studied. Would finding evidence like that help tip the scale?
Hi All,
Sorry to be slow joining the party, and thanks for all the interesting comments. Very briefly, I’d like to pick up on Wayne’s suggestion that we should talk in terms of continua rather than discrete types. I wonder whether this might ameliorate some of the concerns about the Type X condition, while also making S&F’s proposal less revisionary than it is pitched as being. I’m thinking of the following two claims.
1. The “constitution hypothesis” (as Lizzie helpfully phrases it in her commentary): deliberative processes are controlled (?) sequences of Type-1 processes.
2. The notion that deliberation and consciousness are at least tightly linked. (The question of Type X, I take it, is largely a question of how tight this link is.)
It seems to me that if you buy both of these claims, we get something like a continuum running from, on one end, low-control, low-complexity associations, which would be minimally (?) conscious, to, on the other end, high-control, complex sequences of Type-1 stuff, which would be paradigmatically Type-2 conscious.
If this is the view, then I wonder whether what we end up with is not a quaternary taxonomy but a more standard Type-1/2 distinction with fuzzy boundaries. Two last quick points. First, I wonder whether control might be something like the missing ingredient Lizzie mentions. Second, perhaps we could think of the Dijksterhuis scenarios as ones involving multiple Type-1 processes that are only loosely controlled, or not controlled at all.
Thanks all for the great discussion!
Many thanks to all for this helpful discussion. The main focus has been the deliberate-automatic distinction, so I’ll try to say something useful about that.
We wanted to remain as neutral as we could on how to draw the distinction between deliberate and automatic processing. There’s a very large literature on this, of course. We want to argue that, however that important distinction should be drawn, it can’t be simply in terms of consciousness, and indeed the conscious vs. non-conscious contrast concerns representations rather than processes directly. I don’t have anything very substantial to say about how the deliberate-automatic distinction *should* be drawn.
Lizzie is right that, if it turns out that deliberate processing requires conscious representations, and that is explained by the “constitution hypothesis”, then the minimal difference between type 1 and type 2 processes is that deliberate processing involves intermediate representations that are conscious. However, that obviously doesn’t work if type X processing exists (deliberate processing of non-conscious representations).
Type 2 could also involve a further element, e.g. control, of some tightly-specified sort. If there is no type X, and type 2 involves a further element beyond conscious intermediate representations, then the 2×2 scheme would leave an empty box, so the type 0/1/2 relation is better conceived as a hierarchy. It could certainly come in degrees, as Dan suggests (and then some less categorical-sounding terminology would be more appropriate, following Wayne).
Josh and Nick raise interesting questions about how all this is related to the inferential-associative distinction. I’m also not sure how the inferential-associative contrast should be drawn, nor how it relates to the deliberate-automatic contrast. If inferential processes are just those transitions that take place over conceptually-structured representations, then there is no obvious reason why that shouldn’t be completely orthogonal to both our other distinctions. It would be a substantial finding if conceptually-structured representations could only be processed deliberately (although that seems unlikely in the light of effects like semantic priming). In answer to Josh, it’s not obvious that an inferential-associative distinction aligns with the model-based vs. model-free contrast, at least as it is operationalised in the experiments we cited.
Wayne brings in the important issue of the way a cognitive process is controlled by a goal. Being influenced by goals is unlikely to align with deliberate reasoning, since having a goal in mind can affect the way automatic processes unfold (as in the motor control experiments we cited, e.g. Fourneret and Jeannerod 1998). The effect of cognitive load will be important in assessing the issue, as will be other features thought to lie in the respective clusters (type 1 vs. type 2), leaving aside consciousness.
On Jake’s point, I don’t want to put too much weight on the unconscious thought effect, given questions about replicability, but I certainly agree that the effect of load here is a crucial question.
These are really good places to press and things I want to think more about: “more research needed” – although given the wealth of data on these topics already out there, what I should really conclude is: more research _into the literature_ needed _by me_.
Many thanks again to all.
Nick