Comment on Deflating Mental Representation by Frances Egan
A “Left Sellarsian” Response to a “Right Sellarsian” Proposal
“Now the idea that epistemic facts can be analysed without remainder—even ‘in principle’—into non-epistemic facts, whether phenomenal or behavioural, public or private, with no matter how lavish a sprinkling of subjunctives and hypotheticals is, I believe, a radical mistake—a mistake of a piece with the so-called ‘naturalistic fallacy’ in ethics”
(Sellars, Empiricism and the Philosophy of Mind, §5)
I found much to admire and agree with in this book. I have long suspected that something is amiss with the project of naturalizing intentionality, that it may well be doomed, but I had not found a way to voice this concern. I am therefore very sympathetic to this aspect of the book. Nevertheless, I will argue here that Egan is a little too optimistic about the situation her alternative leaves us with: the scientific project of reconstructing intentionality.
There are a few tantalizing references to Wilfrid Sellars in the book. His adverbialist theory of perceptual experience is one of the inspirations for the external sortalism presented in the final chapter. More broadly, Egan’s project is one of reconciling the manifest and scientific images. The naturalizers of intentionality take one path: they aim to reduce intentional relations in the manifest image to causal relations in the scientific image. Egan takes another. By deflating intentional relations, treating them as merely a pragmatic gloss on certain kinds of causal explanations, a gloss that humanizes the scientific image (§2.4) without taking on any ontological commitments, she offers a reconciliation of the two images that need not satisfy the strict, and so far unmet, demands of naturalization.
Another Sellarsian theme is Egan’s emphasis on the point of explanation in cognitive science: to place phenomena in the space of causes. The mathematical structures of computational explanations are, ultimately, ones that track causal relations in cognitive systems. More inflationary accounts of mental representation have tended to muddy this point by incorporating items from the space of reasons (where intentional notions belong) into their accounts of scientific explanation.
But once we appreciate that it is a precondition of scientific explanation that it is restricted in this way (Sellars was a Kantian, after all), a problem arises for Egan’s proposal to leave to science the task of reconstructing intentionality. Here is Egan’s proposal:
I suggested that we understand naturalizing intentionality as requiring an explanatory account of the mechanisms responsible for the processes that we pretheoretically refer to as representational processes, processes that secure our “rich and varied grip on the world” (Clark 2015, 7). I have argued that this is precisely what computational psychology and computational neuroscience promise to provide. In doing so, these sciences characterize a sense in which the subject’s activity is directed at relevant aspects of the world. Directedness is not an intentional relation; it doesn’t yield determinate content and correctness conditions. However, when an account of the causal processes that support directedness, given by the computational theory proper, is supplemented by an appropriate intentional gloss, we do get determinate content and correctness conditions—that is, we get full-blooded representation. We get what I am calling a reconstruction of intentionality. (p.69)
The problem is that if it is a precondition of scientific explanations of “representational processes” that they refer only to “causal processes”, then the failure of scientific explanations to account for our pre-theoretic notion of representation is no reason to think that notion ontologically any worse off than the scientifically established notions. Yes, we can be deflationary about the ontological commitments of scientists when they interpret their computational models in representational ways; but that does not mean we must be deflationary about the representational notions that exist autonomously within the space of reasons. We certainly need not be eliminativists.
At one point Egan’s sympathy with eliminativism shows through. If she must choose between the ontology of the manifest image and that of the scientific image, it is the manifest image that will have to go:
There is a sense in which eliminativists such as Stich and Chomsky are right. If our cognitive capacities are to be naturalistically explained, then such notions as representation and content will eventually drop out of the account. A naturalistic theory must discharge any debt to intentional notions. On the deflationary account, the theory (proper) has no such debt to discharge. Rather, it is our commonsense understanding that has such a debt, one that the theory shoulders, but the theory needs the intentional gloss to show that the debt has been settled. (p.73)
But an autonomous scientific image cannot justify any ontological expansionism. It assumes that the relations it studies are causal rather than intentional. It is not a discovery of science that this is the case, and therefore not a discovery that intentional relations are ontologically questionable where causal ones are not. If the space of causes is not to be interfered with by the space of reasons (because of the problems that occur, as Egan shows us, when we try to naturalize intentionality and start running interference between them), then, likewise, the space of reasons has the right to be left alone by the space of causes.