
Vehicle realism, content pragmatism: Uneasy bedfellows
Caitlin Mace and Adina L. Roskies 

Frances Egan’s (2025) “Deflating Mental Representation” poses an interesting challenge to both realists and fictionalists about mental representation. In the book she argues for a pragmatic view of representational content and realism about representational vehicles.  

The book as a whole is a deflationary take on the primary ways in which the concept of representation enters into cognitive science: in scientific theorizing about the brain, in folk psychological discourse, and in perception. In all these cases, Egan denies that there is a substantive representation relation between a mental state and what it is about, one that suffices to pick out exactly the content of that state. However, because representational terms are used in the study of the mind, the prediction of behavior, and the description of mental states, we cannot do away with them wholesale. Instead, the use of intentional terms is relegated to a pragmatic “intentional gloss” that helps us with our projects but does not describe anything real. 

Egan’s view of neural computation posits transformations over vehicles of representation, which are structures defined over physical states of the system in question, here the brain. A realization function describes the mapping between the physical brain and the relevant causal structure, picking out the vehicles, which, as parts or processes of the brain, are real. An interpretation function then maps those vehicles to mathematical objects, in virtue of which the system can be said to compute a particular mathematical function. An intentional gloss is needed to relate these computational states to the outside world: the interests of the experimenter, the environment, and so on. Crucially, Egan takes the intentional gloss to lie outside the bounds of scientific theory: it is needed to relate the computation to the world and to explain what scientists do, but it is not part of the “theory proper”. For example, the intentional gloss 1) links the computational story to the intentionally characterized phenomena being explained; 2) selects what is important in the causal chain given the scientist’s goals; and 3) can serve as a placeholder for future discoveries. Despite the importance of these roles, Egan maintains that the gloss is not really a part of the scientific theory. This preserves the naturalism of the computational theory by relegating intentional attribution to the scientist rather than to the phenomenon. “The basic point here, which cannot be overemphasized, is: a computational theory’s commitment to meaning and intentionality is merely apparent. In making use of pragmatic content, a computational theory incurs no explanatory debt that needs to be discharged and so no debt that it is the job of philosophers to discharge” (Egan 2025, p. 51). Thus, the attribution of content to the structures and transformations is only a gloss supplied by the scientist, strictly speaking lying outside the scientific theory. 
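
To fix ideas, the structure of the view can be sketched formally. (The notation here is ours, not Egan’s; it is a rough reconstruction of the familiar mapping picture, offered only as a reading aid.) Let P be the set of physical states of the system, let the realization function r : P → V pick out the vehicle states, and let the interpretation function f : V → D map vehicles into a mathematical domain D. The system computes the mathematical function g : D → D just in case, whenever its causal dynamics take it from state p to state p′,

    f(r(p′)) = g(f(r(p))).

Everything up to this equation is, for Egan, candidate theory proper; the further assignment of worldly content to the elements of D is the intentional gloss.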

We argue that the identification of vehicles in neuroscience is made difficult by radical vehicle indeterminacy, and that for someone like Egan this is especially problematic: vehicle identification depends on content assignment so intimately that content cannot be relegated to an extra-theoretical role. 

Vehicles, on Egan’s account, are concrete, physically realized structures. Their physicality realizes causal powers, and the causal powers relevant for cognitive science are those involved in cognition. The aspects of representations that we should be realists about, for Egan, are the structures themselves, the algorithms that maintain the structures, the computations performed by the structures, and the mathematical functions that are computed. What makes a causal structure a vehicle rather than mere causal structure is that the vehicle plays a role in instantiating some part of a mathematical function in virtue of computing some output. The mathematical functions and computations are not a gloss; only the content ascribed to them is. We think the following is problematic for Egan: in the blooming, buzzing confusion of signals that is the brain, the representational contents, or glosses, that Egan identifies play an ineliminable role in the practice of the science itself. 

Vehicles do not come pre-packaged; they do not wear their identity on their sleeves. Indeed, the project of identifying vehicles is notoriously difficult, and it has been argued that attempts to identify them are plagued by indeterminacy (Mace 2025; Roskies 2025). To identify computational vehicles, experimenters look for neural signals whose activity appears to be correlated with some experimental variable taken to be the representational content, such as the onset of a cue, a stimulus feature, or some aspect of behavior. The vehicle is provisionally taken to represent that experimental variable, and some kind of computational transformation is posited. The vehicle, the representational content ascribed to it, and the function attributed to it are then iteratively refined in a continuing negotiation (caricatured in the toy sketch below). This iterative process may change the experimenter’s conception and/or identification of the vehicle itself, for example prompting a change from a rate code to a temporal code, or from a cellular to a population or circuit construct. To us it seems disingenuous to relegate the content ascribed to the vehicle to something outside the theory proper, an extra-theoretical gloss, given how central that content attribution is to the identification of vehicles, computational properties, and the mathematical correspondence between structure and function. Egan herself accepts that intentional glosses are indispensable; where we differ is on whether they fall within the purview of the theory proper. Relegating content to a gloss outside the theory seems motivated merely by the desire to avoid paying the intentional debt that other theories find difficult to discharge. Moreover, insofar as Egan aims to give an account of the practice of computational neuroscience, content ascriptions are essential. Egan will say that the relation between the vehicle and the experimental variable is not substantive, which we take to mean that it lies outside the theory proper; but this seems inconsistent with treating the representational vehicles as real. 
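
The loop just described can be caricatured in a toy sketch. (The sketch is entirely ours, not Egan’s, and is not any real analysis pipeline: the data are fabricated, and the bare correlation criterion stands in for whatever encoding or decoding analysis a lab would actually use.)

    import numpy as np

    # Toy data: a stimulus variable (the posited content) and two candidate
    # neural "codes" read off the same synthetic recording. All numbers are
    # fabricated for illustration only.
    rng = np.random.default_rng(0)
    stimulus = rng.uniform(0.0, 1.0, size=200)               # experimental variable
    rate = rng.poisson(lam=5 + 10 * stimulus).astype(float)  # spike counts driven by the stimulus
    latency = 0.1 + 0.05 * rng.standard_normal(200)          # response latencies, unrelated to it

    candidate_codes = {"rate code": rate, "temporal code": latency}

    # Provisional vehicle identification: pick the candidate code whose
    # activity best correlates with the variable taken to be the content.
    correlations = {name: abs(np.corrcoef(signal, stimulus)[0, 1])
                    for name, signal in candidate_codes.items()}
    vehicle = max(correlations, key=correlations.get)
    print(vehicle, correlations)  # the rate code wins under this content hypothesis

The point of the toy is that which candidate signal counts as the vehicle is fixed only relative to the content hypothesis (here, the stimulus variable); swap in a different experimental variable and a different candidate code may win. That is the dependence we claim cannot be quarantined outside the theory.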

It is perhaps possible that, given a complete and adequate theory, one could ultimately tell a computational story about what some identifiable bit of neural tissue is doing in some context, in purely causal language and without reference to representation or content. Even if so, however, building a theory of the science on that final result alone would ignore the essential role that content attributions play in the scientific process. It seems a merely verbal maneuver to slice that practice off from the rest of what the scientist does and deem it not in need of naturalizing, especially when the practice is so central to identifying what Egan wants to be a realist about: the causal structures themselves. 

Given the challenges posed by vehicle indeterminacy, a thoroughgoing pragmatism about both vehicles and content may ultimately prove the most promising position to adopt. One also wonders whether a cognitive scientist should consider the positing of an intentional gloss itself a legitimate object of inquiry, and whether it can be cashed out pragmatically and without remainder. Indeed, we suspect that Egan’s view is unstable (see also Ramsey 2020): either it will collapse into a pragmatist view about vehicles and content both (e.g., Cao 2022; Chirimuuta 2024), or into a robust realism. 

Works Cited 

Cao, Rosa. 2022. “Putting Representations to Use.” Synthese 200 (151): 1–24. 

Chirimuuta, Mazviita. 2024. The Brain Abstracted. The MIT Press. 

Egan, Frances. 2025. Deflating Mental Representation. The MIT Press. 

Mace, Caitlin. 2025. “The Vehicle Indeterminacy Problem.” Preprint. https://philsci-archive.pitt.edu/id/eprint/26416. 

Ramsey, William. 2020. “Defending Representation Realism.” In What Are Mental Representations?, edited by Joulia Smortchkova, Krzysztof Dołęga, and Tobias Schlicht. Oxford University Press. 

Roskies, Adina. 2025. “Representational Vehicles, From Regions to Cells.” In Neurocognitive Foundations of Mind, edited by Gualtiero Piccinini. https://doi.org/10.4324/9781003458531-5. 

Comments

  1. I think vehicle identification presupposes content attribution. While the mechanism functions independently of our attribution of content, without content the mechanism itself would be meaningless. The research process proceeds in precisely the opposite direction to what Egan’s deflationism suggests: one begins with an intentionally characterized behavior or cognitive capacity and then searches for its neural correlates. Content attribution is methodologically primary, not secondary; it is constitutive of identifying the very mechanism to be explained.
