Frankie Egan: Brains Blog precis of Deflating Mental Representation 

In the book I propose what I call a deflationary account of mental representation, characterized by three claims: 

(1) Construing a mental (or neural) state as a representation does not presuppose a special, substantive relation (or relations) – what we can call a representation relation – holding between the state and what the representation is about, for example, the coffee cup in front of me, or Paris, or Napoleon as I think about them. 

(2) Representational content is not an essential property of mental states – that is, the very same kind of state may have had a different content, or no content at all. 

(3) Content attribution to mental states is always pragmatically motivated. It serves to characterize, for certain uses and purposes, features of the mind that are not themselves intentional. As I put it, content serves to gloss these features for the intended purposes. 

The common idea in the various domains in which mental representation figures – as an explanatory posit in the sciences of the mind, in our folk psychological practice of predicting and explaining each other’s behavior, and in our ordinary thought and talk about perceptual experience – is that content provides a way of modeling mental states for a variety of purposes. 

Since this is the Brains Blog I will focus on the role of mental representation in science, in particular, in the computational cognitive sciences. Computational neuroscience and psychology aspire to provide a foundation for the study of mind; accordingly, they should not presuppose representation or meaning. They aim to characterize the causal mechanisms underlying our cognitive capacities. Computational theories model the causal processes underlying cognition as mathematical processes.  

A computational description is an abstract characterization of the causal organization underlying a cognitive capacity. The point is easiest to see if we consider a very simple example, a physical system with the capacity to add. The relevant causal organization responsible for the capacity is characterized, in a compact and perspicuous way, by mapping physical states – say, populations of neurons firing – to the arguments and values of the function computed, here addends and sums. In general, the mapping supports a representational characterization: the input and output states of the system can be said to represent the arguments and values, respectively, of the computed function. So I call these mathematical contents.   
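To fix ideas, here is a minimal sketch of the adder example (my own illustration, not the book's; the firing rates and the 10-Hz-per-unit interpretation are invented): the device's behavior is described purely causally, and numbers enter only via a stipulated mapping from physical states to addends and sums.

```python
# Toy illustration (not from the book): a physical device whose states are
# firing rates, glossed as representing numbers under an interpretation map.

def physical_dynamics(rate_a: float, rate_b: float) -> float:
    """The device's causal behavior, described non-semantically:
    two input firing rates produce one output firing rate."""
    return rate_a + rate_b          # e.g. summation of input activity

def interpret(rate: float) -> int:
    """Interpretation function: map a firing rate (Hz) onto a number.
    The mapping, not the physics alone, confers 'mathematical content'."""
    return round(rate / 10.0)       # assume 10 Hz per unit, purely stipulative

# Under the mapping, input states represent addends and the output a sum.
in1, in2 = 30.0, 50.0               # physical states: 30 Hz and 50 Hz
out = physical_dynamics(in1, in2)   # causal process yields 80 Hz
print(interpret(in1), "+", interpret(in2), "=", interpret(out))  # 3 + 5 = 8
```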

Examples are numerous. Perceptual systems compute smoothing functions (e.g. the Laplacian of a Gaussian distribution) to eliminate noise in order to, say, identify edges in the scene. A smoothing mechanism in early vision takes as input light intensities at points in the image and gives as output the rate of intensity change over the image (Marr 1982). Its inputs represent intensity values and its outputs the rate of intensity change. The motor control system uses visual and proprioceptive information and computes vector subtraction, enabling the organism to grasp an object in view (Shadmehr and Wise 2005). Inputs represent vectors and outputs their difference.
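Here is a rough sketch of these two examples (my own toy code, not Marr's or Shadmehr and Wise's models; it assumes a one-dimensional intensity profile and made-up hand and target positions): intensities are convolved with a Laplacian-of-Gaussian kernel so the output tracks the rate of intensity change, with zero-crossings marking candidate edges; the reach vector is then obtained by subtracting the hand-position vector from the target-position vector.

```python
import numpy as np

# Toy sketch of the two examples in the text (my own illustration):
# 1-D Laplacian-of-Gaussian filtering of image intensities, plus
# vector subtraction for reaching.

def log_kernel(sigma: float, half_width: int) -> np.ndarray:
    """Laplacian of a Gaussian, sampled on [-half_width, half_width]."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return (x**2 / sigma**4 - 1 / sigma**2) * g

# Input: light intensities at points in the (1-D) image -- a step "edge".
intensity = np.concatenate([np.full(20, 0.2), np.full(20, 0.9)])

# Output: filtered signal tracking the rate of intensity change.
response = np.convolve(intensity, log_kernel(sigma=2.0, half_width=6), mode="same")

# Zero-crossings of the response mark candidate edge locations.
edges = np.where(np.diff(np.sign(response)) != 0)[0]
print("candidate edge positions:", edges)

# Motor example: inputs represent hand and target positions (vectors),
# the output their difference -- the reach vector.
hand, target = np.array([0.1, 0.4, 0.0]), np.array([0.3, 0.2, 0.5])
print("reach vector:", target - hand)
```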

The point of the computational characterization, again, is to characterize the causal organization underlying the capacity. As I’ve said, causal processes are modeled as mathematical processes, in other words, as computations. There is little point in asking whether the structures really represent specific vectors. The representational construal is simply a consequence of characterizing the mechanism as computing the specified function, and this computational characterization is justified insofar as it succeeds in characterizing the causal organization responsible for the target capacity. The representational construal has a pragmatic rationale, as does all content attribution on the deflationary view. 

Characterizing the mechanism underlying a cognitive capacity involves more than just specifying the function computed. A complete computational characterization will specify the algorithms involved in the computation of the function(s), the structures that the algorithm maintains, the computational processes defined over these structures, and ultimately how these structures and processes are realized in neural matter. It will also specify what I call the ecological component of the theory – general facts about the organism’s normal environment, including robust correlations between distal property instantiations and tokenings of internal structures. These facts explain why computing the specified function(s) suffices, in that environment, for the successful exercise of the cognitive capacity to be explained.

Importantly, these structures are characterized in non-intentional terms, and the processes defined over the structures in causal terms. Theorists may use representational talk – they may, for example, characterize a structure as representing an edge – but talk of a neural population’s activation representing its distal stimulus conditions is a gloss on the purely causal account given by the theory. The representational gloss is justified in part on pragmatic grounds, by how well it serves the theorist’s various explanatory interests and goals. I will mention only one function of an intentional gloss here. In the book I discuss several additional functions.  

A common function served by the attribution of content to mental states, in the various domains in which content is attributed, is that it allows the states to be assessed for accuracy. This point is hardly controversial, but on the deflationary view mental states – I am using the term broadly here to include neural states – have meaning and accuracy conditions only relative to a pragmatically motivated gloss.

One reason to attribute semantic properties in a gloss is that the cognitive capacities that are the explanatory targets of computational neuroscience and psychology are typically characterized in intentional terms, for example, an organism’s knowing the 3-D structure of the scene, or knowing the location of an object in view. Attributing content to an internal structure – construing a structure constructed in early visual processing as representing an edge – allows the processes described in purely causal terms by the theory to be partitioned into accurate runs on the one hand (when the structure’s tokening is caused by its normal stimulus conditions) and mistakes on the other (when it is not). Representational talk is the ‘connective tissue’ linking the causal processes characterized in the computational theory and the manifest, intentionally characterized, capacities that are the theory’s explanatory target. The intentional characterization is useful, given the way we pretheoretically think of cognitive capacities, in terms of representing and occasionally misrepresenting aspects of the world. But, as explained above, what I call the ecological component of the theory suffices to explain the successful exercise of the capacity without construing it as representational.
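The partitioning idea can be put schematically (a toy illustration of my own, with invented structure names and stimulus conditions): the gloss pairs each internal structure with the distal condition that normally causes its tokening, and a run counts as accurate or as a mistake depending on whether that condition actually obtained.

```python
# Toy illustration of the partitioning point (my own example, with made-up
# structure names): a representational gloss maps internal structures onto
# the distal conditions that normally cause them; runs where the tokened
# structure matches the condition actually present count as accurate,
# the rest as mistakes. The causal record itself is content-free.

gloss = {"S_edge": "edge", "S_blob": "blob"}   # structure -> normal distal cause

# Each run: (structure actually tokened, distal condition actually present).
runs = [("S_edge", "edge"), ("S_edge", "shadow"), ("S_blob", "blob")]

for structure, distal in runs:
    verdict = "accurate" if gloss[structure] == distal else "mistake"
    print(f"{structure} tokened, {distal} present -> {verdict}")
```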

References 

Marr, D. (1982). Vision. Freeman. 

Shadmehr, R. and Wise, S. (2005). The Computational Neurobiology of Reaching and Pointing: A Foundation for Motor Learning. MIT Press. 

One comment

  1. “Gloss” is polysemous. There’s gloss as in what covers a piece of pottery: a thin veneer that makes the cruder fundamental substance more legible, unmistakable, portable and resilient. This first kind is always on; it’s applied to the whole substance systematically. But then there’s also “gloss” as in retrospective commentary, explication that dips in as needed to clarify or make explicit parts that prove more pragmatically important to get precisely right than the first draft of the text thought was needed at the time (cf. retrospection upon the fruits of predictive processing).
