Symposium: Deflating Mental Representation

Join us this week for our first event of the year, another fantastic book symposium! This time, we have Frankie Egan discussing her new book, Deflating Mental Representation. We have three great commentaries from Oron Shagrir, Caitlin Mace and Adina Roskies, and Mazviita Chirimuuta, along with responses from Frankie. Feel free to hop into the comments to discuss!

One comment

  1. I wish to thank Frankie Egan for her well-considered and interesting post. It raises some critical issues. Unfortunately, I must disagree on many points.

    “(1) Construing a mental (or neural) state as a representation does not presuppose a special, substantive relation (or relations) – what we can call a representation relation – holding between the state and what the representation is about, for example, the coffee cup in front of me, or Paris, or Napoleon as I think about them.”

    But it does. The state is a representation because it can be, and generally is, activated by objects of the represented type – or, in the singular case, by the represented individual – the coffee cup in front of me, or Paris, or Napoleon. This relation is not “substantial” in the sense of an existing activation. The relation is potential – the capacity to be elicited or activated by what is represented. Thus, it is substantial in the sense of being essential to the state’s ability to act as a stand-in for the individual or type it represents.
    If a neural state could not be activated by what it represents, it would be inapplicable. We would be unable to relate it to its instances when presented with them. Accordingly, it would be useless in the generation of appropriate responses to its potential instances, and so behaviorally useless.

    “(2) Representational content is not an essential property of mental states – that is, the very same kind of state may have had a different content, or no content at all.”
     
    No evidence supports the speculation that “the very same kind of [neural] state may have had a different content, or no content at all.” How, operationally, can we identify “the very same kind of state” to test this hypothesis? We might be able to show that the same digital state can represent different content or none at all, but that is because what a digital state represents can depend on what the human operator intends it to represent. A wave simulation could represent a sea wave, a sound wave, or no physical wave at all, but a neural wave representation comes to be because of its subject’s personal experience and education. It is connected to the sensorium in such a way that encountering a wave will likely activate it, while non-waves are unlikely to – and it is this specific potential activation that defines it as the representation of its potentially activating instances.
    Egan later hints at this when she writes of “robust correlations between distal property instantiations and tokenings of internal structures.” Of course, correlation is not causation. It is not just that certain neural states are correlated with distal property instantiations. Rather, the earlier sensing and subsequent processing of such properties cause the neural states that will later be activated by other tokens of the same type – making those states the representations they are.

    “(3) Content attribution to mental states is always pragmatically motivated. It serves to characterize, for certain uses and purposes, features of the mind that are not themselves intentional. As I put it, content serves to gloss these features for the intended purposes.”

    This is the genetic fallacy. The truth or falsity of content attribution to mental states is indifferent to its motivation. I am puzzled by the claim that what we consider representational features of the mind “are not themselves intentional.” According to Franz Brentano, the defining feature of intentionality is its “aboutness,” and some mental states are clearly about those states of affairs having the power to activate them. This capacity is not a gloss, but essential to the applicability, and so to the contribution to fitness, of mental states and processes.

    “Computational neuroscience and psychology aspire to provide a foundation for the study of mind; accordingly, they should not presuppose representation or meaning. They aim to characterize the causal mechanisms underlying our cognitive capacities. Computational theories model the causal processes underlying cognition as mathematical processes.”

    This approach leads to Chalmers’ hard problem of consciousness. As I have argued in “The Hard Problem of Consciousness & the Fundamental Abstraction,” the physical sciences begin with a fundamental abstraction that precludes the reduction of mental to physical processes. The abstraction fixes on the known object and its processes, and prescinds from knowing subjects and their processes. As physical scientists we (rightly) care about what Galileo saw, not the mental processes by which he cognized what he saw. Consequently, physics (broadly considered) lacks the data and concepts to discuss mental operations. Because physical science admits no intentional effects among its data, it cannot devise a chain of argument terminating in the explanation of intentional effects such as meaning.
    Cognition’s causal mechanisms occur both in the intentional and in the physical theaters of operation. So, a methodology limited by the fundamental abstraction is doomed to failure, as the history of the hard problem shows. A geometric space lacking basis vectors in a certain dimension cannot represent objects in that dimension. So it is here. We need both intentional and physical insights to understand psychosomatic phenomena.

    Listing a number of physical factors, Egan claims that “These facts explain why computing the specified function(s) suffices, in that environment, for the successful exercise of the cognitive capacity to be explained.” What these factors explain is how neural processing might give rise to manifest behavior – an important area of research, to be sure. What they do not explain is cognition as a mental or subjectively experienced process – and this brings us to a deep semiotic confusion underlying discussions of mental representation.
    In his Ars Logica, João Poinsot (aka John of St. Thomas) distinguishes instrumental and essential signs – concepts generally confused in contemporary philosophy of mind. An instrumental sign is one whose intrinsic nature must be grasped before it can signify. For example, we must grasp that the smudge on the horizon is smoke rather than dust before it can signify fire, that the red octagonal object is a street sign before it can signify stop, or that “apple” is a word before it can represent the fruit. This is because, intrinsically, instrumental signs are something other than signs – combustion products, painted sheet metal, or ink on paper.
    On the other hand, when we think <apple> we do not first have to realize that <apple> is an idea and then determine its meaning. We simply think of apples. If we come to know that <apple> is an idea, we do so only retrospectively. We know we thought of apples and call that process “having an idea.” While instrumental signs can do more than signify (say, scatter light), all mental contents can do is signify. Since their whole essence is being a sign, they are called essential signs. And since signifying is not a physical operation, essential signs operate not in the physical, but in the intentional, theater of operations.
    Of course, thoughts are supported by neural states. Still, thoughts are not neural states. We are unaware of most neural states. Further, neural states do not function as instrumental signs. We do not apprehend neural connectivity or firing rates and thereby understand the encoded contents. Rather, we understand the encoded contents directly, intentionally rather than physically.

    We must recognize that while neural states and processing are essential to thought, they are not mental representations.
