Biorepresentations: necessary but not sufficient?
What follows is a working definition of how some people in neuroscience use the term ‘representation,’ and a brief examination of that usage.
Definition of biorepresentation:
A biorepresentation is an internal structure whose role is to carry information about what is happening in the world, and which is used by downstream processes to guide behavior with respect to those bits of the world it carries information about.
1. In the leech there is a simple behavior known as the local bend response (LBR). Touch the leech at a medium pressure, and it will bend its body away from where it was touched. The LBR seems to be a very simple local escape mechanism. There are sensory neurons (P cells, where P stands for ‘pressure’) in the leech nervous system that carry information about touch location. The P cells synapse onto a downstream network of neurons that “read out” the signals in the P cells and control the behavior in the appropriate way, i.e., to move the leech away from the location of touch.
Using the above definition, P cells represent (or biorepresent: I will often leave off the prefix) touch location: they carry information about touch location, and downstream neuronal processes use them to guide behavior with respect to touch location. (As a side note: is it possible for the P cells to be “wrong” in some sense? Sure, if the leech isn’t touched and they fire action potentials, then they are misrepresenting the state of the world at that time.)
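The definition can be made concrete with a toy simulation. This is purely illustrative: the P cells and readout network follow the text, but the discrete segment names, the one-active-cell coding scheme, and the readout rule are all invented simplifications of a real analog circuit.

```python
# Toy model of the leech local bend response (LBR).
# Invented simplification: four discrete body "segments" and binary P cells.

SEGMENTS = ["dorsal", "ventral", "left", "right"]
OPPOSITE = {"dorsal": "ventral", "ventral": "dorsal",
            "left": "right", "right": "left"}

def p_cells(touch_location):
    """Sensory layer: P-cell activity carries information about touch
    location (here, one active 'cell' per touched segment)."""
    return {seg: (1 if seg == touch_location else 0) for seg in SEGMENTS}

def readout(activity):
    """Downstream network: uses the P-cell signal to guide behavior with
    respect to touch location, bending away from the represented touch."""
    active = [seg for seg, a in activity.items() if a]
    if not active:
        return "no bend"
    return f"bend toward {OPPOSITE[active[0]]}"

# Veridical case: touch on the left, the leech bends to the right.
print(readout(p_cells("left")))   # bend toward right

# Misrepresentation: a P cell fires although nothing touched the leech;
# the downstream network is guided with respect to a touch that never happened.
spurious = {seg: 0 for seg in SEGMENTS}
spurious["dorsal"] = 1
print(readout(spurious))          # bend toward ventral
```

The two ingredients of the definition are separated on purpose: `p_cells` is the information-carrying structure, and `readout` is the downstream consumer that gives that information its behavioral role.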
2. The primary visual cortex (V1) of monkeys carries information about what is happening in the ambient electromagnetic world, about where and what various objects are, in addition to a zillion other things. There are all sorts of downstream neurons that make use of the activity in V1 to guide behavior with respect to those things in the world. Therefore, V1 represents those aspects of the world, based on the above definition.
1. Why use this word?
In what sense does this usage of the term ‘represent’ hook up to more esoteric senses as you might find in a philosophy of mind course?
For the record, I don’t think using a term in science needs defense: we borrow terms from ordinary language that come close to the meaning we want, which helps us think about the phenomenon (we don’t think much about the words or concepts themselves; we are far more interested in the thing we are studying) and communicate our theories to nonscientists. However, I’ll defend it nonetheless.
For one, saying a structure represents something means that it bears some relation to that thing. There is no authoritative pretheoretic definition of this relation that rules out its being based on information. The information relation falls out quite naturally as we try to explain why an organism behaves the way it does with respect to the world it is in. There are things in the brain that tell other parts of the brain what is happening in the world. Depending on how you use the terms, biorepresentations may not exhibit full-blown ‘reference’ or ‘intentionality’ or ‘aboutness,’ but the word ‘representation’ isn’t arbitrary, or a horrible mangling of sensible usage of the term.
There is a second, closely related, sense in which the term hooks up to the sexy, philosophical usages. Namely, representations are internal states that figure in the explanation of animal behavior. They are the ‘internal maps by means of which we steer.’ Obviously they aren’t maps in the traditional sense, but more abstractly they are things that indicate what is happening in the world and that are reliable enough to use to make decisions about how to adaptively interact with that world. The neuronal populations that decide which muscles to contract don’t have direct access to the world, but must make do with other neurons that are tightly linked to what is happening in the world. They are like a person stuck with a map trying to find their way around Boston.
2. What about thermostats?
My definition may be considered by some to be too broad. If applied to the bimetallic strip in a thermostat, even it would be said to represent temperature, as it is used by downstream mechanisms to control the behavior of an air conditioning and heating system.
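The worry can be made explicit by applying the same two-part template to the thermostat. Again a toy sketch: the linear deflection model and the one-degree dead band are invented numbers, not a real device specification.

```python
# The biorepresentation template applied to a thermostat (toy sketch).
# Invented simplification: strip deflection is linear in temperature.

def bimetallic_strip(room_temp_c):
    """'Representation': the strip's deflection carries information
    about room temperature (deflection in arbitrary units,
    zero at an assumed 20 C setpoint)."""
    return room_temp_c - 20.0

def controller(deflection, band=1.0):
    """Downstream mechanism: uses the strip's state to guide 'behavior'
    with respect to the temperature the strip carries information about."""
    if deflection > band:
        return "cool"
    if deflection < -band:
        return "heat"
    return "idle"

print(controller(bimetallic_strip(25.0)))  # cool
print(controller(bimetallic_strip(15.0)))  # heat
print(controller(bimetallic_strip(20.0)))  # idle
```

Both clauses of the definition are satisfied: an information-bearing structure, and a downstream consumer that exploits it. Whatever distinguishes brains from thermostats must therefore come from somewhere beyond the bare definition, which is the point of the discussion that follows.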
Speaking as a neuroscientist, I would simply leave it to the philosophers to determine whether my notion of representation is useful or general enough for whatever purposes they are pursuing. For my admittedly local purpose of explaining the behavior of terrestrial organisms, I find this word and idea very useful for thinking of experiments and interpreting data. Clearly it points to something real, something biologically relevant, and fairly general within the sphere of explaining behavior.
Speaking as a philosopher…well, plenty of that below.
3. Potential limitations
Clearly the definition isn’t sufficient to account for all the interesting semantic properties philosophers of language and mind have described.
For one, it can’t distinguish contents that are informationally equivalent but differ in meaning: a structure that carries the information that something is a square thereby carries the information that it is a four-sided polygon, yet representing something as a square is not the same as representing it as a four-sided polygon.
Second, it isn’t clear that it can give explanatory bite to the actual content of a representation, what the structure carries information about. E.g., we might be able to give an account of leech behavior in purely mechanistic terms, leaving out the notion of representation altogether. On the other hand, my hunch is that this would just serve to reduce the above notion of representation, not really eliminate it; after all, even in the mechanistic description we will discover that there are certain neurons that carry information about touch location and that these neurons guide behavior with respect to touch location.
Third, it is hard to see how this notion could be used to give an account of more abstract representations like ‘totalitarian’ or ‘love.’ On the other hand, to the extent that these things have extensions, neuronal populations could form maps of political space or interpersonal space that are used to guide behavior with respect to these spaces.
Fourth, some might say it is not possible to account for misrepresentation using this notion. I commented on this when discussing the leech. The representation’s role (I leave ambiguous whether I mean functional or causal role; I don’t think it really matters) is to bring into the nervous system information about what is happening on the surface of the body. Unfortunately, it sometimes fails to carry out this role, just as a heart can fail to carry out its role of pumping blood properly. Obviously, I steal this from Millikan.
4. Is representation dirty water, or the baby?
Some have gone so far as to say that any account of mind that employs information (or indication, or correlation, or covariation) is killed by the problems discussed above. In this spirit, Sellars mocked what he called the ‘thermometer view of concepts.’
Dretske and a few others have seen it differently. Sure, biorepresentations aren’t sufficient to account for all the semantic properties philosophers like. But that doesn’t mean they aren’t necessary. Dretske’s project is essentially to figure out what needs to be added to the basic biorepresentational core before we end up with full-blown minds with all their weird properties.
I can’t pretend to see the future, especially the future of philosophy of mind, but neuroscience in the past 50 years has provided a very useful set of tools and concrete examples for thinking about representation in biological systems, about how brains hook up with the world, and about how accurately (or not) sensory representations map the world. From my narrow perspective within sensory neuroscience, I have always thought this approach was the most promising basis upon which to build a naturalistic philosophy of mind. Not the top-floor theory, but the basement-level concepts.
How will such higher-level theory look? How are humans different from leeches? Not in lacking these basic sensory representations (we have them!), but in having additional mechanisms in place to exploit this ancient biorepresentational core. We can store representations in memory (as can bees), compare them to each other, activate them somewhat arbitrarily even in the absence of the proximal stimulus, combine them into novel, more complicated representational structures, etc. This is largely what Dretske considers in Knowledge and the Flow of Information; it is why learning is central to his theory of full-blown semantic systems, and why he takes pains to distinguish analog from digital representations. These are all additional conceptual resources that do not apply to thermostats, incidentally.
So while the leech isn’t a good model for human thinking, it may be a good model for what gives human thinking a little bit of traction in the real world, a good model for the biological processes that make human thinking possible.