Biorepresentations: necessary but not sufficient?

What follows is a working definition of how some people in neuroscience use the term ‘representation,’ and a brief examination of that usage.

Definition of biorepresentation:
A biorepresentation is an internal structure whose role is to carry information about what is happening in the world, a structure that is used by downstream processes to guide behavior with respect to those bits of the world that it carries information about.

Two examples
1. In the leech there is a simple behavior known as the local bend response (LBR). Touch the leech at medium pressure, and it will bend its body away from where it was touched. The LBR seems to be a very simple local escape mechanism. There are sensory neurons (P cells, where P stands for ‘pressure’) in the leech nervous system that carry information about touch location. The P cells synapse onto a downstream network of neurons that “read out” the signals in the P cells and control the behavior in the appropriate way, i.e., move the leech away from the location of touch.

Using the above definition, P cells represent (or biorepresent; I will often leave off the prefix) touch location: they carry information about touch location, and downstream neuronal processes use them to guide behavior with respect to touch location. (As a side note: is it possible for the P cells to be “wrong” in some sense? Sure: if the leech isn’t touched and they fire action potentials, then they are misrepresenting the state of the world at that time.)
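The definition can be sketched as a toy simulation. Everything here is invented for illustration (real P cells are spiking neurons with graded responses, not one-hot dictionaries): a sensory stage whose role is to carry information about touch location, and a downstream readout that uses that signal to guide the bend. The last few lines also illustrate misrepresentation: activity without a touch.

```python
# Toy sketch of the biorepresentation definition applied to the leech
# local bend response. All names and numbers are illustrative, not a
# model of real leech physiology.

def p_cell_activity(touch_location):
    """Sensory stage: P cell activity carries information about where
    on the body the touch occurred (None = no touch)."""
    rates = {"dorsal": 0.0, "ventral": 0.0, "left": 0.0, "right": 0.0}
    if touch_location is not None:
        # The representation: activity that covaries with touch site.
        rates[touch_location] = 1.0
    return rates

OPPOSITE = {"dorsal": "ventral", "ventral": "dorsal",
            "left": "right", "right": "left"}

def downstream_readout(rates):
    """Downstream stage: 'reads out' the P cell signal and guides
    behavior with respect to what that signal carries information
    about, i.e., bends away from the touched site."""
    touched = max(rates, key=rates.get)
    if rates[touched] == 0.0:
        return "no bend"
    return "bend " + OPPOSITE[touched]

print(downstream_readout(p_cell_activity("dorsal")))   # bend ventral

# Misrepresentation: the cells fire although nothing touched the
# leech, so downstream behavior is guided with respect to a touch
# that never happened.
spurious = p_cell_activity(None)
spurious["left"] = 1.0
print(downstream_readout(spurious))   # bend right
```

The point of the sketch is only that both clauses of the definition are doing work: the sensory state must covary with the world, and something downstream must actually consume it to drive behavior.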

2. The primary visual cortex (V1) of monkeys carries information about what is happening in the ambient electromagnetic world, about where and what various objects are, in addition to a zillion other things. There are all sorts of downstream neurons that make use of the activity in V1 to guide behavior with respect to those things in the world. Therefore, V1 represents those aspects of the world, based on the above definition.

1. Why use this word?

In what sense does this usage of the term ‘represent’ hook up to more esoteric senses as you might find in a philosophy of mind course?

For the record, I don’t think using a term in science needs defense: we borrow terms from ordinary language that are close to the meaning we want, and this helps us think about the phenomenon (we don’t think much about words or concepts, but are much more interested in the thing we are studying) and communicate our theories to nonscientists. However, I’ll defend it nonetheless.

For one, saying a structure represents something means that it bears some relation to that thing. There exists no authoritative pretheoretic definition of what this relationship is that can be used to say that the relationship can’t be based on information. The information relation falls out quite naturally as we try to explain why an organism behaves the way it does with respect to the world it is in. There are things in the brain that tell other parts of the brain what is happening in the world. Depending on how you use the terms, biorepresentations may not exhibit full-blown ‘reference’ or ‘intentionality’ or ‘aboutness,’ but the word ‘representation’ isn’t arbitrary, or a horrible mangling of sensible use of the term.

There is a second, closely related sense in which the term hooks up to the sexy, philosophical usages. Namely, representations are internal states that figure in the explanation of animal behavior. They are the ‘internal maps by means of which we steer.’ Obviously they aren’t maps in the traditional sense, but more abstractly they are things that indicate what is happening in the world and that are reliable enough to use to make decisions about how to adaptively interact with that world. The neuronal populations that decide which muscles to contract don’t have direct access to the world, but must make do with other neurons that are tightly linked to what is happening in the world. They are like a person stuck with a map trying to find their way around in Boston.

2. What about thermostats?
My definition may be considered by some to be too broad. If applied to the bimetallic strip in a thermostat, even it would be said to represent temperature, as it is used by downstream mechanisms to control the behavior of an air conditioner and heating system.
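Taken at face value, the definition does apply to the thermostat. A minimal sketch, with invented numbers and names, makes the parallel explicit: the strip’s deflection is a state that covaries with room temperature, and downstream control logic uses that state to guide the system’s “behavior” with respect to temperature.

```python
# Toy thermostat: the strip's deflection carries information about
# room temperature, and downstream logic uses it to control the
# heating/cooling system. Thresholds and units are arbitrary.

def strip_deflection(room_temp_c):
    """The 'representation': a physical state (deflection in mm)
    that covaries with room temperature."""
    return 0.1 * (room_temp_c - 20.0)   # zero deflection at 20 degrees C

def controller(deflection_mm):
    """Downstream 'readout': guides behavior with respect to the
    temperature the deflection carries information about."""
    if deflection_mm > 0.2:       # warmer than about 22 degrees C
        return "cooling on"
    if deflection_mm < -0.2:      # cooler than about 18 degrees C
        return "heating on"
    return "idle"

print(controller(strip_deflection(25.0)))  # cooling on
print(controller(strip_deflection(16.0)))  # heating on
print(controller(strip_deflection(20.5)))  # idle
```

Structurally this is the same two-stage story as the leech: an information-bearing state plus a consumer of that state. Whether that similarity is a problem for the definition is exactly the question at issue here.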

Speaking as a neuroscientist, I would simply leave it to the philosophers to determine whether my notion of representation is useful or general enough for whatever purposes they are pursuing. For my admittedly local purpose of explaining the behavior of terrestrial organisms, I find this word and idea very useful for thinking of experiments and interpreting data. Clearly it points to something real, something biologically relevant, and fairly general within the sphere of explaining behavior.

Speaking as a philosopher…well, plenty of that below.

3. Potential limitations
Clearly the definition isn’t sufficient to account for all the interesting semantic properties philosophers of language and mind have described.

For one, it can’t handle the apparent existence of coextensional terms with different meanings (e.g., anything that carries information about squares thereby carries information about equilateral rectangles, yet ‘square’ and ‘equilateral rectangle’ differ in meaning).

Second, it isn’t clear that it can give explanatory bite to the actual content of a representation, what the structure carries information about. E.g., we might be able to give an account of leech behavior in purely mechanistic terms, leaving out the notion of representation altogether. On the other hand, my hunch is this would just serve to reduce the above notion of representation, not really eliminate it; after all, even in the mechanistic description we will discover that there are certain neurons that carry information about touch location, and that these neurons guide behavior with respect to touch location.

Third, it is hard to see how this notion could be used to give an account of more abstract representations like ‘totalitarian’ or ‘love.’ On the other hand, to the extent that these things have extensions, neuronal populations could form maps of political space or interpersonal space that are used to guide behavior with respect to these spaces.

Fourth, some might say it is not possible to account for misrepresentation using this notion. I commented on this when discussing the leech. The representation’s role (I leave ambiguous whether I mean functional or causal role; I don’t think it really matters) is to bring into the nervous system information about what is happening on the surface of the body. Unfortunately, representations sometimes fail to carry out this role, just as a heart can fail to carry out its role of pumping blood properly. Obviously, I steal this from Millikan.

4. Is representation dirty water, or the baby?

Some have gone so far as to say that any account of mind that employs information (or indication, or correlation, or covariation) is killed by the problems discussed above. In this spirit, Sellars mocked what he called the ‘thermometer view of concepts.’

Dretske and a few others have seen it differently. Sure, biorepresentations aren’t sufficient to account for all the semantic properties philosophers like. But that doesn’t mean they aren’t necessary. Dretske’s project is essentially to figure out what needs to be added to the basic biorepresentational core before we end up with full-blown minds with all their weird properties.

I can’t pretend to see the future, especially the future of philosophy of mind, but it seems neuroscience in the past 50 years has provided a very useful set of tools and concrete examples for thinking about representation in biological systems: about how brains hook up with the world, and about how accurate (or not) sensory representations are when mapping it. From my narrow perspective within sensory neuroscience, I have always thought this approach the most promising basis upon which to build a naturalistic philosophy of mind: not the top-floor theory, but the basement-level concepts.

How will such higher-level theory look? How are humans different from leeches? Not in lacking these basic sensory representations (we have them!), but in having additional mechanisms in place to exploit this ancient biorepresentational core. We can store representations in memory (as can bees), compare them to each other, activate them somewhat arbitrarily even in the absence of the proximal stimulus, combine them into novel, more complicated representational structures, etc. This is largely what Dretske considers in Knowledge and the Flow of Information; it is why learning is central to his theory of full-blown semantic systems, and why he takes pains to distinguish analog from digital representations. These are all additional conceptual resources that do not apply to thermostats, incidentally.

So while the leech isn’t a good model for human thinking, it may be a good model for what gives human thinking a little bit of traction in the real world, a good model for the biological processes that make human thinking possible.


  1. I’m just a lurker without much philosophical training, so excuse me if this is something trivial. I find it problematic to equate the receptors or higher-level neurons, that is, the neural structure itself, with representations. Wouldn’t it be the activity of some specific neurons, rather than the neurons as structures, that represents?

  2. Eric Thomson

    Wolf: interesting point. I look at ‘the activity of cells’ as the representer, not either in isolation, though frankly I don’t think much rides on this.

    But your point does bring up another potential problem with my view, one I left out last night:
    I addressed thermostats, now what about plants? There I can’t say ‘Leave it to the philosophers.’ Even plants have biorepresentations under my definition: things responsible for guiding phototaxis (well, if you allow that phototaxis is behavior).

    I don’t know whether to limit my analysis to neurons or not, as I frankly don’t have expertise outside of neuroscience and don’t know how botanists talk about things, and it’s nobody’s place to legislate language outside of his specialty. As I said, qua neuroscientist I am happy to use this simple notion to help me understand the behavior of terrestrial organisms (with nervous systems), and let others decide if it is useful enough to help them pursue the goals they are pursuing.

    Stepping back, I frankly don’t mind if internal plant or bacterial states biorepresent. Indeed, single-celled organisms can have fairly complicated behaviors, and even ion channel populations control certain behaviors. Do the ion channels represent external sodium concentration in these buggers? Sure, why not? I have nothing against that.

    For the record, I also have nothing against saying the bimetallic strip in the thermostat represents temperature. I’m not sure about the firing pin in a gun: does it represent pressure on the trigger? In present-day guns it does not, as carrying information isn’t its role. But let’s say I build a trigger-based thermostat: when I pull the trigger, the AC goes on for 10 minutes, and I put the firing pin in to transmit information about the trigger to the AC unit. In that case, the firing pin is a representation.

    These artificial examples are weird because the functional role (in this case it is clearly functional role we are talking about) depends on us, so we decide whether something is a representation. That’s one reason I avoided those cases: I don’t want to end up in a long discussion about how functions are fixed.

    Again, I really don’t want to press on these examples, as my definition is limited to the range discussed in the piece (implicitly terrestrial organisms with nervous systems).

  3. anna-mari

    Moi Eric,

    How are you? Anyway, what an interesting issue…

    Just two brief comments:

    You wrote:

    “…For one, saying a structure represents something means that it bears some relation to that thing. There exists no authoritative pretheoretic definition of what this relationship is that can be used to say that the relationship can’t be based on information. The information relation falls out quite naturally as we try to explain why an organism behaves the way it does with respect to the world it is in.

    Depending on how you use the terms, biorepresentations may not exhibit full-blown ‘reference’ or ‘intentionality’ or ‘aboutness,’ but the word ‘representation’ isn’t arbitrary or a horrible mangling of sensible use of the term”

    – I have no difficulties whatsoever in accepting many of your wise words, and it seems to me you are saying that we just have to use the notion of “representation” as a filler term when we are explaining behavior. However, let me emphasize: filler term or not, I do agree that usually the term is not arbitrary (in neuroscientific theories), but a well-justified part of those theories.

    But there is something I have been thinking about. Sometimes it seems to me that there are certain differences between when the term (or notion) of “representation” is used as a specification of the explanandum and when it is used as a specification of the explanans. Would you agree? I don’t know whether this makes any sense, but I’d say that when the notion of representation is used as a filler term for something-to-be-explained, people usually try to say something about “reference,” “the relationship between the brain and the world,” etc. And when the notion of representation is used as part of an explanation of, say, neurocognitive phenomena, then the notion plays different roles… depending on the explanatory purposes.

    (I tried to write something about those purposes and roles here, but it is more or less confusing… so I deleted it.)

    Greetings from sunny and warm Helsinki,


  4. Eric Thomson

    Anna: I’m not saying it is a filler term, something to be explained at some future date. It is typically something we measure and observe.

    But it is a good point that it is often used in such a way, especially by psychologists. Or perhaps I’d say they treat representations as theoretical posits. Less of that goes on in neuroscience.
