Ramsey Reconsiders Representation

William Ramsey, Representation Reconsidered, CUP, 2007.

This is the most up-to-date and systematic discussion of representation that I know of. It distinguishes four notions of representation and discusses their pros and cons, carefully and often insightfully. That’s enough to make it a useful book.

Needless to say, there is more to it.

The four notions of representation are: (A) representation as interpreted input-output decomposition (as when a complex system is decomposed into various subsystems that perform different stages of an information-processing task), (B) representation as model (e.g., a map), (C) representation as functional indication (i.e., representation à la Dretske, where something represents X iff it is recruited by the system because it carries information about X), and (D) tacit representation (representation without representational vehicles). I don’t have room to elaborate on each notion, so if you are interested in more details, read the book.
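
For readers who like toy examples, here is a minimal sketch, in Python, of what notion (C) amounts to. It is my own illustration, not Ramsey’s or Dretske’s, and every name in it (Indicator, recruit, steer) is made up for the occasion: the idea is just that an internal state counts as representing X only if it reliably carries information about X and has been wired into the control of behavior because of that correlation.

    # Toy sketch (mine, not Ramsey's or Dretske's) of notion (C), functional indication:
    # a state represents X only if it (i) reliably carries information about X and
    # (ii) gets recruited to guide behavior because of that correlation.
    import random

    class Indicator:
        """An internal state that covaries (noisily) with an external condition X."""
        def __init__(self, reliability):
            self.reliability = reliability  # P(state fires | X obtains)
            self.recruited = False          # wired into behavior control yet?

        def reading(self, x_obtains):
            """Fire with high probability when X obtains, low probability otherwise."""
            p = self.reliability if x_obtains else 1 - self.reliability
            return random.random() < p

    def recruit(indicator, trials=1000, threshold=0.9):
        """Crude recruitment step: wire the state into behavior control only if
        its readings track X well enough, i.e., because of the information it carries."""
        hits = sum(indicator.reading(x) == x
                   for x in (random.random() < 0.5 for _ in range(trials)))
        indicator.recruited = hits / trials >= threshold
        return indicator.recruited

    def steer(indicator, x_obtains):
        """Behavior is driven by the state only once it has been recruited in
        virtue of what it indicates -- the (C)-style criterion."""
        if indicator.recruited and indicator.reading(x_obtains):
            return "approach"      # e.g., head toward the nectar
        return "keep searching"

    sensor = Indicator(reliability=0.95)
    if recruit(sensor):
        print(steer(sensor, x_obtains=True))

On this toy picture, a bare causal mediator such as a fulcrum or a relay switch would not qualify, since nothing recruits it because of the information it carries; whether that is enough to secure genuine representation is exactly what Ramsey disputes, as the comments below make clear.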

Ramsey argues that cognitive science is moving in an anti-representationalist direction.  His argument is roughly as follows:

1. (A), i.e., representation as I-O decomposition, is legitimate.
2. (B), i.e., representation as model, is legitimate.
3. (C), i.e., representation as functional indication, is not genuine representation.
4. (D), i.e., tacit representation (without representational vehicles), is not genuine representation.
5. Classicism relies on (A) and (B) (and hence it is a legitimately representational theory).
6. Connectionism relies on (C) and (D).
7. Connectionism is right.
8. Connectionism is not a representational theory (from 3, 4, and 6).
9. Folk psychology is committed to mental representations.
10. Eliminative materialism (i.e., the elimination of folk psychology via eliminating mental representations) is true (from 7, 8, and 9).


I agree with premise 4, but every other premise is questionable. Contra premises 1 and 2, I don’t think that (A) and (B) are robust enough notions of representation for cognitive science to rely on without shoring them up with something like (C). Contra premise 3, (C) is not only legitimate representation, but something like it is probably our best bet to ground a full theory of representation. Contra 5, classicism can rely on (C) as well as (A) and (B) (this doesn’t affect Ramsey’s argument). Contra 6, connectionism can rely on (A) and (B) as well as (C) (as Rick Grush also points out in his review). So connectionism is a representational theory after all, and it doesn’t threaten folk psychology, or at least not too much.

(Incidentally, there are people, such as Bob Gordon, who object to premise 9; Ramsey doesn’t do justice to their concerns, but on this he is in the mainstream.)

My main pet peeve is premise 7.  Whether connectionism is right, and even whether there is a contrast between classicism and connectionism, depends on what one means by ‘classicism’ and ‘connectionism’.  I think Ramsey is too vague on this point (like most people who write on this topic) for his argument to work.  For instance, if ‘connectionism’ means that cognition is explained by the activities of neurons and how they are connected together, then no one in their right mind should deny that connectionism is right.  But this says nothing against classicism (let alone representation). 

Classicism is best understood as a specific theory about how neurons and their connections are organized, and about the kind of capacities that neural organization gives rise to. With classicism thus understood, there is no immediate contrast between classicism and connectionism. Classicism is just (more or less) another version of connectionism! Of course, there are non-classical versions of connectionism, but then the contrast between the two theories should be drawn more carefully, and no conclusive case has been made that non-classical connectionism (as opposed to classical connectionism) is correct. It’s all a matter of figuring out how neurons and their connections are in fact organized, and what capacities their organization does in fact give rise to. In the end, I do think that a version of non-classical connectionism (or, at least, connectionism that is non-classical in some important ways) is correct. But more work needs to be done to show that this is the case. (For more on these issues, see my recent paper in the journal Neural Networks, “Some Neural Networks Compute, Others Don’t”.)

In conclusion, even if Ramsey’s argument is unsound, his book is valuable in many ways. For one thing, it will force representationalists to be more careful in defending their view. For another, it is a good update of Stich’s 1983 anti-representationalist book (From Folk Psychology to Cognitive Science). As they did with its predecessor, representationalists can use Representation Reconsidered as a whipping boy for years to come.

27 Comments

  1. Thanks for the summary.

    I see two big problems.

    First, premise 3, especially since the notion of ‘representation’ we use in neuroscience is roughly indicator-focused. I guess he’d argue sensory neuroscientists use the term differently, that he has a stronger notion of representation. In which case, indicators are important for representation, just not the type of representation he is talking about (err, eliminating).

    Also, connectionism can rely on B, C, or both (a map can be looked at as a particular type of indicator, so I’d say that B often implies C).

  2. gualtiero

    Eric, I agree with you. Ramsey does say that neuroscientists use the “indicator-focused” notion of representation, but he also argues that that’s not genuine representation. He says there is no difference between that notion of representation and a “mere causal mediator”. He gives an elaborate argument, going through Dretske’s account of representation based on indication. I don’t think he is going to convince many people. (For starters, he didn’t convince me.) But it will take some work to rebut his argument. This is a good topic for a paper, but I don’t have time to write it at the moment, so someone else will have to do it.

  3. Eric Thomson

    Interesting stuff, perhaps I’ll check it out. Rather than say it isn’t “genuine” representation perhaps he should say it is a notion that doesn’t capture certain features he’s interested in. I have no trouble saying a representation is a “causal mediator”: it all depends on the nature of the causal mediation going on! After all, it seems to me a representation is (partly) a causal mediator of certain types of animal behavior!

    Perhaps when I have time I should kill two birds with one stone: argue against a pure map-based (i.e., Churchlandian) theory of representation, and argue that it needs to be supplemented by indicator considerations. In arguing for that, it would likely be easy to deal with Ramsey (assuming I could; perhaps he has an argument that would convince me that indicators aren’t necessary for representations).

  4. Ramsey’s argument, as I understand it, is not that indicators aren’t necessary for representations: it’s that indication or causal mediation alone is not sufficient for representation. Whether he’s right about this is a different matter, but if he is right, questioning his notion of representation won’t help. If we extend the notion of representation so that mere causal mediation or relay switch type cases count as representational, isn’t the worry that we’d end up with pan-representationalism?

  5. Eric Thomson

    Zoe said, summarizing Ramsey:
    it’s that indication or causal mediation alone is not sufficient for representation

    This is also Dretske’s position. He doesn’t want thermometers to have semantic content!

    Zoe also asked:
    If we extend the notion of representation so that mere causal mediation or relay switch type cases count as representational, isn’t the worry that we’d end up with pan-representationalism?

    But nobody thinks any old causal mediation is sufficient to be a representation, either. I agree that the notion of causal mediator is so broad as to add little to no constraints (e.g., a fulcrum is a causal mediator between two forces). But this generality also makes it strange to say that representations are not causal mediators. There is good reason to think there are internal states that (function to) carry information about the world, and it is in virtue of the information carried that these states have the causal role they do in the behavior of the organism. The internal maps by means of which we steer.

    I think the honeybee is a good model system for the study of biorepresentation. Ramsey doesn’t happen to talk about those, does he? If someone could convince me that honeybees don’t represent where the nectar is, as they fly back before doing their little waggle dance to communicate to their buds where that nectar is, I would be impressed!

    What sounds strange about Ramsey is that he uses connectionism (which is theoretical psychology, not neuroscience) to argue that science is veering away from representations, while all the while sensory neuroscience is replete with talk of representations!

    I may have to get this book just to go Dretske on him.

  7. Eric wrote:
    There is good reason to think there are internal states that (function to) carry information about the world, and it is in virtue of the information carried that these states have the causal role they do in the behavior of the organism.

    This is where Ramsey disagrees: Ramsey thinks that many of the structures treated by Dretske as representations can be employed in such a way that the causal relations enabling the structure to carry information can be explanatorily relevant while the information resulting from these relations is not. I think this is what he might say about the honeybees, but I’d have to give it further thought; he doesn’t mention that case.

    Eric also wrote:
    What sounds strange about Ramsey is that he uses connectionism (which is theoretical psychology, not neuroscience) to argue that science is veering away from representations, while all the while sensory neuroscience is replete with talk of representations!

    Ramsey’s argument against the neuroscientist’s notion of representation is largely against the receptor or indicator view discussed above, but he also argues against the notion of tacit representation found in connectionism. I haven’t finished reading that chapter yet, though, so I can’t comment!

  8. In thinking about representations in neuro-cognitive systems, I find it useful to distinguish between token representations and analog representations. Token representations carry information but are not decomposable. Analog representations carry information and are decomposable into constituent features.

  9. Eric Thomson

    Zoe:

    OK, I see. That is an interesting point.

    One potential tactic in response is to point out that representational vehicles (such as brain states) are clearly not epiphenomenal, and to be agnostic about whether indicator-based representational contents (fly-here-now) are epiphenomenal. And I’d note that, in practice, we can’t identify the former without studying the latter. Indeed, I hate to say it, but I might say representational contents are “explanatorily prior” to representational vehicles. Yuck.

    A second (not incompatible) stance, from Dretske, is to give a positive account of the explanatory bite of representational content (indicator-based content). He argues that certain information-bearing structures are connected to certain behavior-causers in virtue of the information being carried by the information-bearer. If the information carried by the structure had been different, it would not be connected to the behavior-causing subsystem the way it is. This is in the context of reinforcement learning. I’m not sure it completely works, but it’s pretty clever on Dretske’s part (Explaining Behavior and his article “Reasons and Causes”). My hunch is that it works for distal, but not proximal, explanations of behavior. Indeed, twin-earth type considerations make me think it isn’t too crazy to be a proximal-epiphenomenalist about indicator-content, but to reject distal-epiphenomenalism.

    Does Ramsey say that the neuroscientists are using the notion of representation incorrectly, or just differently? The first is probably wrong, the second would be nice. (I ask, hoping that he isn’t in that ‘Conceptual Foundations of Neuroscience’ cult that thinks they need to detoxify the linguistic practices of neuroscientists; I’d be shocked if he were one of those weirdos, given that he studied with the Churchlands!)

  10. Eric Thomson

    This is very similar to Dretske’s distinction between ‘analog’ and ‘digital’ representations. I don’t like the co-opting of terms from signal processing in this context, as I think it confuses things. And ‘token’ has a whole different meaning in philosophy too. Ack.

  11. It seems to me that the co-opting of terms across domains of thought is natural, unless we want to create neologisms for each new concept. Should Dretske not use the signal-processing terms ‘analog’ and ‘digital’? Has philosophy egregiously co-opted ‘token’ from the time-honored ‘token of one’s esteem’?

  12. Eric Thomson

    I try not to be pedantic about this kind of thing, but I think Dretske shouldn’t have used those terms, particularly since his book is about information theory, in which the terms have very different usages from his own!

    The bigger problem is that this still doesn’t specify how something comes to be a representation, whether it be analog or digital or token or whatever. That seems to be the fundamental issue Ramsey brings up (with the added nontrivial claim that to count as a representation, the content must not be epiphenomenal).

    As in my previous post, ‘representation’ is ambiguous: it can refer to contents or to vehicles (the thing being represented versus the thing doing the representing). It seems one can be epiphenomenal while the other isn’t, even though in practice we use both to fully study representational structures.

  13. It seems to me that if we are trying to understand cognition, and if we claim that a representation is a biological event, then the representational ‘contents’ and the occurrent state of the ‘vehicle’/brain mechanism are the same. The external object or event that is represented by the brain mechanism is necessarily outside of the ‘vehicle’.

    As for epiphenomena, in the moon illusion, is the moon an epiphenomenon? Is the difference in the perceived/represented size of the moon at the horizon vs. its perceived size at zenith an epiphenomenon? Is this relevant to the philosophical concept of epiphenomena?

  14. Eric Thomson

    The external object or event that is represented by the brain mechanism is necessarily outside of the ‘vehicle’.

    I agree. This is what I am calling the representational content of that representational vehicle.

  15. I’m reading Metzinger’s ‘Being No One’ right now, and he argues that there isn’t any important difference between the notions of representation in the classical and connectionist camps. The book is a little too complex to summarize in a couple of sentences, but generally he argues that these camps simply focus on different component levels of cognitive systems (so it could have been a mechanism-based point). Anyway, he defines the notions of representation in a way that doesn’t beg the question against any position by definitional fiat. I’ll probably post a summary as soon as I’m finished reading the monster 😉

  16. Bill Ramsey

    People may have moved on, but I recently found out about this thread and wanted to say a couple of things. First, thanks to Gualtiero for his review and to others for your interesting comments. I was hoping the book would prompt this sort of discussion. One point of clarification: Gualtiero suggests that I endorse connectionism and so, given my claims about the non-representational nature of connectionism, also eliminativism. But I don’t actually endorse any particular architecture or theory — instead, I think these are empirical questions and the jury is still out. So my endorsement of eliminativism is conditional — if connectionist models of the sort I discuss are correct, then that would entail eliminativism. But some may use this conditional to tollens against that sort of connectionism, since eliminativism seems so crazy. (By the way, in the final chapter, I try to spell out a truncated version of eliminativism that might not sound quite so crazy.)

    Second, I think it is important that we distinguish questions about the way a representation’s content is determined from questions about how something actually functions as a representation. I think philosophers have tended to focus too much on the former and not enough on the latter. In fact, I think some have supposed that answering the former sort of question is sufficient for answering the latter sort (the issue has been somewhat muddled by teleo-semantic accounts). In the book, I focus more on the second question because I think it is more pressing for cognitive scientists. So I’m looking at the ways researchers seem to think certain brain or computational states and structures function as representations, and I’m saying that some of these ways seem to work and some of them don’t.

    What many of you have trouble with is my claim that the Dretske-style indicator notion of representation is in the category of notions that don’t work. Given the widespread popularity of this way of thinking about representation, I’m not surprised that this is something people want to take issue with. Please do!!! But I suspect you will find it tougher going than you might think. In chapters 4 and 6 of the book I try to address many of the points made here, like Dretske’s attempts to supplement mere indication by appealing to structuring causes or the different ways people appeal to “information carrying” to justify talk of representation. When you look really closely at what these things amount to, I think it becomes clear that they don’t do the work people think they do in justifying talk of representation. Although people have read the book as being anti-representational, I’m actually trying to defend representationalism from a sort of creeping deflationism whereby, as Zoe pointed out, virtually everything winds up being a representational system.

    Anyhow, have a look at my arguments and see what you think.

  17. Bill Ramsey

    Eric: This is an older 2003 paper called “Are Receptors Representations?” in a special issue of the Journal of Experimental & Theoretical Artificial Intelligence, 15(2): 125-141. But the 4th chapter of the book is more carefully worked out and better.

  18. Eric Thomson

    Thanks–indeed, I printed that chapter out at the library earlier today! I also wrote up my own personal reasons for liking indicators in a recent post, which is implicitly a response to that chapter (e.g., your firing pin problem and your problem with using the word ‘representation’ to talk about these simple information carriers in nervous systems–my response to the firing pin would be along the lines of my response to the thermostat).

    The firing pin example is excellent. I agree with you that Dretske’s response (that you mention in the footnote) wasn’t satisfying. I think he could have had a better response.

  19. Bill Ramsey

    Eric,

    I’ll take a look at your thermostat discussion — where is this recent post, exactly?

    You might also take a look at the first two sections of my 6th chapter. There I offer a direct comparison of the indicator and map-based notions using a robotic sort of example. Also, I go after some other indicator defenders, like Bechtel and Clark, who defend a representational perspective from the dynamical systems folks.
