Have I got this right that the principal reasons people have had for rejecting representations are the ones from dynamical systems theory and mobile robotics? Dynamical systems don’t have to have representations, and neither do mobile robots.
But, why not say that if they don’t have representations, they aren’t thinking either? In what sense are non-representational beings thinking?
And what do they have to say about single-cell recordings that find individual neurons sensitive to things like oriented lines or actions (mirror neurons)? Don’t those neurons represent?
Great questions.
On reasons for anti-representationalism, I think you are right that those are the main reasons invoked in the cogsci literature. Outside that literature, there are also Wittgensteinians, postmodernists, and other anti-naturalists (e.g., Rorty) who think that the idea of mental representation leads to terrible problems.
On neurons’ response properties, I haven’t seen any discussion by anti-representationalists, and I’d be very curious to know whether they have anything to say.
I was thinking about the cog sci crowd here.
Hi, Ken. If the question is ‘why don’t people who reject MRs on the basis of the DST/robotics argument reject neural representation too?’, here are two possible answers:
1) They do, arguing perhaps that mere co-variation isn’t sufficient for representation. This has more appeal than you might think. Bill Ramsey, I believe, argues that the notion of representation in neuroscience is different from the one used in classical cognitive science. He’s not a general anti-representationalist, but he thinks that the notion gets thrown around too liberally and needs to be reined in. I think there is something to this.
2) They don’t, admitting that sensory systems contain representations (at the transduction level at least) but cognition doesn’t. That’s only mildly concessive. Some of these folks (of the embedded cognition stripe) prefer to say not that there are no representations, but only that they’re overemphasized in explanation. More attention should be paid to, e.g., coupled organism-world relations than to internal ‘models’.
Other sorts of anti-representationalists apart from those Gualtiero mentions are, I guess, folks like Searle (anti-computationalist) and Davidson (interpretivist), who aren’t at all Wittgensteinian or postmodernist. Another possibility is to be anti-MR because of doubts about the prospects for naturalizing intentionality (this isn’t the same as Rorty’s anti-naturalism). Of course, there are other reasons that aren’t part of any particular doctrine (MRs lead to skepticism about the external world, they lead to a regress of interpretation, etc.).
I don’t think mobile robots think; at any rate, not yet. But you raise an interesting challenge: are representations necessary for thinking? Some people seem to view this connection not just as an empirical hypothesis, but as (practically) analytic. If we hastily identify the phenomenon of intentionality with the phenomenon of representation, we could be led this way.
But perhaps that’s a mistake. (Surely RTM isn’t so near to being a tautology.) Then the issue is just (!) how intentionality and representation can be teased apart.
Although I am not an anti-representationalist myself, I find the reference to single-cell recordings unconvincing.
What the evidence shows is that single neurons fire when some entity (e.g., a line in some part of the visual field) is present in the environment of the organism. What we have here is a causal relation between the presence of an entity in the environment and a neural event.
To make the claim that the neuron (or the firing of the neuron?) represents the presence of the entity in the environment, another step is needed.
Now, the issue is what justifies moving from a description of the neuron–entity relation in causal terms to a description in representational terms.
As far as I can see, nothing in the data justifies this step. What justifies this step is your independent commitment to some theory of representation.
Additionally, anti-representationalism might result from a distrust of semantic properties. This distrust might have various origins. One of them is the fact that semantic theories rely massively on semantic intuitions (Machery et al. 2004, “Semantics, cross-cultural style,” Cognition).
I’ve always found representation talk to be too clean for my taste — too discrete, too precise and determinate in its contents, to be the literal truth about the messy world of cognition.
Daniel,
Bear in mind that these single-cell recording techniques find co-varying cells for lots of things. I’ve heard that there are cells in macaques that fire in response to the hose on a fire extinguisher! There are also the mirror neurons. Such cells are not found just in photoreceptors, hair cells, etc.
Ken
Ken, some anti-representationalism that is also within the cog sci crowd comes from phenomenological considerations of skilled action (this, I think, is related to but separate from DST-type arguments). I’m thinking here of peeps like Dreyfus and Noe.
For some anti-anti-representationalism along these lines see Grush and Mandik 2002 “Representational Parts” and Mandik 2005 “Action Oriented Representation”.
In neuroscience, being informative about (or covarying with, or predicting, or being caused by) X is not typically taken to be sufficient for being a representation of X. The structure must also be used to guide behavior with respect to X (as Dretske says, it’s an internal map by means of which we steer: just having a map isn’t enough).
I have a recent (empirical) paper on touch location representation in the leech. Here is the pubmed link.
Note that internal information-bearing states that govern behavior are called representations in sensory neuroscience, but this minimal core doesn’t get you the ability to individuate coextensive representations, among other things (e.g., why are V4 neurons representing things in the world rather than in the retina?). Dretske has done the most, of anyone I’ve seen, to overcome these problems starting with the neuroscientist’s representational core, i.e., to build up from these representations to truly cognitive structures (chapters 4-7 of Knowledge and the Flow of Information, and his great article “Misrepresentation”).
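To make the “use” condition above concrete, here is a minimal toy sketch, assuming nothing from any actual model (all names and numbers are hypothetical): an internal state that merely covaries with a stimulus, contrasted with a consumer system that exploits that state to steer behavior.

```python
# Hedged toy sketch (all names hypothetical): the difference between a
# state that merely covaries with a stimulus and a state that a consumer
# system uses to guide behavior, per the Dretskean "internal map by
# means of which we steer" idea mentioned above.

def transduce(stimulus_location: float) -> float:
    """An internal state that covaries with the stimulus location.

    Covariation alone: the value carries information about the world,
    but nothing downstream need use it.
    """
    return stimulus_location


def steer(internal_state: float, body_position: float) -> float:
    """A consumer: the information-bearing state guides behavior.

    On the view sketched above, it is this use in steering, not the
    covariation by itself, that makes the state representation-like.
    """
    return body_position + 0.5 * (internal_state - body_position)


state = transduce(stimulus_location=3.0)  # covaries with the world
position = 0.0
for _ in range(5):
    position = steer(state, position)     # the stored state steers the body
    print(f"body position: {position:.3f}")
```

The point of the toy: delete the `steer` consumer and the `transduce` output still covaries with the stimulus, but on the Dretskean picture there is then no longer any pressure to call it a representation.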
Daniel, Edouard, and Eric
I would agree that co-variance is not sufficient for representation. (What more is required was a central issue in the Dretske- and Fodor-style naturalized semantics of the 1990s.)
So, there are two questions, let’s say.
1) Are the neurons (or their firings) representations?
2) Are the neurons (or their firings) representations in virtue of their co-variation with things in the environment?
We all agree that the answer to question 2) is no.
What about question 1)? I take it that anti-representationalists are committed to saying no to this one. Ok. Why?
Of course, there is the question of in virtue of what, if not covariance, neurons are representations. Here is, perhaps, some additional “data.”
Suppose we had the following kind of data. Suppose that, say, an oriented-line detector cell fires even when there is no line of its preferred orientation present in its visual field. That is, there is no oriented line that falls anywhere within the tuning curve of that cell. But this cell fires because of horizontal connections it has to other cells that are stimulated in adjacent regions. Maybe this is the kind of thing that happens with the apparent luminance boundaries in phenomena such as the Kanizsa square. So, the firing of a cell that represents X is not caused by X. There is no X (no line of any of the preferred orientations in the visual field). This looks to be misrepresentation, which was (in the 1990s) thought to be important for making information into meaning. What about this?
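For concreteness, here is a hedged toy sketch of that scenario. The Gaussian tuning curve and all parameter values are illustrative assumptions, not a model of any actual physiology: a cell with zero feedforward drive from its preferred stimulus is pushed above threshold by lateral input alone, which is the firing-without-the-stimulus that the misrepresentation point trades on.

```python
import math

# Hedged toy sketch of an orientation-tuned cell driven above threshold
# by lateral connections alone. Parameters are illustrative only.

def tuning(orientation: float, preferred: float = 90.0, width: float = 15.0) -> float:
    """Gaussian tuning curve: feedforward response to a line at `orientation`."""
    return math.exp(-((orientation - preferred) ** 2) / (2 * width ** 2))

def cell_fires(stimulus_orientations, lateral_input: float, threshold: float = 0.5) -> bool:
    """Fires if feedforward drive plus lateral drive crosses threshold."""
    feedforward = sum(tuning(o) for o in stimulus_orientations)
    return feedforward + lateral_input > threshold

# Case 1: a 90-degree line in the receptive field -> fires (veridical).
print(cell_fires([90.0], lateral_input=0.0))   # True

# Case 2: no line anywhere within the tuning curve, but strong lateral
# drive from neighboring cells (as with Kanizsa-style illusory
# contours) -> fires anyway. This is the candidate misrepresentation.
print(cell_fires([], lateral_input=0.8))       # True
```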
Ken
In the interest of not simply asking questions, I might add this blurb from something I’ve written:
Why hypothesize that cognitive processes involve representations? Within cognitive psychology, the principal reason has long been that representations are posits that help explain how it is that cognitive agents are not merely stimulus-driven.[1] Rodney Brooks’s robot, Herbert, is a fine example of a stimulus-driven device.[2] It can roam the halls of MIT searching for aluminum cans, resetting its internal states every three seconds. What it does at any moment is determined by its fixed internal structure and what its sensors currently tell it. By contrast, normal humans appear to be able to record information that informs future actions. Humans remember names, dates, and places that influence their future behavior. Many non-human animals remember such things as where they have stored food caches and what traps look like. (A toy sketch of this contrast appears after the notes below.)
——-
1. Beer (2003) proposes that cognitive states, rather than representations, would suffice to explain how it is that cognitive agents are not merely stimulus-driven. We agree. In truth, we likely need to appeal to so-called “systematic” relations in cognition to motivate the hypothesis that cognition involves combinatorial properties (some of which we allude to below). Combinatorial properties, however, favor the hypothesis of representations, which can be combinatorial, over the hypothesis of mere cognitive states, which are non-combinatorial. See Aizawa (2003) for much more discussion of this kind of argumentation. A full explanation and defense of the need for (combinatorial) representations would take us too far from our principal concerns. There is, however, neuroscientific evidence for representations, to which we allude below.
2. Brooks (1999).
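Here is the promised toy sketch of the blurb’s contrast. It is not Brooks’s actual architecture, and every name in it is hypothetical: a purely reactive controller whose output is a fixed function of the current sensor reading, versus an agent that records information that informs its future actions.

```python
# Hypothetical sketch, not Herbert's real control system: the contrast
# between a stimulus-driven controller and one that stores information.

def reactive_controller(sensor_reading):
    """Stimulus-driven: output is a fixed function of the current input.

    Like Herbert as described above, it keeps no memory that outlasts
    the moment (the blurb notes it resets every three seconds).
    """
    return "grasp" if sensor_reading == "can" else "wander"

class RememberingAgent:
    """Records information that informs future actions."""

    def __init__(self):
        self.known_can_locations = []  # persisting record of can sightings

    def act(self, sensor_reading, location):
        if sensor_reading == "can":
            self.known_can_locations.append(location)  # stored for later
            return "grasp"
        if self.known_can_locations:
            # Behavior driven by a stored state, not the current stimulus.
            return "go to " + self.known_can_locations[-1]
        return "wander"

agent = RememberingAgent()
print(reactive_controller("empty hallway"))         # wander
print(agent.act("can", location="room 101"))        # grasp
print(agent.act("empty hallway", location="hall"))  # go to room 101
```

The design point the blurb is after: given the same sensor reading (“empty hallway”), the reactive controller must always do the same thing, while the remembering agent’s behavior depends on what it has recorded.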
As someone who’s been participating in the representation debate for the last few years, my experience is that there are three general types of motivation, and they tend to work together:
1.) The dynamic systems approach, which didn’t come from nowhere: it’s consistent with some approaches to neuroscience, and connectionism in particular. I for one think the dynamic systems objections to representations have been pretty thoroughly answered by Art Markman, Eric Dietrich, and William Bechtel (I’d be happy to give references, but you can probably find them on your own if you’re not already aware of them). Sometimes I can’t help feeling that the dynamic systems people, when confronted with the most difficult problem in the study of cognition (knowledge representation), decided to replace it with something equally obscure: dynamic systems models.
2.) Embodied cognition. The embodied cognition folks tend to have very muddled ideas about what cognition is, in my experience, and some of them argue for representations of various sorts (perceptual symbol systems, e.g.), while others have taken to rejecting representation altogether. It’s often hard to tell what the hell they’re saying, though.
3.) Phenomenological considerations. Dreyfus is, of course, the main name here. He’s written a couple of papers on the use of Merleau-Ponty and Husserl in cog sci, from an anti-representationalist perspective. Alva Noe has restricted his attack on representations primarily to vision, and I’m not sure what he says/would say about amodal representations.
I’ve always found Dreyfus’ arguments strange, because neither Husserl nor Merleau-Ponty seem to reject representations outright, as Dreyfus does, and Dreyfus’ anti-representation arguments tend to depend almost entirely on their work. M-P, for instance, is dealing primarily with a “pre-cognitive” stage, and when he talks about what comes “after” that, he refers to representations pretty frequently.
I’m surprised no one mentioned Gibson as another, and older, source of anti-representational arguments. There is a long philosophical tradition of objecting to the notion that all perception is mediated by representations (the “direct realist” school), which is logically distinct from claiming that *all cognition* (e.g., thinking) occurs without representations. I think many “anti-representationalists” are trying to focus on the former, and indeed would concede the latter is unlikely to be true (e.g., Brooks), yet many “anti-anti-representationalists” insist on conflating them.
What about Jerry Lettvin’s feature detector cells (1959); Konorski’s pontifical cells, or grandmother cells; Charles Gross’s inferotemporal cortex cells tuned to specific characteristics of the face; David Perrett’s STS cells “representing” aspects of intentionality in either the body (e.g., hands, torso, legs) or the face (e.g., eyes, mouth); and Kanwisher’s fusiform face area and other functional regions of interest (e.g., the place perception area and the body perception area)?
Are all of these cells, or assemblages of cells, “representing” the world or not?