Today, an email exchange with Michael Rescorla made me wonder about the received view among philosophers of cognitive science.
In the 1980s, the received view used to be something like the language of thought hypothesis, or at least some symbols-and-rules framework of which the language of thought hypothesis was the most prominent representative.
Since the 1990s, it seems to me that most philosophers of cognitive science are more ready to assume some variant of connectionism. So now connectionism is the received view.
Does this seem right?
Caveat 1: Much could be said about what “connectionism” means, or doesn’t mean, or should mean, but I’ll leave that aside for today.
Caveat 2: Undoubtedly, even today many philosophers remain sympathetic to symbols-and-rules, while many others reject computations, or representations, or both, altogether.
I’d disagree that the received view is connectionism, but to explain this I’m going to have to ignore caveat number 1 a bit. When I think of connectionism I think of late-’80s “wow, look at what you can do with a massively connected feedforward network”-style connectionism, in which there were supposed to be either no representations or only massively distributed hidden-layer representations. Nowadays when I look at the work of people who care about neural networks (either real or artificial) in philosophy–people like me, Rick Grush, Chris Eliasmith, Michael Anderson, to name just a few–the models appealed to don’t have the same massively distributed holistic character.
And that’s just the friends of neurons I’m talking about. There are still plenty of people in phil of cog sci who don’t see what the big hoopla about neural networks is supposed to be about. Connectionism aside, I don’t even see that there’s a received view with a particularly brainy flavor, but maybe I’ve been on the east coast too long.
Huge complicated topic. Bear with me as I start out with realistic neural models.
In neuroscience, the received view was supplied by Hodgkin and Huxley (HH) in 1952. They have provided the quantitative framework in which neuroscientists think about how brains work, and when we talk about “biologically realistic” models this is what we are referring to. If you ask a group of computational neuroscientists what the greatest success story of their field is, probably 9.6/10 will say the HH model of the neuron (I could go on about this, but I assume everyone is familiar with these models). The working assumption of systems neuroscience is that we will ultimately explain behavior in terms of such models (and of course we can include the world, muscles, body or whatever for all the externalists out there, but they will be glued together by neurons).
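Since the HH model is carrying so much weight in this discussion, here is a minimal sketch of what it looks like as code: a single-compartment HH neuron with the standard 1952 squid-axon parameters, integrated by forward Euler. The step size, stimulus current, and spike-counting threshold are my own illustrative choices, not anything from the original paper:

```python
import math

# Standard Hodgkin-Huxley (1952) squid-axon parameters
# (units: mV, ms, uF/cm^2, mS/cm^2, uA/cm^2).
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent gating rate functions for the m, h, and n gates.
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_ext, t_max=50.0, dt=0.01):
    """Forward-Euler integration of one HH neuron; returns the voltage trace."""
    v = -65.0
    # Start the gates at their steady-state values for the resting potential.
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    trace = [v]
    for _ in range(int(t_max / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)   # sodium current
        i_k = G_K * n**4 * (v - E_K)          # potassium current
        i_l = G_L * (v - E_L)                 # leak current
        dv = (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        v += dt * dv
        trace.append(v)
    return trace

# A sustained 10 uA/cm^2 current drives repetitive spiking; count
# upward zero-crossings of the membrane potential as spikes.
trace = simulate(i_ext=10.0)
spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 0.0 <= b)
```

The point for present purposes is just that this is the grain of description “biologically realistic” refers to: currents, conductances, and gating kinetics, with behavior to be built up from networks of such units.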
Moving up in the hierarchy of abstraction, we have the old-school connectionist type models, which have been of almost no use in neuroscience, but are useful speculative psychological models (as they stressed in the 1986 volumes).
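For concreteness, here is a toy at that level of abstraction: layers of sigmoid units, each summing its weighted inputs, in the 1986 PDP style. The weights below are hand-chosen (not learned) so the network computes XOR; the point is just that no single unit stands for the answer — the work is carried by the pattern of activity across the hidden layer:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer of sigmoid units: each unit's output is a squashed
    weighted sum of its inputs plus a bias."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(x1, x2):
    # Hidden layer: one unit roughly detects "at least one input on",
    # the other roughly detects "both inputs on".
    hidden = layer([x1, x2], weights=[[20, 20], [20, 20]], biases=[-10, -30])
    # The output unit reads the hidden pattern, not either input directly.
    (out,) = layer(hidden, weights=[[20, -20]], biases=[-10])
    return out

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(xor_net(a, b)))
```

In a trained network of this kind the hidden-layer weights would be whatever gradient descent found, which is what gives these models their massively distributed, hard-to-interpret character.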
These two levels can be connected in principled ways. For instance, Bard Ermentrout proved that a network of HH neurons reduces to connectionist nets when certain assumptions are made about the HH networks (1994: Reduction of conductance-based models with slow synapses to neural nets).
At a level even higher, we have the rules and representations (RR) folks, who are also creating speculative psychological models. I think Smolensky has made the most serious attempt to connect this RR-level with the connectionist level. His recent two-volume book The Harmonic Mind goes over this in great detail. What impresses me about that treatise is that he really takes seriously the theories of modern (Chomskian) linguistics and connectionism, and tries to show how RRs can emerge from his models. (He is, however, clear that linguistics does not smoothly reduce to ANN theory.)
Getting back to the question, it is pretty clear there are (at least) two paradigms. Connectionism survived its rebirth, and there is a still-growing set of modellers working within its framework. The RR view certainly hasn’t died, but Fodor’s old argument in The Language of Thought relied explicitly on the premise that there is no alternative to the RR view, and that premise is no longer true. We have at least one other view.
My impression is that people just got sick of hearing abstract and inconclusive arguments from the two sides (the Fodor/Churchland show) and don’t quite know what to make of it. I think this is a good thing: we don’t know what to make of it because we don’t have enough data to know which side (or sides) to believe. In time, we’ll see how much of each of the three above levels of abstraction will be required to explain human behavior. The insights won’t come from philosophy as much as dirty, difficult, and decades-long empirical work.
Have I got this right that the principal reasons people have had for rejecting representations are the arguments from dynamical systems theory and mobile robotics? Dynamical systems models don’t have to posit representations, and neither do mobile robots.
But, why not say that if they don’t have representations, they aren’t thinking either? In what sense are non-representational beings thinking?
And what do they have to say about single-cell recordings that find individual neurons sensitive to things like oriented lines or actions (mirror neurons)? Don’t those neurons represent?
Daniel and Edouard
I would agree that co-variance is not sufficient for representation. (What more is needed was a central issue in the Dretske and Fodor sort of naturalized semantics of the 1990s.) So, there are two questions, let’s say.
A) Are the neurons (or their firings) representations?
B) Are the neurons (or their firings) representations in virtue of their co-variation with the environment?
We all agree that the answer to the B) question is no.
What about the A) question? I take it that anti-representationalists are committed to saying no to this one. Ok. Why?
Suppose we had the following kind of data. Suppose that, say, an oriented line detector cell fires even when there is no oriented line of its preferred orientation present in its visual field. That is, there is no oriented line that falls anywhere within the tuning curve of that cell. But, this cell fires because of horizontal connections it has to other cells that are stimulated in adjacent regions. Maybe this is the kind of thing that happens with the apparent luminance boundaries in phenomena such as the Kanizsa square. So, a given cell that represents X fires, but its firing is not caused by X. There is no X (no line of any of the preferred orientations in the visual field). This looks to be misrepresentation, which was (in the 1990s) thought to be important for making information into meaning. What about this?
Ken
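The scenario Ken describes can be put in schematic form. Below is a bare-bones sketch (every number is invented for illustration): a cell receives feedforward drive through a Gaussian tuning curve over line orientation plus drive from lateral connections, so it can fire above threshold even when no line anywhere within its tuning curve is present:

```python
import math

def tuning_response(orientation, preferred=90.0, width=15.0):
    """Feedforward drive: a Gaussian tuning curve over line orientation.

    `orientation` is None when no line falls in the cell's receptive field.
    """
    if orientation is None:
        return 0.0
    d = orientation - preferred
    return math.exp(-d * d / (2.0 * width * width))

def total_drive(orientation, lateral_input=0.0):
    """Feedforward tuning response plus input from horizontal connections."""
    return tuning_response(orientation) + lateral_input

THRESHOLD = 0.5  # invented firing threshold

# Veridical case: a line at the preferred orientation, no lateral drive.
veridical = total_drive(90.0) > THRESHOLD
# Illusory-contour case: no line at all, but strong lateral drive from
# neighboring cells responding to the inducers.
illusory = total_drive(None, lateral_input=0.8) > THRESHOLD
```

On this toy picture the cell fires in both cases, so its firing co-varies only imperfectly with the presence of a line in its tuning curve — which is exactly the gap the misrepresentation question lives in.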
In the interests of not simply asking questions, I might add this blurb from something I’ve written:
Why hypothesize that cognitive processes involve representations? Within cognitive psychology, the principal reason has long been that representations are posits that help explain how it is that cognitive agents are not merely stimulus driven.[1] Rodney Brooks’s robot, Herbert, is a fine example of a stimulus-driven device.[2] It can roam the halls of MIT searching for aluminum cans, resetting its internal states every three seconds. What it does at any moment is determined by its fixed internal structure and what its sensors currently tell it. By contrast, normal humans appear to be able to record information that informs future actions. Humans remember names, dates, and places that influence their future behavior. Many non-human animals remember such things as where they have stored food caches and what traps look like.
[1] Beer (2003) proposes that cognitive states, rather than representations, would suffice to explain how it is that cognitive agents are not merely stimulus driven. We agree. In truth, we likely need to appeal to so-called “systematic” relations in cognition to motivate the hypothesis that cognition involves combinatorial properties. (Some of which we allude to below.) Combinatorial properties, however, favor the hypothesis of representations, which can be combinatorial, over the hypothesis of mere cognitive states, which are non-combinatorial. See Aizawa (2003) for much more discussion of this kind of argumentation. A full explanation and defense of the need for (combinatorial) representations would take us too far from our principal concerns. There is, however, neuroscientific evidence for representations to which we allude below.
[2] Brooks (1999).
Thank you for sharing a nice article. To me, cognitive science is rigorous philosophy. I have also done extensive work in cognitive science, in both academic and industry research; I’ve been exposed to a lot of different things and have a substantial knowledge base.