The received view in the philosophy of cognitive science is that cognition is (largely explained by) a kind of digital computation, and computation requires representation. Therefore, if neurocognitive systems are computational, they manipulate digital representations of some sort. According to this received view, representations are unobservable entities posited by successful psychological theories of cognitive capacities. That representations exist and explain cognition is a conclusion reached by inference to the best explanation. Some philosophers of an autonomist bent have gone so far as to conclude that representations and computations are proprietary to psychological explanation and do not belong in neuroscientific explanation. While I agree that neurocognitive processes are computations that manipulate representations, I argue against the other tenets of this received view.
First, I argue that physical computation does not require representation. Computation is well-defined without representation and can occur whether or not representation is involved.
Second, as Daniel Kraemer first pointed out to me, the notion of representation was first introduced as an explanatory posit by neuroscientists in the 19th century—not by cognitive scientists in the 1950s as many believe.
Third, neuroscientists did initially posit representations as unobserved entities that explain cognition, at a time when neuroscience was in its relative infancy. Since the 19th century, however, neuroscientists have developed myriad techniques to confirm that neural representations exist, to observe and measure representations, and to manipulate representations in the lab. Therefore, neural representations are observable and manipulable entities, and a fortiori they are real. (My initial argument for this conclusion was originally developed in collaboration with experimental neuroscientist Eric Thomson.)
Fourth, neural representations are structural representations—that is, they are systems of internal states that covary with external targets, have a causal connections with their targets, can be tokened in the absence of their targets, and can guide behavior.
Fifth, nothing in the definition of structural representations requires them to be digital, so structural representations as well as computations defined over structural representations need not be digital.
Sixth, as a matter of empirical fact, neural representations and the computations defined over them are not digital, at least in the general case. The main reason is that neural computations are defined either over the frequency or over the (relatively exact) timing of neuronal spikes, which are not features of digital signals. (There may be cases in which neural representations and computations are digital or approximately digital; whether there are such cases is an interesting empirical question.)
Eighth, in an important technical sense, neural representations and the computations defined over them are not analog either, because—unlike analog signals—neural signals consist primarily of (mostly) all-or-none spikes.
(Side note: Corey Maley has an interesting research program, independent of our joint work, where he argues that neurocognitive systems are analog computing systems. He and I don’t agree on every detail of the story but, given the way he defines analog computation, I agree with his main conclusion.)
Therefore, neural computation is sui generis. (My initial argument for this conclusion was originally developed in collaboration with biophysicist Sonya Bahar.)
An important consequence of this argument is that cognitive scientists cannot just posit whatever computations and representations seem most explanatory without worrying about neurocognitive mechanisms. They need to take into account what is known about neural computation and representation. Psychology and neuroscience mutually constrain one another and must be integrated to provide multilevel mechanistic explanations that involve neural computations over neural representations.
You agree that functional analyses are actually sketches of neural mechanisms, and so are neither distinct from nor autonomous of neural mechanistic explanations. This seems to entail a negation of Multiple Realizability… because if the neural mechanisms of different kinds of organisms (humans, octopuses, aliens, robots) vary, functional explanations will vary as well. I am fine with that.
But what is the nature or ontology of representation – is it linguistic/abstract-propositional in form, or imagistic/pictorial/sensory-motor-affective? Where is this representation in the brain – in specific regions, or scattered? And what is the nature of the so-called computational process — is it just a metaphorical way of expressing formally/mathematically something happening biologically or electro-chemically in the brain, or is it something else when it is called “digital” computation…? What is digital in computation – yes (1) or no (0) firings? And if so, are there no strong firings or weak firings in the neurons…?
Thanks for your questions.
On multiple realizability, I argue that cognitive states are multiply realizable. What that means is that the same higher-level state is an aspect of different lower-level realizing mechanisms.
In the book I don’t get into whether representations are linguistic or imagistic. I’m not sure how helpful those analogies are. Surely there are partial isomorphisms between neural sensory representations and external stimuli, whether visual, auditory, etc.
Representations are usually “scattered” through many neurons (distributed). There is a lot more to say about this but I’m not the best person to say it.
Neural computations are transformations of neural signals (mostly representational) in which the rules followed by the system depend on multiply realizable aspects of the signals, such as frequencies and timing of the signals (and the information they carry, which is also multiply realizable).
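As an illustration only (a toy rate model of my own, not the author's), one can sketch a rule that operates on firing frequencies rather than on digits:

```python
import math

# Toy sketch (my illustration): a "computation" whose rule is defined over
# spike frequencies. Two input rates (Hz) are combined and passed through a
# saturating nonlinearity, yielding an output rate. The rule operates on
# continuous, multiply realizable aspects of the signals, not on digits.
def output_rate(rate_a, rate_b, gain=0.8, max_rate=100.0):
    combined = gain * (rate_a + rate_b)
    return max_rate * math.tanh(combined / max_rate)  # saturates, like real neurons

r = output_rate(20.0, 30.0)  # inputs and output are continuous rates (Hz)
```

The gain and saturation values here are arbitrary; the point is only that the transformation rule is stated over frequencies, which is neither a digital nor a classically analog scheme.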
As my post says, I argue that, at least in the general case, neural computation is not digital. It is sui generis.
What are neural processes expected to be when they are called “digital”? I understand you don’t believe them to be digital.
Why impose the terminology of computer science on what is a biological neural phenomenon?
So in your view, neural ‘processes’ (I avoid calling them ‘computations’ — is this okay with you…?) are neither digital nor analog, but sui generis…
But it doesn’t answer what THEY ARE.
What is their ONTOLOGY?
If neural computation is a transformation of neural signals (depending on their firing frequencies, etc.), then what are they being transformed into? If one is angry or afraid, what transformations in neural signals are happening, and how do they differ between these two mental states?
You claim that representations are real structures. But you give no explanation of the nature of representation. This is not a trivial question and can’t be left unanswered.
You want to argue that computations are possible without representations. How is that possible, especially when you also claim that representations are real…?
Thanks for your follow-up questions. I cannot do justice to them without copying and pasting chapters of my book here. These blog posts are brief summaries; the details are in the book, so you might want to check it out. Briefly, there is an important analogy between neural processes and what computers do; that’s why we call them both computations. There are mathematical theories of neural computations that describe them in detail; such theories are different from the theories of digital and analog computations. Computation is well defined without representation, but typical neural computations operate on representations.