C. R. Gallistel and A. P. King, Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience, Wiley-Blackwell, 2009.
This is a rich and thought-provoking book. I cannot do it justice in a brief post so I apologize in advance for that.
Roughly speaking, the book argues that (1) many cognitive functions (such as dead reckoning of the type observed in ants and bees) require a read/write memory, and (2) neuroscience/connectionism has no room for such a read/write memory, and (3) such a read/write memory is probably stored in molecules such as RNA (as opposed to the connection strengths between neurons that are usually taken to be the memory mechanism by neuroscientists), and therefore (4) neuroscientists ought to embrace a digital computationalist theory of cognition and look for the right kind of memory mechanism in the nervous system.
Some of the thoughts the book provokes are healthy and invigorating. It challenges theorists of cognition to think hard about (i) which computing mechanisms are required for certain cognitive functions and (ii) how they can be realized in the nervous system. The book points out that there is a lot that we don’t understand about how the brain fulfills various cognitive functions, and this is true. The book points out that some connectionists and neuroscientists are somewhat narrow-minded about the kinds of processes and mechanisms they accept as possible explanations of cognitive functions and might benefit from thinking outside the box. The book also includes a fairly clear introduction to foundational topics such as Shannon’s information theory, computation, and representation. This is all good.
But there are also plenty of problems. For one thing, the main argument of the book is unsound.
I agree with (1) (though I don’t buy Gallistel and King’s stronger claim that the read/write memory must be of a digital type like that of digital computers).
But (2) is just wrong. Strangely enough, Gallistel and King themselves point out that a read/write memory might be realized either as “reverberating” neural activity (working memory) or in the connection strengths between neurons in a network (long-term memory, cf. p. 285). But while they reluctantly admit that you can build a read/write memory out of accepted neural mechanisms, they offer some (unconvincing) reasons why that’s not good enough, and then proceed as if those possibilities don’t count and their argument goes through anyway.
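To make the point concrete, here is a minimal sketch (my own illustration, not from the book) of how a “reverberating” recurrent unit of the kind Gallistel and King themselves mention can function as a one-bit read/write memory: a self-exciting threshold neuron whose sustained activity stores the bit.

```python
def step(activity, self_weight=1.0, write_input=0.0, threshold=0.5):
    """One update of a threshold unit with a recurrent self-connection.

    The unit fires (output 1.0) iff its recurrent input plus any external
    "write" input reaches threshold. Sustained firing is the stored bit.
    """
    net = self_weight * activity + write_input
    return 1.0 if net >= threshold else 0.0

# Write a 1 by pulsing the external input once...
a = 0.0
a = step(a, write_input=1.0)   # write pulse

# ...then the reverberating self-connection holds it with no further input.
for _ in range(10):
    a = step(a)                # activity sustains itself: the memory holds
print(a)                       # 1.0 — the bit can be "read" from ongoing activity

# Erase (write a 0) with an inhibitory pulse.
a = step(a, write_input=-1.0)
print(a)                       # 0.0
```

This is of course a toy, and whether such mechanisms scale to the cognitive functions Gallistel and King discuss is exactly what is at issue; but it shows that “read,” “write,” and “store” are not alien to accepted neural mechanisms.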
Gallistel and King give no empirical evidence for (3). What’s worse is that they show no awareness that (3) is an old theory that was experimentally tested in the 1950s or so, and was eventually abandoned for lack of evidence. Before trying to resurrect a molecular theory of memory, Gallistel and King should at least discuss the available evidence.
As to (4), again it’s something that’s been tried before: computer scientists determining that the brain must work like a digital computer and telling neuroscientists to look for the digital computing mechanisms that _must_ be there. The oldest example that I know of is an old lecture by Allen Newell to an audience of neuroscientists, in which Newell asked “where in the nervous system I can find symbols” (Gerard and Duyff 1963, p. 343). (Newell is one of the founders of classical, or “symbolic,” AI. Needless to say, neuroscientists have not found the kind of symbols that Newell was asking about.) While Newell’s hubris might have been understandable in the early ’60s, it’s a lot less understandable after 50 more years of detailed investigations of the nervous system.
By way of supporting (4), Gallistel and King draw an analogy with genes and DNA. Before Watson and Crick figured out the structure of DNA, no one knew how genes are physically realized, but that was not a good reason to reject genes. By the same token, a molecular read/write memory may well be needed for cognition, and should not be dismissed just because we don’t know how it’s physically realized. But the analogy breaks down at a crucial place. Before Watson and Crick (and collaborators) figured out how to investigate the structure of DNA, its structure was beyond the observational power of available experimental techniques. Not so for the nervous system. The basic structure of the nervous system was figured out a century ago, and more and more details about its workings have been progressively figured out. Before Watson and Crick, genes could be treated like black boxes because no one knew how to study their molecular structure. But the suggestion that today’s nervous systems should be treated like black boxes simply doesn’t hold up.
Neuroscience has produced a huge amount of empirical evidence over the last several decades. Any serious attempt at a theory of cognition must take this evidence seriously. Unfortunately, Gallistel and King do not. They dismiss neuroscience as associationist, antirepresentationalist, and unable to provide mechanisms with the right amount of computational power. Much of Gallistel and King’s argument attacks a straw man by calling him names. In fact, most cognitive neuroscientists are representationalists, and it is not at all clear that the mechanisms countenanced by neuroscience are insufficiently powerful to explain the cognitive functions discussed in this book. And even if they are insufficient, it is extremely unlikely that the right mechanisms will look anything like those suggested by Gallistel and King.
A final (minor) example of the authors’ cavalier attitude. Gallistel and King cite one of my papers (among others) in support of the claim that “The neural network architecture lacks [an addressable read/write memory] because neuroscientists have yet to discover a plausible basis for such a mechanism. That lack makes the problem of variable binding an unsolved problem in neural network computation” (p. 153). But if anything, my paper entails the opposite. For as I point out in it, digital computers (which have an addressable read/write memory and can do variable binding just fine) are just one very specialized kind of neural network. (Gallistel and King often draw a contrast between digital computers and neural networks, but they never define “neural network.”)
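The underlying observation can be illustrated in a few lines (my own illustration, assuming nothing beyond the classic McCulloch-Pitts result): threshold neurons can implement Boolean logic gates, and since NAND is functionally complete, networks of such units suffice to build any digital circuit. In that sense, a digital computer is a very specialized neural network.

```python
def mp_unit(inputs, weights, threshold):
    """A McCulloch-Pitts threshold neuron: fires iff the weighted sum
    of its inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def nand(x, y):
    # NAND from a single threshold unit. NAND is functionally complete,
    # so networks of these units can realize any Boolean function.
    return mp_unit([x, y], weights=[-1, -1], threshold=-1)

for pair, expected in zip([(0, 0), (0, 1), (1, 0), (1, 1)], [1, 1, 1, 0]):
    assert nand(*pair) == expected
```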
In conclusion, this is a stimulating book, which in many ways goes in the right direction: that of developing a detailed mechanistic theory of cognition. But the way forward will require handling neuroscientific evidence and discussing issues of computational architecture more carefully and rigorously than Gallistel and King have done. Of course, working out in greater detail how Gallistel and King’s arguments break down would be a very valuable exercise. I’ll leave that to Alex Morgan, ok Alex?