Interpreting Neurocomputational Models

Post 2 of 5 from Mazviita Chirimuuta on The Brain Abstracted (Open Access: MIT Press).

My decision to foreground the question of interpretation in this project is loosely inspired by work in the philosophy of physics that takes the empirical success of, say, quantum field theory for granted but debates the further issue of what the theory can be said to tell us about the natural world. The dominant view within neuroscience and the philosophy of cognitive science is that where there are mature, predictively successful models describing a computation and accounting for brain data, there are grounds for saying that the brain actually performs those computations. I call this the literal interpretation. Chapter 4 of the book (Your Brain Is Like a Computer) makes the case for an alternative way of understanding the real-world import of neurocomputational models, which I call the analogical interpretation. The basis of this interpretation is the observation that computational modelling is a simplifying strategy.

In Chapter 1 I outline three broad simplifying strategies that have played a central role in the development of both the physical and the biological sciences: mathematisation, reduction, and the forming of analogies. Reduction seeks to explain the workings of a whole system through careful examination of the properties of its parts in isolation from the whole. Although many of the flagship successes of biology are based on reduction (e.g. the molecular revolution), the approach has its limitations: biological parts tend to behave differently in the context of the whole organ or organism, so decontextualised study of cells and tissues in vitro is only partly informative.

Within cognitive neuroscience, a reductionist approach is one that studies the physiology of individual cell types or circuits in the expectation that this will shed light on the neural basis of cognitive capacities such as vision. This was precisely the approach criticised by David Marr at the start of Vision, when he remarked that we would never be able to explain flight just by learning more and more about feathers. He was a proponent of the computational approach to vision, and computationalism needs to be understood as an alternative simplifying strategy. Building a computational model of a target system is obviously a form of mathematisation. Mathematisation simplifies its target because it requires the selection of a subset of the target's quantitative properties to be represented mathematically, abstracting away from all qualitative properties and depicting only certain relationships between the quantitative ones. Mathematical representations of moving projectiles in Newtonian physics are a canonical example.
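
To make the point about selective representation concrete, here is a minimal sketch (my illustration, not an example from the book) of what such a mathematisation keeps and what it leaves out: the projectile is reduced to an initial speed, a launch angle, and a gravitational constant, and every qualitative property of the thrown object is ignored.

```python
import math

# Minimal sketch (illustrative, not from the book): mathematisation keeps a
# handful of quantitative properties -- initial speed, launch angle, gravity --
# and abstracts away everything qualitative about the projectile.
def projectile_position(v0, angle_deg, t, g=9.81):
    """Idealised (x, y) position of a projectile at time t, no air resistance."""
    theta = math.radians(angle_deg)
    x = v0 * math.cos(theta) * t
    y = v0 * math.sin(theta) * t - 0.5 * g * t ** 2
    return x, y

# A stone, a ball, or a cannon shell launched at 20 m/s and 45 degrees are,
# for this representation, exactly the same object:
print(projectile_position(20.0, 45.0, t=1.0))
```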

In the chapter (pp. 93-96) I describe how the physicist-turned-biologist Nicolas Rashevsky attempted a straightforward mathematisation of neurophysiological processes such as the action potential in his work of the 1930s, but this approach was soon overtaken by a hybrid of mathematisation with the strategy of analogy formation, made possible by the invention of digital computers. Borrowing from Mary Hesse's account of the role of analogies in science, I discuss how drawing an analogy between a complex target of investigation and a relatively well understood system, such as a man-made device, enables the scientist to attend selectively to the properties and relationships that the two things have in common, while licensing an abstraction away from their unshared properties. This, I argue, is the role served by computer models of the brain. It is not only that brains, like everything else in the natural world, can be subjected to mathematisation by representing them with computational models; since McCulloch and Pitts (1943), neuroscientists have held that the brain is like a computer, for it can be said to take inputs, subject them to mathematical operations, and produce outputs.
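
To give a concrete sense of that input-operation-output picture, here is a minimal sketch of a McCulloch-Pitts style threshold unit (an illustration of the kind of model they proposed, not code from the book or from their paper):

```python
# Minimal sketch of a McCulloch-Pitts style unit (illustrative only): binary
# inputs are weighted, summed, and compared with a threshold, yielding a
# binary output -- the sense in which a neuron can be said to take inputs,
# perform a mathematical operation on them, and produce an output.
def mcculloch_pitts_unit(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: with these weights and threshold the unit behaves like logical AND.
print(mcculloch_pitts_unit([1, 1], [1, 1], threshold=2))  # 1
print(mcculloch_pitts_unit([1, 0], [1, 1], threshold=2))  # 0
```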

The literal interpretation of neurocomputational models asserts that the analogy between brain and computer has some metaphysical weight: the brain literally performs the computations represented in the best neurocomputational models. This is how David Marr interpreted his models of retinal ganglion cells: the Laplacian of Gaussian was said to be the mathematical function that the neurons themselves computed, contributing to the capacity for edge detection in the early visual system.
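
For readers unfamiliar with the model, the operation can be sketched in a few lines (assuming NumPy and SciPy; this illustrates the Laplacian-of-Gaussian filter in general, not Marr's own implementation): convolving an image with a Laplacian of Gaussian yields a response that is confined to the neighbourhood of intensity edges, changing sign as it crosses them.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Illustrative sketch (assumes NumPy/SciPy; not Marr's own code): the
# Laplacian of Gaussian responds only where image intensity changes sharply,
# and its sign change (zero-crossing) marks the edge location -- the
# operation Marr attributed to retinal ganglion cells.
image = np.zeros((64, 64))
image[:, 32:] = 1.0                      # a vertical step edge at column 32

response = gaussian_laplace(image, sigma=2.0)

row = response[32]                       # one horizontal slice through the image
active = np.flatnonzero(np.abs(row) > 1e-6 * np.abs(row).max())
print(active.min(), active.max())        # response is confined to a band around column 32
print(row[29:35].round(3))               # positive then negative: a zero-crossing at the edge
```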

I argue for a more cautious interpretative stance, emphasising that neurocomputational modelling rests on an analogy between brains and computers that is at best partial. We should not ignore the differences. For one thing, the clean division between inputs, outputs, and the computation in between is not present in the nervous system itself, although the “classical sandwich model” of the computational theory of cognition, as Susan Hurley called it, transfers that template from the machine, where it is apt, onto the biological system. In the next post I discuss the philosophical implications of the analogical interpretation: why it suggests we should be cautious about the prospects for consciousness in non-living machines.
