Reinterpreting Neurocomputational Models

Commentary from Mark Sprevak on today’s post from Mazviita Chirimuuta on The Brain Abstracted (MIT Press).

I very much agree with the book’s main claim that abstraction and simplification are essential theoretical virtues in the cognitive and brain sciences. Often a simple model – one good enough for our specific (often local) purposes in science – is preferable to a model that attempts to map out every detail of a target domain. Chapter 4 explores how this general objective – abstraction and simplification – is pursued via the specific strategy of providing a computational model of the brain. The chapter argues that computational models are one, particularly helpful, way to simplify the complexity of brain function. This idea seems fundamentally correct to me and it has had an important influence on the way I think about computation. I have learned a great deal about it from the book.

Chapter 4 connects the idea that computation is a means of abstraction/simplification to a second, separate idea: that computational models provide a mere analogy for brain/cognitive function. This second idea is something that I would like to resist. I wish to argue that one can perfectly well accept the first idea (computations often idealise/simplify/abstract) but reject the second (computations are mere analogies). Put more carefully, I wish to show that there need not be any tension between a computational model functioning as an idealisation/simplification/abstraction and the claim that the target system is literally computing. Indeed, these two claims often – and in my view, almost invariably – go together.

Let us look at two worked examples to see this point. So as not to put a thumb on the scale, the examples are deliberately chosen from outside the contested domain of neuroscience.

First, consider an electronic pocket calculator. A pocket calculator is a simple, paradigmatic case of a physical system that computes. It is hard to imagine a physical device that more clearly can be said to literally compute its output. However, the computational formalisms we normally ascribe to it are idealisations/simplifications/abstractions. We might, for example, ascribe to it computation of the addition function (as Marr does in his classic cash-register example). However, this computational model, while useful and often epistemically relevant, is strictly speaking false – it is an idealisation or simplification. The addition function maps any pair of numbers x, y to x + y, whereas an electronic calculator maps only a small finite set of pairs of numbers (those that can be expressed with a certain number of digits) to their sum. We idealise/simplify the actual behaviour of the calculator when we ascribe to it computation of the addition function.
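The gap is easy to make concrete. The sketch below is mine, not Marr's, and the 8-digit limit is an arbitrary stand-in for whatever bound a real device has. It contrasts the ideal addition function, defined on all pairs of numbers, with the partial, finite function a physical calculator actually implements:

```c
#include <stdio.h>

/* The ideal addition function is total: it maps ANY pair of numbers to
   their sum. A physical calculator implements something narrower: a
   partial function defined only on operands it can represent. The
   8-digit limit below is an arbitrary stand-in for a real device's
   display bound. */
#define MAX_OPERAND 99999999L

/* Returns 0 and writes the sum on success; returns -1 (the "E" on a
   real display) when an operand or the result cannot be represented. */
int calc_add(long x, long y, long *sum) {
    if (x < 0 || y < 0 || x > MAX_OPERAND || y > MAX_OPERAND)
        return -1;               /* input outside the representable range */
    long s = x + y;
    if (s > MAX_OPERAND)
        return -1;               /* overflow: the ideal function is defined
                                    here, but the device's function is not */
    *sum = s;
    return 0;
}

int main(void) {
    long s;
    if (calc_add(2, 3, &s) == 0)
        printf("2 + 3 = %ld\n", s);          /* matches the ideal function */
    if (calc_add(99999999L, 1, &s) != 0)
        printf("99999999 + 1 -> E\n");       /* the idealisation breaks down */
    return 0;
}
```

The ideal function is total; the device's function is partial and finite. Ascribing the former to the latter is exactly the kind of useful idealisation at issue.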

A second point to note about this example is that when we ascribe a computational formalism to a calculator, we do not aim to pick out an inherent property of the physical system. We do not describe a pattern that holds independently of the calculator's environment or the conditions under which it is used. A computational description may not apply if the calculator were placed on the surface of the Sun, inside a vat of sea water, or inside an active MRI scanner. Only under certain conditions – typically carefully controlled, and intersecting with typical human endeavours in which we use the calculator in specific ways – is the device described by a specific computational formalism.

Let us move on to the second example. It is common for computer scientists and software engineers to ascribe human-readable algorithms to electronic computers. Such algorithms may take the form of a set of steps in a high-level programming language, logical expressions, loops, sets of conditionals, or iterative relationships between variables. The exact form that a human-readable algorithm takes does not matter for our purposes. What matters is that ascribing such formalisms is normally also an idealisation or simplification. Strictly speaking, electronic computers do not run these ascribed human-readable algorithms. What they run instead is the output of that human-readable code after it has been fed through an optimizing compiler. (Actually, what they run is even more complex, but this suffices to make the point.) 'Optimization' here is a wide-reaching term that includes wholesale reorganisation and rewriting of the source code in ways that are complex, unpredictable, and targeted to exploit idiosyncrasies of the underlying hardware (including its support and energy-management structures). Changes likely to be introduced by an optimizing compiler include rewriting of conditional logic, reordering of steps, unrolling of loops, replacement of computation with look-up tables, parallelisation, and the introduction of new operations that speculatively perform work which might or might not prove useful later. The results are frequently unfathomable: effectively incomprehensible without prior knowledge of the human coder's intent.
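A minimal illustration of that gap (my own sketch, not an example from the chapter): the 'algorithm' we would naturally ascribe to the first function below is a loop, yet an optimizing compiler is free to discard the loop entirely.

```c
#include <stdio.h>

/* What the programmer writes: a human-readable loop summing 1..n. */
int sum_to(int n) {
    int total = 0;
    for (int i = 1; i <= n; i++)
        total += i;
    return total;
}

/* What an optimizing compiler may effectively run instead: no loop at
   all, just a closed-form expression. (Clang at -O2, for instance, is
   known to rewrite loops like the one above in exactly this way.) */
int sum_to_as_compiled(int n) {
    if (n <= 0)
        return 0;
    return n * (n + 1) / 2;   /* the ascribed step-by-step procedure is gone */
}

int main(void) {
    /* Same input-output behaviour, very different internal steps. */
    printf("%d %d\n", sum_to(100), sum_to_as_compiled(100));   /* 5050 5050 */
    return 0;
}
```

The executable computes the same input-output function, but by steps that bear little resemblance to the source code. We nonetheless continue to describe the machine as running the loop.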

Despite the gap between what the machine actually does and the computational formalism we commonly ascribe to it, we persist in ascribing simple human-readable computational formalisms to the target system. We do not treat the human-readable computational formalism as a 'mere analogy' for the messier, more complex set of steps that actually runs on the physical system. We treat the human-readable formalism as a literal description at the right level of idealisation/simplification/abstraction.

The purpose of these two examples – the calculator and the optimizing compiler – is to illustrate that being the subject of an idealised computational model and literally computing are common bedfellows. Even for the most central, paradigmatic cases of literal computation – and what literally computes if not an electronic calculator or an electronic computer? – one might, and one commonly does, offer a computational model that aims to idealise, simplify, and abstract. There should therefore be no surprise at seeing this pattern replayed in another domain in which computational formalisms are used to model a physical system. There should be nothing unusual, for example, in seeing it in a computational model of brain function. In other words, evidence that computational models of neural function are idealisations/simplifications/abstractions is to be expected. It is in no way evidence that brains do not literally compute.

4 Comments

  1. Mazviita

    Nice comparisons to consider here. Mark points out that computational formalisms are normally idealised representations of physical systems, even in the most paradigmatic cases of computing machines. By itself, the fact that the computational formalism describes a physical system in an idealised way does not imply that the system is not literally computing. I agree, but the case for the analogical interpretation does not rest on this point alone. When I discuss analogies in science I want us always to consider disanalogies, and there is a significant one here. We know that computing machines literally compute because they are artefacts made by people such that their causal structure will correspond closely enough to the idealised computational formalism for those devices to usefully capture the input-output relationships described in the formalism. With a biological system like the brain, what makes us think that evolution has operated like a computer engineer?

    I argue that scientists employing the computational framework have relied on an analogy between a computer's inputs and either sensory systems as a whole or neuronal dendrites, and between its outputs and motor systems or neuronal axons, because certain assumptions of the computational framework (medium independence, separation of levels) licensed cognitive neuroscientists in abstracting away from the biological details of neural tissue. So there is the simplification that Mark discusses – characteristic of all computational formalisms vis-à-vis physical systems – and the one that I am saying is particularly significant in the research strategy of cognitive neuroscience: that of treating a biological organ as if it were a man-made device in order to filter out biological details. Recognition that this is a simplification, I believe, gives strong evidential weight to the analogical interpretation of neurocomputational models.

  2. Mark

    Good points, thanks Mazviita!

    “computing machines literally compute because they are artefacts made by people such that their causal structure will correspond closely enough to the idealised computational formalism for those devices to usefully capture the input-output relationships described in the formalism. With a biological system like the brain, what makes us think that evolution has operated like a computer engineer?”

    This is a good worry to press. How about someone who responds by asking us to consider the computational problems faced by the brain (or any intelligent organism)? Challenges stemming from the basic need to survive, maintain integrity, reproduce, etc. give rise to a host of relatively well-defined and relatively well-explored computational problems. Why not say that, in order for a natural system to succeed, survive, etc., there will be evolutionary pressures that produce structures (like the brain) tuned to solve those problems – in other words, to literally compute?

    In this way, the case might seem similar to what is going on in the human engineering case (e.g. when we build a machine to solve a computational problem). It is just that in this case there is no intentional agent doing the designing and testing of the system – instead, evolutionary pressures are tuning the (complex, messy) hardware to compute.

  3. Mohan Matthen

    Mark writes that it is an idealization to say that a pocket calculator computes the addition function; for while this function takes any pair of numbers to its sum, the calculator only takes a finite set of pairs to their sums. I am not sure that this is the right way to look at it.

    I would say, instead, that the calculator has what we might call an “interpretation” function, INT – I put the term in quote marks because this kind of interpretation is not subjective. INT is defined over physical states of the calculator, taking these to numerical expressions in the language of elementary arithmetic. The calculator itself is a device that physically transitions from a state that INT takes to the expression “x + y” to a state that INT takes to “z”, where z is a non-composite numerical expression and z = x + y. (I hope I haven’t missed something essential in this way of laying out the formalism.)
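    A compact way to write this out (the state symbols s and s′ below are introduced purely for exposition; they are not part of the comment's own wording):

    ```latex
    % INT maps physical states of the calculator to numerical expressions
    % of elementary arithmetic; s and s' are state symbols added here
    % purely for exposition.
    \mathrm{INT} : \mathit{States} \longrightarrow \mathit{Expressions}
    \qquad
    s \xrightarrow{\;\text{physical dynamics}\;} s'
    \quad \text{where} \quad
    \mathrm{INT}(s) = \text{``} x + y \text{''}, \;\;
    \mathrm{INT}(s') = \text{``} z \text{''}, \;\;
    z = x + y,
    \; \text{with } z \text{ a non-composite (single-numeral) expression.}
    ```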

    Now, whatever ‘idealization’ means, the term should not prevent us from thinking that calculators literally compute. For as Mazviita writes in her comment above, they are built to incorporate addition under INT. And the fact that they only take a finite set of pairs as arguments doesn’t change this – a good calculator gives the correct value of the PLUS function for any arguments it is offered.

    Should we think of brains differently? For under suitable INT functions, they too compute various functions (including PLUS). Mazviita suggests that we should regard them as fundamentally different because we can’t appeal to a constructor to provide us with the “intended” INT. But it’s a big leap from saying that THIS way of defining INT is unavailable to saying that there is no interpretation function, and a larger leap to concluding that it’s only an analogy, or even a “simplification,” to say that it computes.

    Mark suggests a teleosemantics for INT: that it is defined by biological function. I am sympathetic and have urged this for decades. But teleosemantics is a decoding strategy. It’s a way of going about “reading” the algorithm being computed. It isn’t a reductive substitute for INT itself. So, while I agree with Mazviita that we shouldn’t think that “evolution has operated like a computer engineer,” I am still inclined to think that evolution has produced a computer. Of course, this begs the question against Mazviita’s argument in chapter 4, but until we see that argument, I am not even inclined to treat the brain as a “simplification” of a computer.

  4. Mazviita

    Hi Mark and Mohan. A couple of thoughts relevant to this discussion occurred to me when I was writing earlier on Carrie’s thread. In the book I specify that the argument is about the comparison between the brain and digital computers. Digital computers are the only ones that most people are familiar with (hence they guide our intuitions), and they are what all current neuro-computational models are run on. Digital computers are designed such that we can treat hardware and software independently. This amounts to an independence of function from material structure. In general, whatever you say about evolution as a designer, it does not care for structure/function separation: structure and function tend, instead, to be highly interdependent in biology. This is the crucial disanalogy between brains and digital machines. But it is also what makes the analogy with digital computers appealing as a simplification, because it gives you a lens through which to view the brain in which function is analysed independently of structural details.

    This abstraction away from structural details is no different from how simulations and models are used in other branches of natural science: a weather simulation is taken to be only a simulation because it does not reproduce the structural (material) properties of the target system, but only a set of functional dependencies that happen to afford accurate and efficient prediction of its dynamics. But the literal interpretation of neuro-computational models tells us that with the brain, unlike any other natural, material object, there is such a high degree of structure–function separation that a model representing only a few functional relationships can capture what the brain essentially is – it can reproduce, not merely simulate, brain operations. This strikes me as implausible. However, my position does not rule out there being a much closer analogy between brains and analogue computers, because for these machines structure and function are more closely connected, though not completely interdependent (as Haugeland nicely explains in “Analog and Analog”). Anyway, to sum up: the case against the literal interpretation is based on specific disanalogies between brains and digital computers which may not arise for more exotic kinds of computers.

