Corey Maley: Comments on The Idealized Mind

COMMENTS ON THE IDEALIZED MIND, BY MICHAEL D. KIRCHHOFF
Corey J. Maley, Purdue University, cjmaley@purdue.edu

Michael Kirchhoff’s book The Idealized Mind has many original and thought-provoking ideas, touching on a number of subjects relevant to contemporary discussions in the theory and philosophy of cognitive science. Here, I will focus my comments on Chapters 6 and 7, where Kirchhoff critiques views that characterize neural systems as performing computations in some literal sense.

Kirchhoff’s view is that computational explanations in neuroscience are idealizations. Coupled with his view about the nature of idealizations in cognitive science more generally, it turns out that the elements that theorists take to be definitive of computation (e.g., Egan’s notion of mathematical content, or Piccinini’s notion of rule-governed transitions) are features of the idealizations of the systems being investigated, rather than features of the systems themselves.

In these chapters (and others), Kirchhoff examines actual cases from computational neuroscience. This is helpful, and it is what anyone who takes the science seriously should probably do. However, what we should infer from cases in computational neuroscience is a particularly tricky topic, relative to other sciences, because computational neuroscientists themselves are engaged in two separate projects that are not always clearly distinct. On one hand, computational neuroscientists sometimes engage in computational model building. This is common in science: for any science X, there are people engaged in “computational X,” which just means that they study the phenomena of interest using computational models. On the other hand, computational neuroscientists also explicitly take neural systems to be computational systems—systems that literally perform computations. This is not what we find in, say, computational astrophysics: no astrophysicist claims that galaxies or black holes are engaged in computing anything.

A central task for philosophers of computational neuroscience (or philosophers of computation interested in neuroscience) is to characterize what, if anything, it is for a neural system to compute. Figuring that out likely depends on a theory of what, if anything, it is for any physical system to compute. The jury is still out on the best such theory, but all parties to the discussion that I am aware of take it as a given that there are physical systems that do, in fact, perform computations. Perhaps computation is not something that natural systems can ever engage in, but at the very least, we have a lot of artificial, or engineered, systems that engage in computation.

Assuming that we can come up with a theory of when a physical system computes, and assuming that we can apply it to neural systems, we can then assess whether a given model in computational neuroscience is merely a model or a characterization of the actual computations performed by the system. Not every model in computational neuroscience is likely to be the kind of model that is apt for characterizing a computation that a neural system could literally perform, and we should probably not think that they all are. Kirchhoff notes (p. 140) that some models in computational neuroscience (e.g., efficient coding models) are not compatible with certain accounts of computation (e.g., the mechanistic account of computation). True enough. But it does not follow that neural systems do not literally compute. In fact, even if neural systems do literally compute, they surely do other things, too—things that are not computational. Nevertheless, those other things can probably be modeled computationally; this is no different from the fact that in a digital computer, we can model heat transfer—notably not a computational process—using computational models. This could be quite useful when designing an actual computer that must not overheat. Similarly, the fact that some models in computational neuroscience are not amenable to being characterizations of literal computations should not be taken as evidence that neural systems do not compute at all.

A larger concern I have with Kirchhoff’s take on computation is that it seems to imply that there are no physical systems that could ever really compute anything. Worse, it seems to imply that much of the way we talk about physical systems more generally is false. If this is correct, it is unclear how to characterize or describe what physical systems themselves are doing at all: everything interesting is on the side of idealizations, but not on the side of the actual systems. Let me start with the first claim.

Most views of physical computation take implementation to be the primary issue to be addressed. A computation is defined by an abstract, formal characterization, such as a Turing Machine or some other finite automaton. A physical system computes when it implements a given automaton (and what it computes is specified by that automaton). The difficult part is precisely theorizing what “implements” means. In the practice of computer engineering, this is not much of a concern: engineers stipulate how the states of physical systems correspond to states of automata, and the physical system is designed such that transitions among physical states correspond to transitions among the states of automata. There is a very real sense in which computational characterizations such as automata are idealized with respect to the physical systems that implement them, which Kirchhoff notes. This is, in fact, part of the utility of computation: in the case of digital computers, these specifications can be made without any reference to physical properties at all. However, when discussing Chalmers’ theory of implementation and Piccinini’s appeal to rules (the details don’t matter here), Kirchhoff states that the rules “sit on the side of the computational model, not the target system. It is an idealization to describe the nervous system as computing a mapping from inputs, possible internal states, to outputs in accordance with a rule… The computational framework is, therefore, a useful idealization in the sense that it enables us to build sophisticated models of the nervous system in the form of artificial neural networks and cellular automata. However, it is false of nervous systems if taken literally” (p. 145).
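To make the stipulative flavor of implementation concrete, here is a toy sketch of my own (not from the book, and with every value hypothetical): an abstract one-state-variable automaton, plus a purely conventional mapping from imagined voltage readings onto its abstract states. The system counts as implementing the automaton just in case transitions among the mapped physical states mirror transitions among the abstract states.

```python
# Abstract automaton: a one-step machine that flips its state each tick
# ('0' -> '1', '1' -> '0'). This lives entirely on the formal side.
AUTOMATON_STEP = {'0': '1', '1': '0'}

def abstract_state(voltage):
    """Stipulated correspondence from physical voltage to automaton state.
    The 2.5 V threshold is an arbitrary engineering convention."""
    return '1' if voltage >= 2.5 else '0'

# A hypothetical trace of voltage readings over successive ticks.
physical_trace = [0.1, 4.9, 0.3, 5.0, 0.2]

def implements(trace):
    """True if transitions among the mapped physical states correspond
    to transitions among the automaton's abstract states."""
    states = [abstract_state(v) for v in trace]
    return all(AUTOMATON_STEP[s] == t for s, t in zip(states, states[1:]))

print(implements(physical_trace))  # True: the mapped trace obeys the rule
```

The philosophical work, of course, is all hidden in `abstract_state`: the mapping is stipulated, which is exactly why implementation is easy for engineers and hard for theorists.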

The problem, if I understand the view, is that what Kirchhoff says about using a computational formalism to characterize a nervous system would also apply to characterizing any physical system. If it is wrong to say that nervous systems compute when they can be characterized (in some idealized way) by an abstract automaton, it is also wrong to say that other physical systems compute when they can be so characterized. It would seem that what goes for nervous systems goes for electronic circuits.

The conclusion that no physical systems ever really compute seems like a big bullet to bite, and maybe this is a worry that Kirchhoff can defuse. But there is a more general worry in this neighborhood, of which computation seems to be a special case. It may be that Kirchhoff can avoid this result, but I am not clear on exactly how.

The Hodgkin-Huxley (HH) equations are often taken to be a mathematical model of the action potential. They are uncontroversially idealizations: at the very least, voltage changes in actual neurons are the result of the flow of discrete ions, whereas the HH equations take voltage change to be continuous. Now, when the HH equations show that the voltage in the axon of a neuron will be, say, 45 mV at some time t, are we licensed, on Kirchhoff’s view, to conclude that the axon, itself, will be at 45 mV at t? I am not sure. Perhaps we are only licensed to conclude that the axon will be at approximately 45 mV. Or perhaps we should not take the claim that axons have voltages literally at all, because this is a feature of the idealization. However, that would, at least, be at odds with the neuroscientific practice of measuring these voltages across axonal membranes.

This same point—or question, really—applies to even more basic cases. It is common to construct models of electrical circuits with circuit diagrams, and analyze how voltages and currents will change, given the organization and components of the circuit. For example, in a parallel circuit, an elementary result is that the total current is the sum of the currents of each component; literally, for n components: I_total = I_1 + I_2 + … + I_n. This equation is an idealization, and summing is a mathematical operation on abstract quantities. Does it follow on Kirchhoff’s view that the total current, in the actual system, is not literally the sum of the component currents? It would seem so, but if so, then how do we characterize what is really going on in the physical system?
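A worked instance of the parallel-circuit rule, with hypothetical component values of my own choosing: each branch sees the same voltage V, each branch current is I_k = V / R_k, and the idealized claim is that the total current is their sum.

```python
V = 12.0                             # supply voltage in volts (hypothetical)
resistances = [100.0, 200.0, 300.0]  # branch resistances in ohms (hypothetical)

# Each parallel branch sees the full supply voltage: I_k = V / R_k.
branch_currents = [V / R for R in resistances]

# The idealized rule: I_total = I_1 + I_2 + ... + I_n.
I_total = sum(branch_currents)       # 0.12 + 0.06 + 0.04 = 0.22 A

# Cross-check via the equivalent parallel resistance, 1/R_eq = sum(1/R_k).
R_eq = 1.0 / sum(1.0 / R for R in resistances)
assert abs(I_total - V / R_eq) < 1e-9

print(round(I_total, 3))  # 0.22
```

The summing here happens entirely on the side of the numbers; the question in the text is whether anything corresponding to it literally holds of the circuit.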

This point then generalizes. Nearly any measurement is an idealization, and any manipulation of a measurement is an idealized manipulation, happening on the side of the numbers, equations, or other abstract apparatuses we use to characterize physical systems. If we measure the height of a plant in centimeters at some point in time, and know its growth rate, we can calculate the plant’s height at some later point using some addition and multiplication. Did the plant literally add n centimeters to its height, given that the measurements and calculations are all idealizations?
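The plant case reduces to arithmetic on measurements (all values below are hypothetical): a measured height, a constant growth rate, and a later height obtained by multiplying and adding numbers, not by doing anything to the plant.

```python
height_cm = 30.0              # measured height at time t0 (hypothetical)
growth_rate_cm_per_day = 1.5  # known growth rate (hypothetical)
days_elapsed = 10

# The calculation is performed on the idealized quantities:
# later height = initial height + rate * elapsed time.
later_height_cm = height_cm + growth_rate_cm_per_day * days_elapsed

print(later_height_cm)  # 45.0
```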

As I said, it is possible that Kirchhoff has answers to these questions that I simply missed. His book is rich, the nonliteral account of idealization that he proposes deserves careful consideration, and I may well have failed to engage his overall view with the attention it deserves. I look forward to hearing more about his response to the questions I pose here.
