Some Physical Systems (Literally) Compute
Frances Egan 

Rutgers University 

In his ambitious new book, Michael Kirchhoff argues that computational models cannot be literally true of the real-world systems they purport to describe. In chapter 6 he aims his critique at my account of computational models (Egan 2018, 2020), but, as I understand it, the critique is intended to apply widely to any account of physical computation and has implications for scientific theorizing in general. In these brief remarks I argue that my account does not involve an elementary confusion, and I draw some general conclusions about computational modeling.

Kirchhoff claims:  

  • … the mathematical theory of computation, as formulated by Egan, confuses the properties of the computational model with the properties of the target system the model is used to understand. A different way of putting this is: Egan succeeds in showing only that mathematical content ascription is literally true of the computational model, not the target system. (129-130)  

Let me briefly explain my notion of mathematical content and the work it does in my account. A computational model aims to provide an abstract specification of the causal organization underlying a cognitive capacity. For example, Marr’s (1982) theory of early vision purports to explain edge detection by positing the computation of the Laplacian of a Gaussian distribution of the retinal array. In effect, the mechanism computes a smoothing function to eliminate noise and facilitate the identification of edges in the scene. It takes as input intensity values at points in the image and calculates the rate of intensity change over the image. In computational accounts the relevant causal organization responsible for the cognitive capacity is characterized, in a compact and perspicuous way, by mapping physical states of the system – perhaps, populations of neurons firing – to the arguments and values of the function computed. In general, characterizing the system as computing a particular function supports a representational characterization: the input and output states of the system can be said to represent the arguments and values, respectively, of the computed function. I call these mathematical contents, but this representational characterization is a trivial consequence of characterizing the system as computing a particular function. The point of the function-theoretic characterization (hereafter FT), as I call it, is to characterize a real property of the system, in this case the causal organization responsible for the organism’s capacity to detect edges. To be sure, FT characterization involves considerable idealization and abstraction. Physical systems are subject to all sorts of noise, and any given physical system will have much more causal organization than that responsible for the cognitive capacity to be explained. Nonetheless, the computational model aims to give a literal description of the target system.  
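The Laplacian-of-a-Gaussian idea can be sketched in a few lines of pure Python (a toy illustration only, not Marr's implementation): the filter smooths and differentiates in one pass, and edges are marked at the zero-crossings of the filtered image.

```python
import math

def log_kernel(sigma, radius):
    """Discrete Laplacian-of-Gaussian kernel (Marr and Hildreth's del-squared G)."""
    size = 2 * radius + 1
    k = []
    for y in range(-radius, radius + 1):
        row = []
        for x in range(-radius, radius + 1):
            r2 = x * x + y * y
            s2 = sigma * sigma
            row.append((r2 - 2 * s2) / (s2 * s2) * math.exp(-r2 / (2 * s2)))
        k.append(row)
    # Shift entries so the kernel sums to zero: a uniform region then yields
    # a zero response, as the Laplacian of a constant should.
    total = sum(sum(row) for row in k)
    return [[v - total / size ** 2 for v in row] for row in k]

def convolve(image, kernel):
    """Plain 2D convolution with clamp-to-edge borders."""
    r = len(kernel) // 2
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += image[yy][xx] * kernel[dy + r][dx + r]
            out[y][x] = acc
    return out

# Toy "retinal array": dark left half, bright right half, one vertical edge.
image = [[0.0] * 8 + [1.0] * 8 for _ in range(16)]
response = convolve(image, log_kernel(sigma=1.0, radius=3))

# Edges lie at zero-crossings of the response; the small threshold ignores
# floating-point noise in uniform regions.
edges = [[1 if x + 1 < 16 and response[y][x] * response[y][x + 1] < -1e-9 else 0
          for x in range(16)] for y in range(16)]
```

Each row of `edges` flags exactly one zero-crossing, at the boundary between the dark and bright halves; the Gaussian smoothing is what keeps noise from producing spurious crossings.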

The problem with my account, according to Kirchhoff, is that it confuses properties of the model with properties of the target domain; as he puts it, it “confuses the map with the territory.” (136)  

There is no such confusion. In general, explanatory models attribute properties to target systems. The model is true of the target just in case the target instantiates the properties that the model attributes to it. But there is always a question of which aspects of a model should be interpreted as attributing a property to the target, and what property the model thereby attributes to the target. Colors on a map may represent topographical features of the terrain or they may represent political boundaries. Alternatively, color may have a nonrepresentational function, perhaps just making the map easier to read. The general point is that, as with maps, one needs to be careful in attempting to read the empirical commitments of a model from its formulation in the theory that deploys it.  

Kirchhoff says that the challenge for the theorist is to meet the translation task, “the task of showing that facts about scientific models can be turned into true statements about the target system.” (120)  

He goes on to say,  

  • Unless one can specify which aspects of neural activity correspond to which aspects of a specific target system, then the fictionalist will always have the option of arguing: we do not know if [the model] is true or false. (133)  

But a specification of the relevant aspects of neural activity underlying the capacity to be explained is precisely what the mapping from physical states to the arguments and values of the function is intended to deliver. As I have said, the mapping specifies the causal organization responsible for the capacity (e.g. detecting edges in the scene, grasping objects in view, and so on). Whenever the system is in the physical state(s) that, under the mapping, realizes the argument(s) of the function, it is caused to go into the physical state that realizes the value. A computational characterization is a mapping between two structures; a fully specified account must specify both structures – the abstract structure of the computation, given by the FT characterization, and the causal structure in the target system that realizes it. I cannot see what more the so-called translation task could require.
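The two-structures point can be made concrete with a purely illustrative toy (not an example drawn from either text): an abstract function, an interpretation mapping from physical states to its arguments and values, and a table of causal transitions. The model is true of the device just in case every transition mirrors the function under the mapping. Addition mod 2 stands in here for the Laplacian-of-a-Gaussian case, which works the same way.

```python
# Abstract structure: the function computed (addition mod 2, i.e. XOR).
def f(a, b):
    return (a + b) % 2

# Interpretation mapping: which physical states realize which arguments/values.
realizes = {"low": 0, "high": 1}

# Causal structure: the device's input-to-output state transitions,
# as they might be discovered empirically.
transitions = {
    ("low", "low"): "low",
    ("low", "high"): "high",
    ("high", "low"): "high",
    ("high", "high"): "low",
}

def model_is_true_of(transitions, realizes, f):
    """True just in case, whenever the device is in states realizing the
    arguments, it is caused to go into the state realizing the value."""
    return all(realizes[out] == f(realizes[a], realizes[b])
               for (a, b), out in transitions.items())
```

Here `model_is_true_of(transitions, realizes, f)` returns `True`; a mis-wired device, say one whose `("high", "high")` transition went to `"high"`, would falsify the model. That is just to say that the model carries empirical commitments about the target.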

But, as I’ve noted, the computational characterization is an idealization, and so, Kirchhoff claims, it must be interpreted nonliterally. The argument for idealization is based on two considerations:  

(1) FT characterizations abstract from contextual details:  

  • For Egan (2018), a function-theoretic characterization is an abstract, domain-general, and content-free characterization of the function-theoretic explanation of target systems in the world. As Egan states: “It prescinds not only from the cognitive capacity that is the explanatory target of the theory (vision, motor control, etc.) but also from the environment in which the capacity is normally exercised” (2018, 252). In this way, even the function-theoretic characterization is an idealization. (134)  

If the FT characterization is an idealization in virtue of prescinding from features of the target system’s (that is, the computational mechanism’s) normal context, then to theorize is itself to idealize. How can one theorize without prescinding from (some) aspects of context? This is necessary to allow generalization to other contexts, for example, to other locations and times.  

(2) Visual processes (neural processes in general) are noisy:  

  • … the function-theoretic characterization of edge detection in early vision in terms of computing the Laplacian of a Gaussian is an idealization. This is easy to see. In an ideal world, visual input is not noisy. Visual input would greet the visual system as smooth, regular, clearly marked, and with sufficient density of detail “so that gradients in an image can be measured quite precisely” (Marr 1982, 233). Applying the Laplacian to a Gaussian distribution allows one to create this kind of ideal world mathematically. If the function-theoretic characterization of edge detection in early vision in terms of computing the Laplacian of a Gaussian is treated literally, the account is false. By implication, therefore, the function-theoretic characterization of edge detection in early vision in terms of computing the Laplacian of a Gaussian must be understood nonliterally. (135-136)  

Theorizing does, typically, involve idealization, as Kirchhoff claims, but it doesn’t follow that theories must be understood nonliterally. That would be an unfortunate consequence for all scientific theorizing. But let’s focus just on computational models. A more plausible interpretation is that an FT characterization is only approximately true, subject to revision in light of further evidence concerning behavioral performance and available neural circuitry. There will still be idealization, but no reason to think that this requires a nonliteral interpretation of the theory.  

Let me conclude by noting a further implication of Kirchhoff’s account. An FT characterization subsumes both natural and artifactual systems; it is independent of the realizing medium, though, of course, the mapping from physical states to the function computed will be very different in the two sorts of cases. If the fact that computational characterizations of neural processes are idealizations requires us to deny that brains literally compute, then, given that all physical systems are subject to noise, laptops and iPads do not literally compute either. There are no physical computers. Kirchhoff may be happy with this implication; I am inclined to think that his account proves too much.

References  

Egan, F. (2018), “The Nature and Function of Content in Computational Models,” in M. Sprevak and M. Colombo (eds.), The Routledge Handbook of the Computational Mind, Routledge, 247–258.  

Egan, F. (2020), “A Deflationary Account of Mental Representation,” in J. Smortchkova, K. Dołęga, and T. Schlicht (eds.), What Are Mental Representations?, Oxford University Press, 26–53.

Kirchhoff, M. (2025), The Idealized Mind, MIT Press.  

Marr, D. (1982), Vision, Freeman. 
