Author’s Reply to Frances Egan: The Proof is in the LoGs

The Idealized Mind (2025) argues that discussions of neural representation and neural computation are based on idealized models. This has serious implications for defending realism about neural representation and neural computation.

Egan is right to think that my critique of computational models applies more widely than to her own account (2018, 2025). Indeed, chapter 7 is devoted to a critique of Piccinini’s (2020) mechanistic account of computation (see also my response to Maley’s commentary). In The Idealized Mind, I argue that Egan confuses the map with the territory, i.e., the computational model with the target system. In her commentary, Egan argues that I’m wrong – “There is no such confusion.” She states: “The model is true of the target just in case the target instantiates the properties of the model.”

In her commentary, as in her own work, Egan refers to Marr’s theory of vision as an illustration of mathematical content in computational models. The key example is edge detection in early vision. The computational model is a LoG: a Laplacian of a Gaussian. Here is a case where we can be certain that the computational model isn’t true, because the target doesn’t instantiate the properties of a LoG. LoGs are extremely useful in vision neuroscience because they highlight edges without getting distracted by irrelevant speckles of noise, much as our eyes pick out shapes in a foggy scene. We can see the idealizations at work by examining the formal expression of a LoG:

∇²G * I(x, y)
where I(x, y) is light intensity, G is a Gaussian smoothing kernel, and * is the convolution operation. The first idealization is I(x, y), which represents input to the visual system as a continuous, noise-free luminance function. The next idealization is the convolution operation represented by *, which implies that a LoG is an analytical operator defined on continuous functions over an infinite domain. This is an obvious idealization, since neurons cannot perform infinite convolution integrals. The last idealization is the Laplacian operator, ∇², which assumes infinite precision by idealizing away stochasticity.
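To make these idealizations concrete, here is a minimal sketch (my illustration only; the kernel size, sigma, noise level, and threshold are arbitrary choices rather than parameters from any particular model) of how a LoG is applied in practice. Each idealized ingredient gets replaced by a finite surrogate: a noisy, sampled array instead of a continuous I(x, y), a truncated kernel and bounded convolution instead of an analytic operator over an infinite domain, and floating-point arithmetic instead of infinite precision.

```python
import numpy as np
from scipy.signal import convolve2d

def log_kernel(sigma, size):
    """Sample the continuous Laplacian-of-Gaussian on a finite grid,
    truncating its infinite support to a size x size window."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    # Closed-form LoG, up to a normalization constant.
    kernel = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return kernel - kernel.mean()  # force zero response to uniform input

# A noisy, discrete, bounded "image" stands in for the idealized continuous I(x, y).
rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[:, 32:] = 1.0                               # one vertical luminance edge
image += 0.05 * rng.standard_normal(image.shape)  # sensor noise the ideal operator assumes away

response = convolve2d(image, log_kernel(sigma=2.0, size=13), mode="same")
edges = np.abs(response) > 0.1  # crude stand-in for Marr-style zero-crossing detection
```

Nothing in this discrete pipeline computes the analytic operator above; it computes a finite approximation to it, which is exactly the gap at issue in what follows.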

I agree with Egan that simplification is necessary for scientific theorizing. I also think that Egan is right to say that the computational model may well be approximately true, which is why I argue that her computational account is consistent with scientific realism. However, one cannot hang onto any “realist construal of representational vehicles” (Egan 2020, p. 26) given the idealized nature of the computational model. Hence, realism must be rejected not only for cognitive content (i.e., mental representations) but also for mathematical content (i.e., computational vehicles).

References 

Egan, Frances. 2025. Deflating Mental Representation. Cambridge, MA: The MIT Press.  

Egan, Frances. 2020. “A deflationary account of mental representation.” In What Are Mental Representations?, edited by Joulia Smortchkova, Krzysztof Dolega, and Tobias Schlicht, 26-53. New York: Oxford University Press.  

Egan, Frances. 2018. “The nature and function of content in computational models.” In The Routledge Handbook of the Computational Mind, edited by Mark Sprevak and Matteo Colombo, 247-258. London: Routledge.  

Kirchhoff, Michael. 2025. The Idealized Mind: From Model-based Science to Cognitive Science. Cambridge, MA: The MIT Press.  

Piccinini, Gualtiero. 2020. Neurocognitive Mechanisms: Explaining Biological Cognition. New York: Oxford University Press.  

8 Comments

  1. Frances Egan

    Thanks Michael. With respect to a “realist construal of representational vehicles”: this issue is independent of content – cognitive and mathematical – and the status of what I call the function-theoretic characterization (e.g. LoG). The vehicles are the states or structures (for example, populations of neurons) involved in the causal processes characterized by a fully articulated theory and to which content is assigned. They must be real or there is no explanation of the capacity.

    • Michael Kirchhoff

      Hi Frances

      That’s informative, thanks! Sorry for the slightly long response below.

      If computational theories of cognitive capacities provide what you call a ‘function-theoretic’ characterization of some cognitive capacity (e.g., edge detection via LoGs), and if LoGs are based on a series of idealizations (which they are), then the ‘function-theoretic’ characterization is based on a series of idealizations. This is the case for LoGs. It is also the case for many other, if not all, computational functions. So what should one conclude? The mathematical content will be idealized (this follows given the highly idealized features of mathematics). The cognitive content is quarantined to an intentional gloss (akin to mental and neural fictionalism). And the ‘function-theoretic’ characterization is based on idealization – exemplified by a LoG.

      If all we’re saying is that there are populations of neurons that exist and do stuff, yet all the cognitive content, mathematical content and the function-theoretic characterization will involve idealization, then I can’t see how one can also claim that ‘the [computational] model is true of the target just in case the target instantiates the properties of the model’. I say this because we may use a LoG to model edge detection; yet a LoG involves several idealizations in order to function; hence, we can’t also say that whatever does the work of detecting edges in the nervous system does so by computing a LoG, unless we’re also willing to say that we don’t actually mean this in any literal sense.

      By way of analogy: one can use the ideal gas law to model the behavior of natural gases. The derivation base of the ideal gas law involves at least two idealizations (e.g., setting the value of intermolecular interaction to zero). But we don’t attribute zero intermolecular interaction to the natural gas. Similarly, one can use a LoG to model edge detection in the nervous system. A LoG is based on idealizations (e.g., convolution functions via kernel methods). If the analogy with the ideal gas law is to work here, then we should not attribute LoGs to the nervous system. Yet this is what the computational model of edge detection ends up doing via the function-theoretic characterization, since it is this characterization that specifies the mathematical content of a computational model. My issue with computational theories of cognition is the unjustified slippage from specifying the computational model to making claims about neurobiological activity. Piccinini (2020) ends up doing the same thing: attributing rule-governed state transitions to the nervous system when, in fact, all the rules are on the side of the scientific model.
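      For concreteness (my gloss, using standard textbook material rather than anything from the book): the ideal gas law PV = nRT can be read as the van der Waals equation (P + a·n²/V²)(V − n·b) = nRT with both correction terms idealized away – a = 0 (zero intermolecular attraction) and b = 0 (zero molecular volume). We happily use the resulting law without attributing a = 0 or b = 0 to any actual gas.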

      Where does this leave us, I wonder? From my perspective, if we’re to hang onto the computational gloss on neural activity, regardless of whether we think of such activity in causal, correlational or statistical terms, we end up having to upgrade neural activity with elements that such activity does not implement (i.e., computational functions). The ‘positive’ result is that we get to keep the language of computational neuroscience. The ‘negative’ (or, in my view, correct) result is that we cannot defend any version of computational realism in neuroscience.

      I’d be keen to know if you’re willing to accept that even the mathematical content is idealized?

      Michael

  2. Frances Egan

    Michael – I wonder if your argument is intended to apply to artificial systems. Are brains especially noisy? Is mathematical characterization of artificial systems somehow less subject to problematic idealization? Do any physical systems literally compute?

  3. Michael Kirchhoff

    Hi Frances

    Some good questions! I’d be keen to know your thoughts on the comment above, if you get time. On to your questions 🙂

    My target isn’t AI systems. It could have been, since deep artificial neural networks, for example, are based on numerous mathematical idealizations. Yet I’m interested, as you may know, in how we use what is an artificial or mathematical system/model to say something about biology. I take most computational models to be in large part mathematical models, just as a LoG is a mathematical model. The next modeling question is: how does the model relate to its target system? To answer this question, one first needs to work out precisely what the idealized assumptions in the model are. If the idealized assumptions are what we’re attributing to the target system (biological or artificial), then we have a theoretical problem.

    Are artificial systems somehow less subject to problematic idealizations? I don’t think so. But it is also beside the point for the discussion of interest here. We’re not asking how edge detection, say, is carried out in a physical computer. We’re asking how edge detection is realized in biological systems. Or so I assume. This is our target phenomenon. Our model is (on your account) a computational model given a function-theoretic characterization in terms of mathematical content. With this in place, my argument (as you know) is that none of this secures computational realism.

    The arguments provided in the book lead to the view that one can use computational modeling in the sciences of the mind and brain, even if one cannot defend entity realism about computation.

    Finally, do any physical systems literally compute? That’s a big question. I’ll have something to say about this in my response to Corey Maley in tomorrow’s post. So I might leave you with a bit of a cliff-hanger!

  4. Frances Egan

    Michael: I am not asking about AI systems. I am interested in implications of your view for whether physical systems — laptops, iPhones, simple AND gates — literally compute. Since mathematical characterization is always an idealization, does it ever accurately describe the real world? Is it true to say that my hand calculator adds? I see that Corey raises a similar issue in his comments. I will take a close look at your reply to him.

    • Michael Kirchhoff

      Hi Frances

      Roger that. Let me then try to be more precise. Since you refer to addition, I will make use of your example from your excellent 2025 MIT book by way of an answer.

      I take the following to be the situation in discussions over computational implementation: a physical system computes some function only if there exists a mapping from physical states of the system to the abstract states of that function (e.g., numbers). No issues here, I suspect.

      In Deflating Mental Representation, you exemplify this with ‘addition’. Specifically, physical states related causally (p1, p2 → p3) are mapped to numbers n (53) and m (47), and n+m (related as addends and sums). The idea is that the mapping is what specifies the causal organization of “the system underlying its capacity to produce behavior consistently interpretable as adding.” (Egan 2025, p. 52) Say the system adds 53 and 47, which equals 100. If p1 is interpreted under the mapping as 53 and p2 is interpreted under the mapping as 47, then the causal organization of the system “goes into the physical state interpreted under the mapping as 100.” (Egan 2025, p. 53) If this mapping holds up enough times, then this “allows us to say that the system computes addition.” (2025, p. 53)

      One thing worth flagging here is the use of the qualifiers ‘interpreted’ and ‘interpretation’. What are we supposed to make of this? Either the system computes ‘addition’ or it does not. If the system can only be said to compute ‘addition’ under an interpretive lens, then this looks too weak to secure the metaphysical claim that the system actually computes ‘addition’.

      The calculator example is presented as having an interpretation function (INT). INT is defined over the physical states of the calculator. When we think of a calculator as computing 53+47=100, we are using the INT function to map the physical states of the calculator (arrangements of electrical circuits) to values and numbers. One can interpret a physical state as 53 under the mapping, even though the physical state itself is not a number. Again, how should we make sense of INT? To me, this starts to look similar to our discussion above: being a realist about vehicles and then adding an INT on top – e.g., in terms of cognitive content or in terms of the mathematical content used in the function-theoretic characterization of a computational model.
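      As a toy rendering of this picture as I read it (my own gloss, not your formal apparatus – every name and state below is invented for illustration), the physical states carry no numbers at all; the arithmetic lives entirely in the mapping:

```python
# Hypothetical physical state tokens of the device (labels, not numbers).
P1, P2, P3 = "voltage-pattern-A", "voltage-pattern-B", "voltage-pattern-C"

# The interpretation function INT: physical states -> numbers.
INT = {P1: 53, P2: 47, P3: 100}

def causal_transition(state_a, state_b):
    """Stand-in for the device's physics: the state it settles into when
    driven by state_a and state_b (hard-coded here; measured in reality)."""
    return P3 if (state_a, state_b) == (P1, P2) else None

result = causal_transition(P1, P2)
# The claim "the device computes addition" is cashed out as this check
# holding across a sufficiently wide range of inputs:
assert INT[result] == INT[P1] + INT[P2]
```

      On this rendering, whether the device ‘adds’ is a fact about the triple of physics, INT, and the range of cases – which is exactly where my worry about the interpretive layer gets its grip.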

      This makes me wonder: is your own account pushing in the same direction as The Idealized Mind? Corey Maley (in his commentary) seems to think this is a problem (of sorts). However, if the real bits of a calculator are arrangements of electrical circuits and not actual numbers, and if an INT is needed to interpret the calculator as computing 53+47=100, then one would presumably want to say that even in this case we’re idealizing the physical device by way of a computing idealization.

      By extension, one can interpret neural activity involved in edge detection as computing a LoG, even if the physical neural states do not actually compute a LoG. What follows? The basic idea is that the calculator itself is a device that transitions from INT (53+47) to INT (100). If this is correct, then the computational function of the calculator involves an interpretive layer, being given an interpretation in terms of mathematical content via the function-theoretic characterization.

      Very happy to keep this discussion going. Keen to know what both you and Corey make of this!

  5. Frances Egan

    Michael – A physical system (e.g. a calculator, laptop, brain) has the capacity to compute addition just in case it has a causal organization that can be mapped to the addition function (over a sufficient range of inputs, etc.), as you spell out my view above. Whether it actually computes addition depends on whether it actually gets inputs interpretable as addends.

    You object that on my view the system only computes under an interpretation:

    “One thing worth flagging here is the use of the qualifiers ‘interpreted’ and ‘interpretation’. What are we supposed to make of this? Either the system computes ‘addition’ or it does not. If the system can only be said to compute ‘addition’ under an interpretive lens, then this looks too weak to secure the metaphysical claim that the system actually computes ‘addition’.”

    But what else could it be for a physical system to compute a function (including LoG), beyond it having a causal organization that is interpretable as computing it? We can’t somehow force the mathematics into the world.

    There is, of course, lots of idealization (noise etc.) but that doesn’t preclude saying that the physical system literally computes addition. What is missing that would provide the additional metaphysical oomph?

    We could insist that computers (laptops, etc.) don’t really compute. But that seems perverse, and a desperate way to hang on to the idea that brains don’t really compute. Of course, we can have better evidence for the relevant mapping for a computer than a brain, but that can’t support the metaphysical distinction that you want. And if laptops and calculators don’t compute then it seems no physical systems compute. Again, if there are such physical computers, what is the extra ‘magical’ ingredient, besides the relevant causal organization?

    • Michael Kirchhoff

      Hi Frances

      I’ll give this a last go. One can interpret a system in many ways. Some correct. Others not so much. I don’t have a huge stake in whether or not a computer literally computes. My concern is really with the application of mathematical models to biology, which (as you know) I think idealizes biology in a way that is problematic (when in the hands of some neuroscientists and some philosophers of cognitive science).

      Be that as it may, it strikes me that simply because some system has a causal organization that is interpretable as computing some function, it doesn’t follow that it computes that function. Surely, we would need it to be the case that the system is not only interpretable as computing some function but that it actually computes that specific function.

      Finally, there is nothing in The Idealized Mind that suggests that the claim that brains do not compute depends on laptops or other computing devices not computing. One thing we might possibly agree on is that computing the exact mathematical LoG isn’t something a physical computer can actually do (see my response to Corey Maley). Thinking that we somehow have to accept that computers don’t compute in order to arrive at the view that brains don’t compute simply isn’t part of the argumentation in The Idealized Mind. The main agenda of the book is to properly understand the use and function of idealization in the mind and brain sciences.

      Michael
