The Idealized Mind (2025) argues that discussions of neural representation and neural computation are based on idealized models. This has serious implications for defending realism about neural representation and neural computation.
Egan is right to think that my critique of computational models applies more widely than to her own account (2018, 2025). Indeed, chapter 7 is devoted to a critique of Piccinini's (2020) mechanistic account of computation (see also my response to Maley's commentary). In The Idealized Mind, I argue that Egan confuses the map with the territory, i.e., the computational model with the target system. In her commentary, Egan replies that I'm wrong: "There is no such confusion." She states: "The model is true of the target just in case the target instantiates the properties of the model."

In her commentary, as in her own work, Egan appeals to Marr's theory of vision as an illustration of mathematical content in computational models. The key example is edge detection in early vision. The computational model is a LoG: a Laplacian of a Gaussian. Here is a case where we can be certain that the computational model isn't true, because the target doesn't instantiate the properties of a LoG. LoGs are extremely useful in vision neuroscience because they highlight edges without being distracted by irrelevant speckles, akin to how our eyes pick out shapes in a foggy scene. The idealizations built into a LoG become visible in its formal expression:
∇²(G ∗ I)(x, y)
where I(x, y) is light intensity, G is a Gaussian smoothing kernel, and ∗ is the convolution operator. The first idealization is I(x, y), which represents input to the visual system as a continuous, noise-free luminance function. The second idealization is the convolution operator ∗, which makes the LoG an analytic operator defined on continuous functions over an infinite domain. This is an obvious idealization, since neurons cannot perform infinite convolution integrals. The third idealization is the Laplacian, ∇², which presupposes infinite precision by idealizing away stochasticity.
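The contrast between the analytic operator and anything a physical system could do can be made concrete by looking at what any actual implementation of a LoG involves: the continuous kernel is truncated to a finite grid, sampled at integer coordinates, and the convolution integral is replaced by finite sums. The following is a minimal illustrative sketch (the function names, kernel size, and sigma are my own choices, not anything from Marr), not a claim about how the visual system works:

```python
import numpy as np

def log_kernel(size, sigma):
    """Discrete Laplacian-of-Gaussian kernel.

    The analytic LoG is defined on a continuous, infinite domain;
    here it is truncated to a finite window and sampled at integer
    coordinates -- precisely the de-idealization at issue.
    """
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = x ** 2 + y ** 2
    # Closed form (up to normalization): (r^2 - 2s^2)/s^4 * exp(-r^2/(2s^2))
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()  # restore the zero-sum property lost by truncation

def convolve2d(image, kernel):
    """Naive 'valid' 2-D convolution: finite sums, not integrals."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    flipped = kernel[::-1, ::-1]  # convolution flips the kernel
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

# A step edge: zero-crossings of the LoG response mark the edge.
image = np.zeros((16, 16))
image[:, 8:] = 1.0
response = convolve2d(image, log_kernel(7, 1.0))
print(np.sign(response[4]))  # sign change flags the edge location
```

The sketch makes the philosophical point vivid: the discretized, truncated, finite-precision object that gets computed is not the analytic operator the model specifies, so the target cannot instantiate the model's properties exactly.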
I agree with Egan that simplification is necessary for scientific theorizing. She is also right that the computational model may well be approximately true, which is why I argue that her computational account is consistent with scientific realism. However, one cannot hang onto any "realist construal of representational vehicles" (Egan 2020, p. 26) given the idealized nature of the computational model. Hence, realism must be rejected not only for cognitive content (i.e., mental representations) but also for mathematical content (i.e., computational vehicles).
References
Egan, Frances. 2025. Deflating Mental Representation. Cambridge, MA: The MIT Press.
Egan, Frances. 2020. "A deflationary account of mental representation." In What Are Mental Representations?, edited by Joulia Smortchkova, Krzysztof Dolega, and Tobias Schlicht, 26–53. New York: Oxford University Press.
Egan, Frances. 2018. "The nature and function of content in computational models." In The Routledge Handbook of the Computational Mind, edited by Mark Sprevak and Matteo Colombo, 247–258. London: Routledge.
Kirchhoff, Michael. 2025. The Idealized Mind: From Model-based Science to Cognitive Science. Cambridge, MA: The MIT Press.
Piccinini, Gualtiero. 2020. Neurocognitive Mechanisms: Explaining Biological Cognition. New York: Oxford University Press.