Intro to Michael Kirchhoff’s The Idealized Mind

“Milk production at a dairy farm was low, so the farmer wrote to the local university, asking for help. A multidisciplinary team of professors was assembled, headed by a theoretical physicist, and two weeks of intensive on-site investigation took place. The scholars then returned to the university, where the task of writing the report was left to the team leader. Shortly thereafter, the physicist returned to the farm, saying to the farmer, ‘I have the solution, but it works only in the case of spherical cows in a vacuum.’” (https://en.wikipedia.org/wiki/Spherical_cow)

Whilst an obvious joke, the scenario highlights how physicists often make use of wildly unrealistic—idealized—assumptions with the aim of simplifying a problem to make it easier to solve. Physics is full of idealizations such as frictionless planes, massless pulleys, uniform densities, point masses, zero gravitational effects; the list goes on. Indeed, all model-based sciences—physics, chemistry, biology, neuroscience, economics, social science, and so on—make widespread use of idealization techniques to build models of parts of reality. In model-based science one starts by introducing an imaginary system (e.g., a neural network using back-propagation to reduce error and optimize learning), which is then investigated in order to understand learning in the brain. Or one might use an idealized gas, whose molecules are treated as dimensionless point particles exerting no forces on one another, to approximate the behavior of natural gases.
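To make the modeling strategy concrete, here is a minimal sketch of such an imaginary system: a tiny two-layer network trained by back-propagation on toy data. The architecture, data, and learning rate are illustrative choices of mine, not examples drawn from the book.

```python
import numpy as np

# A minimal "imaginary system": a two-layer network trained by
# back-propagation on XOR. Nothing here describes a brain; it is the
# kind of idealized stand-in one investigates in the brain's place.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(X @ W1)                 # forward pass
    out = sigmoid(h @ W2)
    err = out - y                       # prediction error
    d_out = err * out * (1 - out)       # backward pass: propagate the
    d_h = (d_out @ W2.T) * h * (1 - h)  # error back to earlier weights
    W2 -= 0.5 * (h.T @ d_out)
    W1 -= 0.5 * (X.T @ d_h)

print(np.round(out, 2))  # typically approaches [[0], [1], [1], [0]]
```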

The first core message of The Idealized Mind (Kirchhoff 2025) is that models of the mind and brain are idealized in precisely this way: they make use of idealized terms that have no counterpart in reality. Its second message is that, despite being both simplified and idealized, models in cognitive science are still consistent with a version of scientific realism—the view that one actual and reasonable aim of science is to provide true (or approximately true) descriptions of reality. In the book, I argue that these two messages have foundational implications for our understanding of the mind and brain based on scientific models.

In cognitive science, ‘mental representation’ and ‘computation’ are fundamental theoretical concepts. Most neuroscientists and philosophers work with the assumption that the mind functions as an information-processing system, akin to a computer. Hence, the standard view in the field is that cognitive processes (e.g., perceiving, remembering, or navigating) are realised by the occurrence, transformation, and storage of information-bearing structures (mental representations) of one kind or another. All models of mental representation and neural representation are idealized models with no counterpart in reality. Ideal gases do not exist. Similarly, the brain does not literally compute; neurons do not actually send messages; nor does the brain in any literal sense process information. Constructs such as representation, computation, communication, and information-processing might (and only might) be useful for modeling the mind and brain. Yet they are nothing but abstract explanatory devices that exist mainly in the imagination of the scientific communities that use them to describe, explain, and predict aspects of the natural world.

3 Comments

  1. Dr. Wolfgang Stegemann

    Michael Kirchhoff’s The Idealized Mind (2025) presents an elegant but ultimately insufficient response to the foundational problems of consciousness research. His central claim is that cognitive scientific models, like those in physics, employ idealizations that have no direct counterpart in reality, yet remain consistent with scientific realism. Mental representation and neural computation, he argues, are idealized constructs comparable to frictionless planes or ideal gases. While Kirchhoff correctly identifies that these concepts lack literal correspondence to brain processes, he fails to recognize that this absence reveals not mere idealization but categorical incoherence.
    The problem is not that cognitive models simplify reality. The problem is that their core concepts oscillate between incommensurable logical categories, rendering them explanatorily vacuous regardless of their idealized status.
    The Disanalogy Between Physical and Cognitive Idealization
    Kirchhoff’s defense rests on an analogy: just as physics employs frictionless planes and massless pulleys, cognitive science may legitimately employ idealized notions of representation and computation. But this analogy fails at a fundamental level.
    When physicists speak of “frictionless planes,” the categorical status of every term is transparent. Friction is a measurable physical magnitude with clear operational definitions. Setting friction to zero is a controlled simplification within a categorially stable framework. The idealization involves quantitative reduction (friction coefficient → 0) of an already well-defined physical property. Crucially, we know exactly what is being idealized: a specific causal-mechanical resistance between surfaces.
    What, by contrast, is being idealized when we speak of neural “representation” or “computation”? Is representation a physical relation between neural states and world states? A semantic relation requiring interpretation? A functional role in cognitive architecture? A phenomenal property of experience? The term systematically slides between these categories without acknowledgment.
    Consider: If we say “neuron N represents stimulus S,” we can interpret this as:
    • A causal-informational claim: N’s firing correlates reliably with S
    • A semantic claim: N’s state has S as its content
    • A functional claim: N plays the representational role in a cognitive system
    • A phenomenal claim: The system experiences S when N fires
    These are not different levels of grain or precision. They are different logical categories. Correlation is not semantics. Functional role is not phenomenal experience. Causal coupling is not intentionality.
    Kirchhoff treats this categorical fluidity as unproblematic idealization. But idealization presupposes categorical stability. You can only idealize what you can first define within a consistent categorical framework. The physicist’s frictionless plane is an idealization because we know what friction is in non-idealized cases. But we do not know what neural “representation” is in the non-idealized case, because the concept itself is categorially unstable.
    Idealization vs. Category Error
    The crucial distinction Kirchhoff misses: idealization is not the same as category error.
    An idealization simplifies or abstracts while preserving categorical coherence. A frictionless plane remains a physical system described in mechanical terms. The idealization removes complexity but maintains the logical form of explanation.
    A category error, by contrast, treats entities or properties as belonging to different logical types than they actually possess. Attributing semantic properties to syntactic systems, treating correlations as causal relations, confusing functional descriptions with phenomenal properties: these are not idealizations but categorical confusions.
    When Kirchhoff admits that “the brain does not literally compute; neurons do not send messages; nor does the brain in any literal sense process information,” he is not describing idealization. He is acknowledging that these terms are being systematically misapplied. The brain is not like a computer except simpler or more complex. It is not a computer at all, not even in idealized form.
    If neurons do not “send messages,” then calling their activity “communication” is not an idealization but a metaphor elevated to theory. If the brain does not “process information” in any literal sense, then information-processing models do not describe the brain in idealized form, they describe something else entirely.
    The Consequences for Explanation
    This matters because without categorical consistency, models become unfalsifiable. They explain everything and nothing.
    Consider Integrated Information Theory (IIT). Is Φ (phi) a physical measure, an information-theoretic quantity, a measure of causal power, or already a consciousness-quantity? Depending on which criticism arises, its defenders shift the categorical assignment. When pressed on physical implementation, Φ becomes information-theoretic. When pressed on information theory, it becomes phenomenal. When pressed on phenomenology, it becomes physical again.
    This is not idealization. This is conceptual equivocation masquerading as theory. And Kirchhoff’s framework cannot diagnose it because he treats all categorical instability as legitimate idealization.
    The same applies to the Free Energy Principle. Is free energy a thermodynamic magnitude, an information-theoretic measure, a variational principle, or a normative goal (“what the system ought to minimize”)? The FEP literature systematically exploits this ambiguity. When challenged on thermodynamics, it retreats to information theory. When challenged on information theory, it retreats to metaphor. When challenged on metaphor, it claims literal mechanistic description.
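    To see how much work the ambiguity does, it helps to write the quantity down. In its standard variational form (a textbook formulation, not tied to any particular FEP paper), the free energy of a recognition density q over hidden states s, given observations o, is:

    $$ F(q) \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(s,o)\big] \;=\; D_{\mathrm{KL}}\big[\,q(s)\,\|\,p(s \mid o)\,\big] - \ln p(o) \;=\; \mathbb{E}_{q(s)}\big[-\ln p(s,o)\big] - H\big[q(s)\big] $$

    One and the same expression can be read as a divergence between probability distributions (information theory) or, purely formally, as “energy minus entropy” (the thermodynamic analogy); the mathematics alone does not settle which category the quantity occupies, which is exactly the instability at issue.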
    The categorical confusion here is so profound that it becomes visible through a simple substitution. Imagine proposing a theory of social attraction based on magnetism: humans form relationships because they are drawn together by forces analogous to magnetic attraction, social bonds exhibit polarity like magnetic poles, and community formation minimizes “social-magnetic energy.” This would be immediately recognized as absurd, not because magnetism and social behavior share no abstract structural similarities, but because the categories are incommensurable. Physical forces are not semantic relations. Magnetic attraction is not intentional bonding.
    Yet this is precisely what the FEP does when it imports thermodynamic concepts (free energy, entropy) into the semantic domain of cognition and consciousness. The brain “minimizes free energy” in exactly the same way that humans “minimize social-magnetic energy”: not at all, except as metaphor. The structural parallels are mathematical artifacts, not genuine explanatory bridges. A model can be mathematically isomorphic to reality without being categorially appropriate to it.
    Kirchhoff would classify this as productive scientific modeling with idealized terms. But it is not. It is the systematic exploitation of categorical instability to immunize theory against empirical constraint.
    What Kirchhoff Should Demand (But Doesn’t)
    A genuine response to the foundational problems of consciousness research requires not acceptance of idealization but insistence on categorical clarity as a precondition for any modeling whatsoever.
    Before asking whether a model is empirically adequate or scientifically realistic, we must ask: What logical category does each core concept occupy? Is “consciousness” a causal process, an emergent property, a phenomenal quality, a functional state? Is “information” a physical measure, a semantic construct, a statistical relation, a thermodynamic quantity?
    Only when these questions receive stable answers can we assess whether a model idealizes or merely equivocates. Only then can we distinguish between legitimate scientific abstraction and categorical confusion.
    Kirchhoff’s framework provides no resources for this task. By treating all non-literal language as acceptable idealization, he grants categorical immunity to precisely the theories that most require categorical scrutiny.
    The Methodological Alternative
    The alternative is not to reject modeling or idealization but to demand categorical consistency before idealization. This means:
    1. Categorical specification: Every theoretical term must be assigned a stable logical category (physical, functional, semantic, phenomenal).
    2. Category preservation: Explanatory moves must preserve categorical boundaries. Correlations cannot become semantic relations. Functional roles cannot become phenomenal properties.
    3. Explicit category crossing: When a theory requires connecting different categories (e.g., neural activity to phenomenal experience), this must be explicitly marked as a bridging problem, not dissolved through terminological slide.
    4. Falsification through category violation: Theories that systematically violate categorical boundaries should be rejected not as empirically inadequate but as conceptually malformed.
    This is not a counsel of perfection. It is the minimum requirement for explanatory coherence. Without it, theories can retrospectively adjust their categorical commitments to accommodate any observation. They become unfalsifiable not because they are too good but because they are too indefinite.
    The Real Problem with Consciousness Research
    Kirchhoff identifies that concepts like representation and computation lack direct empirical referents. But he draws the wrong conclusion. The problem is not that these concepts are idealized. The problem is that they are categorially incoherent.
    The four systematic biases identified in contemporary consciousness research (mixing levels of description, labeling phenomena with foreign models, confusing syntax with semantics, and false reduction) are not problems that idealization can address. They are categorical confusions that idealization conceals.
    When theories mix biological, psychological, and phenomenal levels of description, claiming one “produces” another, this is not idealization but category error. When they import concepts from information theory or thermodynamics without justifying their categorical transfer, this is not modeling but metaphorical projection. When they attribute semantic properties to syntactic systems, this is not abstraction but conceptual confusion.
    Kirchhoff’s insistence that we can maintain scientific realism despite idealization is, in this light, beside the point. The question is not whether idealized models can be realistic. The question is whether models with unstable categorical foundations can explain anything at all.
    Conclusion
    Kirchhoff’s The Idealized Mind offers a sophisticated defense of cognitive scientific modeling in the face of widespread recognition that its core concepts lack literal application. But sophistication is not the same as adequacy. By treating categorical instability as acceptable idealization, Kirchhoff provides theoretical absolution to precisely the conceptual practices that render consciousness research explanatorily impotent.
    The problem is not that neural representation and computation are idealized. The problem is that no one, including Kirchhoff, can specify what logical category these concepts occupy. Until that specification is provided, calling them “idealizations” is not a defense but an evasion.
    Scientific realism about consciousness research cannot be salvaged by embracing idealization. It requires, first and foremost, categorical clarity. Models must specify what kind of thing they are modeling before they can claim to model it accurately, approximately, or idealistically.
    Kirchhoff has shown that cognitive science relies on concepts without direct empirical referents. The proper conclusion is not that this is acceptable scientific practice. It is that contemporary consciousness research has yet to achieve the conceptual clarity necessary for genuine scientific explanation.
    Idealization cannot absolve category error. And category error, not mere idealization, is the foundational problem of the philosophy of mind.

    • Michael Kirchhoff

      Dear Dr. Stegemann,

      Many thanks for taking the time to comment so richly on the book. All of the comments are most welcome, even if I disagree with some of them. I’ll set the issue of consciousness aside here, since it is simply not a topic I engage with in the book. I’ll try my best to address each issue you raise:

      • While Kirchhoff correctly identifies that these concepts lack literal correspondence to brain processes, he fails to recognize that this absence reveals not mere idealization but categorical incoherence.

      I want to say that not all idealizations are useful idealizations. While some are, others are most certainly misleading. For example, in chapter 6 I argue that edge detection in early vision cannot be literally understood as neurons computing the Laplacian of a Gaussian (LoG). (Stay tuned for my responses to Frances Egan and Corey Maley during the week.) A LoG cannot be computed by a population of neurons. Indeed, it cannot even be computed by a physical computer. Nevertheless, LoGs are widely used in vision neuroscience, and for good reasons. Here is a case where one might treat neural populations as if they compute a LoG, even if we cannot actually infer this to be the case. As long as we’re clear about this, the idealization is useful. Now, if one either ignores this or simply doesn’t know that LoGs consist of numerous idealizations, then the attribution of LoGs to the brain would be a misleading idealization. You might call this a category mistake. I would think it best to consider it a misleading idealization, given that LoGs are based on a set of essential idealizations.
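      To make the as-if use concrete, here is a minimal sketch of a LoG deployed as a modeling tool for edge detection. The toy image and parameters are my own illustrative choices; the scipy operator is one standard way of computing a LoG, and nothing in the sketch is attributed to neurons.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Marr-Hildreth-style sketch: a Laplacian of Gaussian applied to a toy
# image. The LoG is an idealized, continuous, noise-free operator; using
# it does not commit us to saying any neuron literally computes it.
image = np.zeros((64, 64))
image[:, 32:] = 1.0                   # a synthetic luminance step edge

response = gaussian_laplace(image, sigma=2.0)

# The response changes sign where the edge sits: the classic
# zero-crossing criterion for edge detection.
profile = response[32, 28:37]
print(np.round(profile, 4))           # roughly antisymmetric around column 32
```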

      • The problem is not that cognitive models simplify reality. The problem is that their core concepts oscillate between incommensurable logical categories, rendering them explanatorily vacuous regardless of their idealized status.

      Agreed that there is no issue with models that idealize and therefore allow for simplicity. I don’t think it’s fair to say that cognitive models and their concepts ‘oscillate between incommensurable logical categories.’ My analysis suggests that when, say, models of neural representation are invoked to explain neural and/or cognitive properties on the basis of work in theoretical neuroscience, such models involve what I should have called a semantification idealization: many claims about the representational capacities of neural populations move from patterns of covariation or discriminability to full-blown claims about semantic content, i.e., intentionality. From the fact that firing rates correlate with some variable, neuroscientists (and philosophers) often conclude that the firing rates represent that variable. This idealizes a purely structural or statistical relation into a semantic relation. Is this also a category mistake? It is, on the one hand. However, it is also a case where we are using idealization in a misleading way. This is partly what I argue in the book.
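      A toy sketch of the statistical half of this slide (simulated data, purely illustrative):

```python
import numpy as np

# Simulate a stimulus variable and a firing rate that covaries with it,
# then compute the correlation. The statistics license a claim of
# covariation only; the further claim that the rate *represents* the
# stimulus (has it as semantic content) is the added, idealizing step.
rng = np.random.default_rng(1)
stimulus = rng.uniform(0, 180, size=500)               # e.g., orientation (deg)
firing_rate = 0.2 * stimulus + rng.normal(0, 5, 500)   # spikes/s, toy tuning

r = np.corrcoef(stimulus, firing_rate)[0, 1]
print(f"correlation: {r:.2f}")   # high, but correlation is not content
```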

      • Kirchhoff’s defense rests on an analogy: just as physics employs frictionless planes and massless pulleys, cognitive science may legitimately employ idealized notions of representation and computation. But this analogy fails at a fundamental level.
      When physicists speak of “frictionless planes,” the categorical status of every term is transparent. Friction is a measurable physical magnitude with clear operational definitions. Setting friction to zero is a controlled simplification within a categorially stable framework. The idealization involves quantitative reduction (friction coefficient → 0) of an already well-defined physical property. Crucially, we know exactly what is being idealized: a specific causal-mechanical resistance between surfaces. What, by contrast, is being idealized when we speak of neural “representation” or “computation”? Is representation a physical relation between neural states and world states? A semantic relation requiring interpretation? A functional role in cognitive architecture? A phenomenal property of experience? The term systematically slides between these categories without acknowledgment.

      I think this much is entirely correct. You’ll notice that I make this kind of argument in the book by explicitly stating that there is a disanalogy between the use of idealization in areas such as physics and the use of idealization in theoretical neuroscience, philosophy of mind and philosophy of cognitive science. In the computational case, what is being idealized is the realization of a computational function, e.g., normalization. Or, in neural coding, what is being idealized might be a particular representational format, e.g., a spike train. In the representational case, what is being idealized is semantics at the neuronal level.

      • Kirchhoff treats this categorical fluidity as unproblematic idealization. But idealization presupposes categorical stability. You can only idealize what you can first define within a consistent categorical framework. The physicist’s frictionless plane is an idealization because we know what friction is in non-idealized cases. But we do not know what neural “representation” is in the non-idealized case, because the concept itself is categorially unstable.

      One of the key arguments in the book is to highlight that philosophers of cognitive science have been working with idealized models (and, I’m inclined to say, either unknowingly or while deliberately ignoring this). You are entirely right that neural ‘representation’ as a concept is unstable (just see recent work by Machery & Favela). Even so, if one looks at models of neural representation, regardless of whether they come from computational neuroscience or philosophy of cognitive science, it is evident that several core assumptions of these models are unrealistic in the sense associated with idealization. From my perspective, we do actually know what neural ‘representation’ is in the non-idealized case: it is not any kind of representation. The issue I’m trying to put my finger on is that in the sciences of mind and brain there is an unjustified slide from models to phenomena, a slide that involves attributing to target phenomena certain elements they do not have, since these elements are idealized.

      • When Kirchhoff admits that “the brain does not literally compute; neurons do not send messages; nor does the brain in any literal sense process information,” he is not describing idealization. He is acknowledging that these terms are being systematically misapplied. The brain is not like a computer except simpler or more complex. It is not a computer at all, not even in idealized form. If neurons do not “send messages,” then calling their activity “communication” is not an idealization but a metaphor elevated to theory. If the brain does not “process information” in any literal sense, then information-processing models do not describe the brain in idealized form, they describe something else entirely.

      My answer will be the same as above: there is a sharp separation between useful and misleading idealization. Computational techniques are in my view useful as modeling tools, as long as we don’t attribute computational functions to nervous systems. If we do the latter, then we’re in the game of using misleading idealizations. Is there a real difference between this and something like the ideal gas law? If we use the idealized assumptions to explain or predict the behavior of natural gases, the idealizations are used in a way that is beneficial. However, were one to attribute these idealizations to natural gases, one would be mistaken and would therefore be using the idealizations in a misleading way. This also speaks to the issue raised by Dr. Stegemann that ‘he [Kirchhoff] treats all categorical instability as legitimate idealization’. I do not, as I have made explicit above.
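      A quick numerical illustration of the gas case (approximate textbook van der Waals constants for CO2; the numbers are mine, for illustration only):

```python
# Predict the pressure of a mole of CO2 at room temperature in a litre,
# first with the ideal gas law, then with the van der Waals correction.
R = 8.314                   # J/(mol K)
a, b = 0.3640, 4.267e-5     # Pa m^6/mol^2 and m^3/mol (CO2, approximate)
n, T, V = 1.0, 300.0, 1e-3  # mol, K, m^3

p_ideal = n * R * T / V
p_vdw = n * R * T / (V - n * b) - a * n**2 / V**2

print(f"ideal gas law:  {p_ideal / 1e5:.1f} bar")
print(f"van der Waals:  {p_vdw / 1e5:.1f} bar")
# The idealized law predicts to within roughly 10% here, yet its
# assumptions (point molecules, no intermolecular forces) are true of
# no actual gas: useful for prediction, misleading if attributed.
```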

      • A genuine response to the foundational problems of consciousness research requires not acceptance of idealization but insistence on categorical clarity as a precondition for any modeling whatsoever. Before asking whether a model is empirically adequate or scientifically realistic, we must ask: What logical category does each core concept occupy? Is “consciousness” a causal process, an emergent property, a phenomenal quality, a functional state? Is “information” a physical measure, a semantic construct, a statistical relation, a thermodynamic quantity? … The alternative is not to reject modeling or idealization but to demand categorical consistency before idealization. This means:
      1. Categorical specification: Every theoretical term must be assigned a stable logical category (physical, functional, semantic, phenomenal).
      2. Category preservation: Explanatory moves must preserve categorical boundaries. Correlations cannot become semantic relations. Functional roles cannot become phenomenal properties.
      3. Explicit category crossing: When a theory requires connecting different categories (e.g., neural activity to phenomenal experience), this must be explicitly marked as a bridging problem, not dissolved through terminological slide.
      4. Falsification through category violation: Theories that systematically violate categorical boundaries should be rejected not as empirically inadequate but as conceptually malformed.

      I’m sympathetic to this issue. Although it is something that is missing in The Idealized Mind, it is work that I undertake in my accepted and forthcoming book The Idealized Brain: Uniting Philosophy of Science and Computational Neuroscience (MIT Press). It is also work that is at the core of a Future Fellowship application with the Australian Research Council, where the aim is to build decision protocols, with evaluation criteria along the lines you suggest above, for working with idealization in computational neuroscience and philosophy of cognitive science.

      • Kirchhoff’s insistence that we can maintain scientific realism despite idealization is, in this light, beside the point. The question is not whether idealized models can be realistic. The question is whether models with unstable categorical foundations can explain anything at all.

      On the latter, I’ll refer you to a recent publication of mine in Philosophical Psychology (2025) entitled ‘Idealization and Mental Fictionalism’, in which I argue that idealized models of neural representation fail to be explanatory. On the former, I disagree. There is a large literature in philosophy of science that suggests that idealization puts scientific realism under pressure. There is also a literature in philosophy of cognitive science that highlights that idealized modeling results in a version of scientific anti-realism. One of my aims in the book is to combat these unjustified inferences.

  2. Dr. Wolfgang Stegemann

    Dear Dr. Kirchhoff,

    Thank you for your detailed response.
    I’d like to offer a concrete illustration of what I see as the categorical problem that goes beyond the useful/misleading distinction. Consider Predictive Processing. Here the difficulty is not simply that the model uses idealized terms, but that it is built on a fundamentally inappropriate categorical analogy.
    The Free Energy Principle imports thermodynamic and information-theoretic concepts (free energy, entropy) and applies them to semantic and intentional processes (expectation, prediction, inference, belief updating). This is analogous to proposing a theory of social bonds based on magnetism: humans form relationships because they are drawn together by forces comparable to magnetic attraction, social groups exhibit polarity like magnetic poles, and community formation minimizes “social-magnetic potential energy.”
    No one would take such a theory seriously, not because magnetism and social behavior share no abstract structural similarities (they do: both involve attraction, both can be modeled with differential equations, both exhibit stable and unstable configurations), but because the categories are incommensurable. Physical forces are not semantic relations. Magnetic attraction is not intentional bonding. The mathematical isomorphism is an artifact, not an explanatory bridge.
    Yet this is precisely what happens when thermodynamic free energy is mapped onto cognitive “prediction error.” The brain does not “minimize free energy” in any thermodynamic sense. It does not “form expectations” or “update beliefs” in any semantic sense that neurons could instantiate. These are not idealizations of something the brain does; they are projections of categories that do not apply.
    Moreover, the Bayesian apparatus itself involves a categorical sleight of hand: the capacity to model probabilities, which requires an observer with a probability space and epistemic access to evidence, is displaced into the organism. But neurons do not have probability distributions “in” them any more than they have semantic content “in” them. The Bayesian framework is a modeling tool that requires an interpreting subject, not a mechanism that brains literally implement.
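    A minimal sketch makes the point (toy numbers of my own; note that every ingredient is stipulated by the modeler):

```python
import numpy as np

# Toy Bayesian update. The hypothesis space, the prior, and the
# likelihood are all supplied by the modeler. The update is a tool an
# observer applies; nothing in it requires, or shows, that a neuron
# "has" a probability distribution.
prior = np.array([0.5, 0.5])        # stipulated prior over two hypotheses
likelihood = np.array([0.2, 0.8])   # stipulated P(observation | hypothesis)

posterior = prior * likelihood
posterior /= posterior.sum()
print(posterior)                    # [0.2 0.8]
```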
    What can one do with such a model? It does not clarify the question of how cognition works, because it systematically confuses the categories through which we model cognition with the categories appropriate to the phenomenon itself. This is not a matter of useful versus misleading idealization. The model is not idealizing something we understand in a simpler form. It is applying categories from one domain (thermodynamics, information theory, Bayesian probability) to another domain (intentional, semantic, experiential processes) where they have no literal purchase.
    You write: “From my perspective, we do [not] actually know what neural ‘representation’ is in the non-idealized case.” But if we do not know what the phenomenon is in the non-idealized case, how can we claim to be idealizing it? Idealization presupposes that we know what we are simplifying. Without that knowledge, we are not idealizing but projecting categories onto phenomena whose nature remains unclear to us.
    This is my core concern: that contemporary cognitive neuroscience operates with models whose categorical foundations are not merely idealized but fundamentally mismatched to their target phenomena, and that calling this “idealization” obscures rather than illuminates the problem.
