“Milk production at a dairy farm was low, so the farmer wrote to the local university, asking for help. A multidisciplinary team of professors was assembled, headed by a theoretical physicist, and two weeks of intensive on-site investigation took place. The scholars then returned to the university, where the task of writing the report was left to the team leader. Shortly thereafter, the physicist returned to the farm, saying to the farmer, ‘I have the solution, but it works only in the case of spherical cows in a vacuum.’” (https://en.wikipedia.org/wiki/Spherical_cow)
Whilst an obvious joke, the scenario highlights how physicists often make use of wildly unrealistic—idealized—assumptions with the aim of simplifying a problem to make it easier to solve. Physics is full of idealizations: frictionless planes, massless pulleys, uniform densities, point masses, zero gravitational effects; the list goes on. Indeed, all model-based sciences—physics, chemistry, biology, neuroscience, economics, social science, etc.—
make widespread use of idealization techniques to build models of parts of reality. In model-based science one starts by introducing an imaginary system (e.g., a neural network using back-propagation to reduce error and optimize learning), which is then investigated to understand learning in the brain. Or one might use an ideal gas, whose molecules are treated as dimensionless point particles that do not interact, to approximate the behavior of natural gases.
The first core message of The Idealized Mind (Kirchhoff 2025) is that models of the mind and brain are idealized in precisely this way: they make use of idealized terms that have no counterpart in reality. Its second message is that, despite being both simplified and idealized, models in cognitive science are still consistent with a version of scientific realism—the view that one actual and reasonable aim of science is to provide true (or approximately true) descriptions of reality. In the book, I argue that these two messages have foundational implications for our understanding of the mind and brain based on scientific models. In cognitive science, ‘mental representation’ and ‘computation’ are fundamental theoretical concepts. Most neuroscientists and philosophers work with the assumption that the mind functions as an information-processing system, akin to a computer. Hence, the standard view in the field is that cognitive processes (e.g., perceiving, remembering, or navigating) are realized by the occurrence, transformation, and storage of information-bearing structures (mental representations) of one kind or another. All models of mental representation and neural representation are idealized models with no counterpart in reality. Ideal gases do not exist. Similarly, the brain does not literally compute; neurons do not actually send messages; nor does the brain in any literal sense process information. Constructs such as representation, computation, communication, and information processing might (and only might) be useful for modeling the mind and brain. Yet they are nothing but abstract explanatory devices that exist mainly in the imagination of the scientific communities that use them to describe, explain, and predict aspects of the natural world.
Michael Kirchhoff’s The Idealized Mind (2025) presents an elegant but ultimately insufficient response to the foundational problems of consciousness research. His central claim is that cognitive scientific models, like those in physics, employ idealizations that have no direct counterpart in reality, yet remain consistent with scientific realism. Mental representation and neural computation, he argues, are idealized constructs comparable to frictionless planes or ideal gases. While Kirchhoff correctly identifies that these concepts lack literal correspondence to brain processes, he fails to recognize that this absence reveals not mere idealization but categorical incoherence.
The problem is not that cognitive models simplify reality. The problem is that their core concepts oscillate between incommensurable logical categories, rendering them explanatorily vacuous regardless of their idealized status.
The Disanalogy Between Physical and Cognitive Idealization
Kirchhoff’s defense rests on an analogy: just as physics employs frictionless planes and massless pulleys, cognitive science may legitimately employ idealized notions of representation and computation. But this analogy fails at a fundamental level.
When physicists speak of “frictionless planes,” the categorical status of every term is transparent. Friction is a measurable physical magnitude with clear operational definitions. Setting friction to zero is a controlled simplification within a categorially stable framework. The idealization involves quantitative reduction (friction coefficient → 0) of an already well-defined physical property. Crucially, we know exactly what is being idealized: a specific causal-mechanical resistance between surfaces.
What, by contrast, is being idealized when we speak of neural “representation” or “computation”? Is representation a physical relation between neural states and world states? A semantic relation requiring interpretation? A functional role in cognitive architecture? A phenomenal property of experience? The term systematically slides between these categories without acknowledgment.
Consider: If we say “neuron N represents stimulus S,” we can interpret this as:
• A causal-informational claim: N’s firing correlates reliably with S
• A semantic claim: N’s state has S as its content
• A functional claim: N plays the representational role in a cognitive system
• A phenomenal claim: The system experiences S when N fires
These are not different levels of grain or precision. They are different logical categories. Correlation is not semantics. Functional role is not phenomenal experience. Causal coupling is not intentionality.
Kirchhoff treats this categorical fluidity as unproblematic idealization. But idealization presupposes categorical stability. You can only idealize what you can first define within a consistent categorical framework. The physicist’s frictionless plane is an idealization because we know what friction is in non-idealized cases. But we do not know what neural “representation” is in the non-idealized case, because the concept itself is categorially unstable.
Idealization vs. Category Error
The crucial distinction Kirchhoff misses: idealization is not the same as category error.
An idealization simplifies or abstracts while preserving categorical coherence. A frictionless plane remains a physical system described in mechanical terms. The idealization removes complexity but maintains the logical form of explanation.
A category error, by contrast, treats entities or properties as belonging to logical types other than those they actually possess. Attributing semantic properties to syntactic systems, treating correlations as causal relations, confusing functional descriptions with phenomenal properties: these are not idealizations but categorical confusions.
When Kirchhoff admits that “the brain does not literally compute; neurons do not send messages; nor does the brain in any literal sense process information,” he is not describing idealization. He is acknowledging that these terms are being systematically misapplied. The brain is not a computer rendered simpler or more complex; it is not a computer at all, not even in idealized form.
If neurons do not “send messages,” then calling their activity “communication” is not an idealization but a metaphor elevated to theory. If the brain does not “process information” in any literal sense, then information-processing models do not describe the brain in idealized form; they describe something else entirely.
The Consequences for Explanation
This matters because without categorical consistency, models become unfalsifiable. They explain everything and nothing.
Consider Integrated Information Theory (IIT). Is Φ (phi) a physical measure, an information-theoretic quantity, a measure of causal power, or already a consciousness-quantity? Depending on which criticism arises, its defenders shift the categorical assignment. When pressed on physical implementation, Φ becomes information-theoretic. When pressed on information theory, it becomes phenomenal. When pressed on phenomenology, it becomes physical again.
This is not idealization. This is conceptual equivocation masquerading as theory. And Kirchhoff’s framework cannot diagnose it because he treats all categorical instability as legitimate idealization.
The same applies to the Free Energy Principle. Is free energy a thermodynamic magnitude, an information-theoretic measure, a variational principle, or a normative goal (“what the system ought to minimize”)? The FEP literature systematically exploits this ambiguity. When challenged on thermodynamics, it retreats to information theory. When challenged on information theory, it retreats to metaphor. When challenged on metaphor, it claims literal mechanistic description.
The categorical confusion here is so profound that it becomes visible through a simple substitution. Imagine proposing a theory of social attraction based on magnetism: humans form relationships because they are drawn together by forces analogous to magnetic attraction, social bonds exhibit polarity like magnetic poles, and community formation minimizes “social-magnetic energy.” This would be immediately recognized as absurd, not because magnetism and social behavior share no abstract structural similarities, but because the categories are incommensurable. Physical forces are not semantic relations. Magnetic attraction is not intentional bonding.
Yet this is precisely what the FEP does when it imports thermodynamic concepts (free energy, entropy) into the semantic domain of cognition and consciousness. The brain “minimizes free energy” in exactly the same way that humans “minimize social-magnetic energy”: not at all, except as metaphor. The structural parallels are mathematical artifacts, not genuine explanatory bridges. A model can be mathematically isomorphic to reality without being categorially appropriate to it.
Kirchhoff would classify this as productive scientific modeling with idealized terms. But it is not. It is the systematic exploitation of categorical instability to immunize theory against empirical constraint.
What Kirchhoff Should Demand (But Doesn’t)
A genuine response to the foundational problems of consciousness research requires not acceptance of idealization but insistence on categorical clarity as a precondition for any modeling whatsoever.
Before asking whether a model is empirically adequate or scientifically realistic, we must ask: What logical category does each core concept occupy? Is “consciousness” a causal process, an emergent property, a phenomenal quality, a functional state? Is “information” a physical measure, a semantic construct, a statistical relation, a thermodynamic quantity?
Only when these questions receive stable answers can we assess whether a model idealizes or merely equivocates. Only then can we distinguish between legitimate scientific abstraction and categorical confusion.
Kirchhoff’s framework provides no resources for this task. By treating all non-literal language as acceptable idealization, he grants categorical immunity to precisely the theories that most require categorical scrutiny.
The Methodological Alternative
The alternative is not to reject modeling or idealization but to demand categorical consistency before idealization. This means:
1. Categorical specification: Every theoretical term must be assigned a stable logical category (physical, functional, semantic, phenomenal).
2. Category preservation: Explanatory moves must preserve categorical boundaries. Correlations cannot become semantic relations. Functional roles cannot become phenomenal properties.
3. Explicit category crossing: When a theory requires connecting different categories (e.g., neural activity to phenomenal experience), this must be explicitly marked as a bridging problem, not dissolved through terminological slide.
4. Falsification through category violation: Theories that systematically violate categorical boundaries should be rejected not as empirically inadequate but as conceptually malformed.
This is not a counsel of perfection. It is the minimum requirement for explanatory coherence. Without it, theories can retrospectively adjust their categorical commitments to accommodate any observation. They become unfalsifiable not because they are too good but because they are too indefinite.
The Real Problem with Consciousness Research
Kirchhoff identifies that concepts like representation and computation lack direct empirical referents. But he draws the wrong conclusion. The problem is not that these concepts are idealized. The problem is that they are categorially incoherent.
The four systematic biases that recur in contemporary consciousness research—mixing levels of description, labeling phenomena with foreign models, confusing syntax with semantics, and false reduction—are not problems that idealization can address. They are categorical confusions that idealization conceals.
When theories mix biological, psychological, and phenomenal levels of description, claiming one “produces” another, this is not idealization but category error. When they import concepts from information theory or thermodynamics without justifying their categorical transfer, this is not modeling but metaphorical projection. When they attribute semantic properties to syntactic systems, this is not abstraction but conceptual confusion.
Kirchhoff’s insistence that we can maintain scientific realism despite idealization is, in this light, beside the point. The question is not whether idealized models can be realistic. The question is whether models with unstable categorical foundations can explain anything at all.
Conclusion
Kirchhoff’s The Idealized Mind offers a sophisticated defense of cognitive scientific modeling in the face of widespread recognition that its core concepts lack literal application. But sophistication is not the same as adequacy. By treating categorical instability as acceptable idealization, Kirchhoff provides theoretical absolution to precisely the conceptual practices that render consciousness research explanatorily impotent.
The problem is not that neural representation and computation are idealized. The problem is that no one, including Kirchhoff, can specify what logical category these concepts occupy. Until that specification is provided, calling them “idealizations” is not a defense but an evasion.
Scientific realism about consciousness research cannot be salvaged by embracing idealization. It requires, first and foremost, categorical clarity. Models must specify what kind of thing they are modeling before they can claim to model it accurately, approximately, or idealistically.
Kirchhoff has shown that cognitive science relies on concepts without direct empirical referents. The proper conclusion is not that this is acceptable scientific practice. It is that contemporary consciousness research has yet to achieve the conceptual clarity necessary for genuine scientific explanation.
Idealization cannot absolve category error. And category error, not mere idealization, is the foundational problem of the philosophy of mind.