Response to Karl Friston: Pay Attention to Spherical Cows

The Idealized Mind (2025) suggests that the free energy principle (FEP) in theoretical neuroscience unifies all the different arguments covered in the book. The FEP, the book argues, is a grand unifying theory (GUT) in more than one way.

Friston’s commentary opens with the physicists’ spherical cow. In The Idealized Mind, chapter 9 seeks to establish that the possibility of explaining the brain and cognition by appeal to generative models and probabilistic inference requires idealizing the mind and brain. The key question Friston asks is: are there any spherical cows lurking in the FEP interpretation of the brain and self-organization? From Box 1 in Friston’s commentary, we can see that the FEP implies that the internal paths of least action minimise variational free energy (see Mann et al. (2022) for a philosophically friendly user’s guide to the FEP). What is crucial is what follows from this; namely, “that it looks as if internal dynamics are inferring external causes of sensory data, under certain conditions.” Is this interpretation of neural dynamics appropriate? In answering this question, Friston appeals to the renormalization group (RG). RG allows one to recover the same description of a system over multiple scales – just as the FEP allows (Kirchhoff et al. 2018). By aligning RG with the FEP, Friston argues that “at a suitable scale neuronal dynamics can be described exactly as an inference … process under a generative model.” This is correct. However, the FEP is still left with its own spherical cows. Friston’s example is: “For an astronomer, the movement of heavenly bodies (e.g., the moon) can be described exactly – with no approximation – by treating it as a spherical body.” Precisely so: exact description can be achieved only by placing the heavenly bodies in a theoretical vacuum (see also Kirchhoff et al. 2025).
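The claim that dynamics minimising variational free energy “look as if” they infer external causes can be made concrete with a toy model. The sketch below is my own illustration, not anything from Friston’s commentary; it assumes a one-dimensional linear-Gaussian generative model in which a hidden cause with a Gaussian prior generates noisy sensory data. Pure gradient descent on variational free energy then settles on the exact Bayesian posterior mean, so the dynamical process can be redescribed, without approximation, as inference:

```python
# Toy illustration: gradient flow on variational free energy looks like inference.
# Assumed generative model (for illustration only):
#   hidden cause x ~ N(mu_prior, s_prior^2); data y = x + noise, noise ~ N(0, s_data^2)
# Under a delta/Laplace approximation with point estimate phi, variational free
# energy reduces (up to constants) to a sum of precision-weighted prediction errors:
#   F(phi) = (y - phi)^2 / (2*s_data^2) + (phi - mu_prior)^2 / (2*s_prior^2)

mu_prior, s_prior = 0.0, 1.0   # prior over the hidden cause
s_data = 0.5                   # sensory noise
y = 2.0                        # observed sensory datum

def dF_dphi(phi):
    """Gradient of variational free energy with respect to the internal state."""
    return -(y - phi) / s_data**2 + (phi - mu_prior) / s_prior**2

phi = mu_prior                 # internal state starts at the prior mean
for _ in range(10_000):        # pure gradient dynamics; no "inference" written in
    phi -= 0.01 * dF_dphi(phi)

# Exact Bayesian posterior mean for comparison (conjugate Gaussian update)
w_data, w_prior = 1 / s_data**2, 1 / s_prior**2
posterior_mean = (w_data * y + w_prior * mu_prior) / (w_data + w_prior)

print(phi, posterior_mean)     # the dynamics settle on the posterior mean
```

Nothing in the update rule mentions beliefs or evidence; the inferential description is available only because, for this idealized model, the fixed point of the dynamics coincides with the Bayesian posterior.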

This supports the conclusion I draw about the FEP in the book. The FEP, it turns out, is akin to Galileo’s law of equal heights. Galileo asked us to imagine a U-shaped cavity, to place a ball on the edge of one side, and to let the ball roll down into the cavity. What is the trajectory of the ball? Galileo argued that the ball would have to reach the same height on the other side irrespective of the shape of the cavity. This is Galileo’s law of equal heights. As we know all too well, the ball’s track is not perfectly smooth and the ball faces air resistance, so an actual experiment would not yield the answer Galileo gave. Yet Galileo’s answer did not depend on actual conditions. It depended on idealized conditions. Galileo’s point was: the law of equal heights is valid in an idealized model. So it is with the FEP. This is no discredit to the FEP. It is in the fine company of Galileo Galilei.
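The contrast between the idealized law and actual conditions is easy to simulate. The sketch below is my own illustration, assuming a parabolic cavity and a linear drag term: a ball released from rest at unit height regains its starting height on the far side when dissipation is zero, and falls short of it otherwise.

```python
# Galileo's law of equal heights in a parabolic cavity h(x) = x^2 (units scaled
# so g = 1). The ball is released from rest at x = -1, i.e. at height 1.
# Equation of motion along x (small-slope approximation, for illustration):
#   x'' = -2*x - c*x'   where c is a linear drag coefficient (c = 0: idealized).

def max_height_far_side(c, dt=1e-4, t_max=10.0):
    """Integrate with semi-implicit Euler; return the peak height reached at x > 0."""
    x, v = -1.0, 0.0
    h_max = 0.0
    for _ in range(int(t_max / dt)):
        v += (-2.0 * x - c * v) * dt   # acceleration: restoring force minus drag
        x += v * dt
        if x > 0.0:
            h_max = max(h_max, x * x)  # height on the far side of the cavity
    return h_max

ideal = max_height_far_side(c=0.0)  # idealized: no drag, equal height recovered
real = max_height_far_side(c=0.5)   # with drag, the ball falls short

print(ideal, real)
```

The law of equal heights holds exactly only in the c = 0 model; the simulation with drag stands in for the actual experiment, where the ball never quite gets back up.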

References

Kirchhoff, Michael. 2025. The Idealized Mind: From Model-based Science to Cognitive Science. Cambridge, MA: The MIT Press.

Kirchhoff, Michael, Julian Kiverstein, and Ian Robertson. 2025. “The literalist fallacy and the free energy principle: Model-building, scientific realism, and instrumentalism.” The British Journal for the Philosophy of Science. doi: 10.1086/720861

Mann, Stephen, Ross Pain, and Michael D. Kirchhoff. 2022. “Free energy: a user’s guide.” Biology and Philosophy 37: 1-35.

Kirchhoff, Michael, Thomas Parr, Ensor Palacios, Karl Friston, and Julian Kiverstein. 2018. “The Markov blankets of life: autonomy, active inference and the free energy principle.” Journal of the Royal Society Interface 15: 20170792.

One comment

  1. As I mentioned in my response to Friston’s commentary, the primary problem is the illegitimate mapping of a physical model onto semantic phenomena. This is not a matter of idealization but of categorical misapplication.
    When the FEP claims that “internal dynamics are inferring external causes of sensory data,” it projects syntactic processes (neural activity patterns, electrochemical dynamics) onto semantic categories (inference, expectation, belief, prediction). Syntax does not become semantics through coarse-graining or scale transformation. No amount of renormalization can bridge this categorical gap.
    Galileo’s ball rolling in a U-shaped cavity remains a physical system at every level of idealization. The friction we neglect is a physical property. The air resistance we ignore is a physical property. The perfectly smooth surface is an idealized physical property. We stay within the physical domain throughout.
    But when FEP describes neural dynamics as “inference under a generative model,” we jump from the physical domain (neural activity) to the semantic domain (propositional content, intentional states, epistemic processes). This is not idealization within a category but projection across incommensurable categories.
    The ball does not “infer” anything, even under idealized conditions. Neural populations likewise do not infer, predict, or form expectations. They exhibit dynamic patterns that we choose to interpret using semantic vocabulary. But our interpretive choice does not transform the categorical nature of what is being described.
    Second Category Error: The Physics-Biology Confusion
    There is, however, a second and equally fundamental category error that has received far less attention: the illegitimate reduction of biological systems using the methods appropriate only to physical systems.
    When you reduce a non-living physical system to its components, you obtain particles, discrete elements whose properties can be specified independently and whose interactions follow additive principles. The whole is the sum of its parts, and decomposition preserves the explanatory structure.
    But when you reduce a living biological system, you do not obtain particles. You obtain an autocatalytic, recursively organized set of processes. The minimal unit of biological explanation is not a discrete component but a circular, self-maintaining loop. Life does not consist of parts that interact; it consists of processes that mutually constitute each other.
    If you attempt to reduce a biological system to particles in the way physics reduces physical systems, you destroy the system both ontologically and epistemologically. Ontologically, because the circular organization that defines life disappears the moment you break the loop. Epistemologically, because the explanatory principle, self-referential maintenance, cannot be recovered from the properties of isolated components.
    This is a rarely acknowledged category error at the heart of physicalism. It assumes that the reductive methods successful in physics apply equally to biology. But they do not. Physical systems are organized additively; biological systems are organized recursively. Physical explanation proceeds by decomposition; biological explanation requires reconstruction of circular causality.
    How This Applies to FEP
    The FEP commits both category errors simultaneously, creating a double categorical confusion.
    First, it treats neural dynamics (a biological-physical process) as if it were Bayesian inference (a semantic-epistemic process). This is the syntax-semantics error.
    Second, it treats the brain as if it were a physical system amenable to the same reductive methods as non-living matter. It decomposes neural activity into components (neurons, populations, layers) and attempts to explain cognition by the additive interaction of these components. This is the physics-biology error.
    But the brain is not an aggregate of neurons any more than life is an aggregate of molecules. It is a recursively organized, autocatalytically stable system that maintains its own organization through circular processes. Any reduction to “components” that breaks these loops destroys what needs to be explained.
    The FEP’s appeal to “internal dynamics minimizing variational free energy” already presupposes this decomposition. It treats the brain as a thermodynamic system (physical category) that can be analyzed into states, energies, and potentials. But thermodynamic systems are not recursive systems. They do not maintain themselves through circular causality. They follow energy gradients, not autocatalytic loops.
    Why Galileo’s Analogy Fails on Both Counts
    Your invocation of Galileo’s law of equal heights is instructive, but not in the way you intend. Galileo’s idealization works because:
    1. It remains within the physical domain (kinematics, dynamics, energy)
    2. It analyzes a non-living system using legitimate physical decomposition
    3. The idealized conditions (no friction, no air resistance) are quantitative reductions of physical properties that exist in the non-idealized case
    The FEP, by contrast:
    1. Jumps from physical to semantic categories (neural dynamics to inference)
    2. Analyzes a living, recursive system using methods appropriate only to non-living, additive systems
    3. The “idealized conditions” are not quantitative reductions but categorical transformations
    When Galileo neglects friction, he removes a physical property while preserving the physical framework. When FEP describes neural dynamics as Bayesian inference, it does not remove something but adds something categorically alien: semantic interpretation.
    When Galileo decomposes the ball’s motion into position, velocity, and energy, he uses legitimate physical reduction. When FEP decomposes the brain into components that minimize free energy, it destroys the recursive organization that characterizes living, cognitive systems.
