The Physics of ‘As If’
Karl Friston1
1 Queen Square Institute of Neurology, University College London
Milk production at a dairy farm was low, so the farmer wrote to the local university, asking for help. A multidisciplinary team of professors was assembled, headed by a theoretical physicist, and two weeks of intensive on-site investigation took place. The scholars then returned to the university, where the task of writing the report was left to the team leader. Shortly thereafter, the physicist returned to the farm, saying to the farmer, “I have the solution, but it works only in the case of spherical cows in a vacuum.”1
The physicist’s ‘spherical’ cow is an amusing place to start when considering the role of models in physics, particularly the physics of sentience afforded by the free energy principle (FEP). A careful reading of the foundational FEP literature will reveal an abundance of phrases like “as if” and “appears to”. So, what does “as if” imply for models of minds and brains? This is one of the central questions discussed by Kirchhoff in The Idealized Mind.
The notion of a ‘model’ in the FEP literature has a technical meaning. It refers to, and only to, a generative model. A generative model is just a probability distribution over observations or sensations and their hidden or latent causes. Because the model can always be decomposed into a likelihood and a prior, the generative model is also a ‘map’ in the formal sense of a morphism from (hidden) causes to (observable) consequences. With a careful definition of what it is to be an observer—namely, to possess a Markov boundary or blanket (Kirchhoff et al., 2018)—it is fairly straightforward to show that the dynamics on the interior of the blanket (e.g., a brain) entail a generative model of the sensorium (the sensory sector of the Markov blanket). So, what does this mean?
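This factorisation of a generative model into a prior over causes and a likelihood mapping causes to consequences, and its inversion during inference, can be sketched numerically. The following is a minimal illustration with invented distributions (none of these numbers come from the text); it shows the joint distribution p(o, s) = p(o | s) p(s), the generative direction (sampling), and the inferential direction (Bayesian inversion):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete generative model: a joint distribution over
# two hidden causes s and two observations o, factorised as
# p(o, s) = p(o | s) p(s).
p_s = np.array([0.7, 0.3])            # prior over hidden causes
p_o_given_s = np.array([[0.9, 0.1],   # likelihood: rows index s, columns o
                        [0.2, 0.8]])

# The joint distribution -- the 'model' in the technical sense:
p_os = p_o_given_s * p_s[:, None]     # p(o, s) = p(o | s) p(s)

# The generative direction: the map from causes to consequences.
s = rng.choice(2, p=p_s)              # sample a hidden cause
o = rng.choice(2, p=p_o_given_s[s])   # sample an observation given that cause

# The inferential direction: invert the map to recover a posterior
# over causes, p(s | o) = p(o | s) p(s) / p(o).
posterior = p_os[:, o] / p_os[:, o].sum()
```

The observer never accesses s directly; it only ever computes the posterior from o, which is the sense in which it behaves “as if” it were inverting a model of how its sensations were caused.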
It means that it looks “as if” the brain is making sense of sensory inputs under a generative model of how those inputs were caused, despite never having direct access to the cause-effect structures generating the sensorium. In what sense should we interpret “as if”? In this context, “as if” implies a mathematical equivalence that lends itself to multiple interpretations. In the FEP, this equivalence corresponds to showing that self-organising, spatiotemporal neuronal dynamics can be read as an inference process (a.k.a., self-evidencing) under a generative model or map (Hohwy, 2016); see Box 1. This is Kirchhoff’s conclusion in chapter 9. He doesn’t mean it in the joking sense of the spherical cow. He argues that generative models and the notion of the brain as engaged in probabilistic inference can only obtain by idealizing the brain. So, are there any spherical cows lurking in this reading or interpretation?

I think the answer is yes and no.2 To understand this, we can revisit the spherical cow from the point of view of the renormalisation group. The renormalisation group allows one to formalise symmetries or invariances over scales. Crucially, one scale inherits lawfully from another by coarse-graining or compressing information to recover the same dynamics or physics at successive levels (Watson et al., 2022). This affords a description of a system at multiple scales, each perfectly apt to describe the physics at the scale in question. For example, for an astronomer, the movement of heavenly bodies (e.g., the moon) can be described exactly—with no approximation—by treating each as a spherical body, whose only attributes are position and velocity. However, if one wanted to land on the moon, one would need a finer scale of description, right down to the location of various craters and putative landing sites. The point here is that the statement that the moon moves “as if” it were a spherical body entails no approximation at the scale in question. The same applies to the FEP; namely, at a suitable scale, neuronal dynamics can be described exactly as an inference or sense-making process under a generative model (Friston, 2019). See Box 1.
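The claim that a coarse-grained description can be exact, rather than approximate, at its own scale admits a simple numerical demonstration. In this toy sketch (my own illustration, not drawn from the text), a 'body' of many point masses falls in a uniform field; coarse-graining to its centre of mass yields a one-particle description whose dynamics match the fine-grained average exactly, with no approximation, just as the 'spherical moon' is exact for orbital mechanics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fine-grained description: N point masses with scattered positions
# and velocities, falling in a uniform gravitational field.
N, dt, steps = 100, 0.01, 50
pos = rng.normal(size=(N, 2))
vel = rng.normal(size=(N, 2))
g = np.array([0.0, -9.81])

# Coarse-grained description: a single 'particle' at the centre of mass.
com_pos, com_vel = pos.mean(axis=0), vel.mean(axis=0)

for _ in range(steps):
    # Fine-grained update (N particles).
    vel += g * dt
    pos += vel * dt
    # Coarse-grained update (one particle) -- the same physics, one scale up.
    com_vel += g * dt
    com_pos += com_vel * dt

# Because averaging commutes with the (linear) dynamics, the coarse
# description tracks the fine one exactly: no approximation at this scale.
assert np.allclose(com_pos, pos.mean(axis=0))
```

The exactness depends on the dynamics commuting with the coarse-graining; that is what the renormalisation group formalises in general.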
The mathematical equivalence in Box 1 licences a teleological or functionalist interpretation of any brain at a suitable scale (i.e., at the scale of neuronal population or ensemble dynamics). From the perspective of the FEP, this reading is the only thing that licences the use of the word “model” or “map”. In turn, this means that all models or maps just are parts of my generative model—and possibly yours, if you are sufficiently like me. Interestingly, this includes models of you, models of me, scientific models, philosophy of science models and, of course, the FEP itself.
Notes
1. Spherical cow – Wikipedia
2. A technical but interesting qualification to the above arguments concerns the approximations made in physics to simplify (descriptions of) things. Ubiquitous examples here are mean-field approximations, adiabatic approximations and the Laplace approximation. Interestingly, the FEP can be applied under these approximations; for example, if we apply all three (mean-field, adiabatic and Laplace) approximations to the dynamics in Box 1, we end up with predictive coding (Friston and Kiebel, 2009; Hohwy, 2012; Rao and Ballard, 1999) and the Bayesian brain (Clark, 2013, 2016; Knill and Pouget, 2004). Given that Bayesian inference is abductive—and there is no truth pointing—one could argue that these approximations or simplifications are part and parcel of self-evidencing. This is because the (log) evidence for our generative models can always be decomposed into accuracy minus complexity (Penny, 2012). Therefore, if we are self-evidencing, we may also minimise complexity, such that it “looks as if” we are making simplifying approximations.
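The decomposition of log evidence into accuracy minus complexity (Penny, 2012) can be verified directly for a small discrete model. In the sketch below (the distributions are invented for illustration), accuracy is the expected log likelihood under the posterior and complexity is the KL divergence between posterior and prior; when the posterior is exact, the decomposition holds with equality:

```python
import numpy as np

# Hypothetical discrete model: two hidden states, one observation o.
p_s = np.array([0.7, 0.3])             # prior over hidden states
p_o_given_s = np.array([0.9, 0.2])     # likelihood of o under each state

evidence = p_o_given_s @ p_s           # p(o) = sum_s p(o | s) p(s)
q = p_o_given_s * p_s / evidence       # exact posterior q(s) = p(s | o)

accuracy = q @ np.log(p_o_given_s)     # E_q[log p(o | s)]
complexity = q @ np.log(q / p_s)       # KL[q(s) || p(s)] >= 0

# With the exact posterior: log p(o) = accuracy - complexity.
assert np.isclose(np.log(evidence), accuracy - complexity)
```

Because complexity is non-negative, raising evidence for fixed accuracy means lowering complexity, which is the formal sense in which a self-evidencing system “looks as if” it is making simplifying approximations.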
References
Clark, A., 2013. Whatever next? Predictive brains, situated agents, and the future of cognitive science. The Behavioral and brain sciences 36, 181-204.
Clark, A., 2016. Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press.
Friston, K., 2019. A free energy principle for a particular physics, eprint arXiv:1906.10184.
Friston, K., Da Costa, L., Sakthivadivel, D.A.R., Heins, C., Pavliotis, G.A., Ramstead, M., Parr, T., 2023. Path integrals, particular kinds, and strange things. Physics of Life Reviews 47, 35-62.
Friston, K., Kiebel, S., 2009. Predictive coding under the free-energy principle. Philosophical transactions of the Royal Society of London. Series B, Biological sciences 364, 1211-1221.
Hohwy, J., 2012. Attention and conscious perception in the hypothesis testing brain. Front Psychol 3, 96.
Hohwy, J., 2016. The Self-Evidencing Brain. Nous 50, 259-285.
Kirchhoff, M., 2025. The Idealized Mind: From Model-based Science to Cognitive Science. Cambridge, MA: The MIT Press.
Kirchhoff, M., Parr, T., Palacios, E., Friston, K., Kiverstein, J., 2018. The Markov blankets of life: autonomy, active inference and the free energy principle. J R Soc Interface 15.
Knill, D.C., Pouget, A., 2004. The Bayesian brain: the role of uncertainty in neural coding and computation. Trends Neurosci 27, 712-719.
Penny, W.D., 2012. Comparing Dynamic Causal Models using AIC, BIC and Free Energy. Neuroimage 59, 319-330.
Rao, R.P.N., Ballard, D.H., 1999. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience 2, 79-87.
Watson, J.D., Onorati, E., Cubitt, T.S., 2022. Uncomputably complex renormalisation group flows. Nature Communications 13, 7618.