Thanks to John Schwenkler for the invitation to guest-blog this week about my new book Surfing Uncertainty: Prediction, Action, and the Embodied Mind (Oxford University Press NY, 2016).
In the previous post, I spoke about the emerging view of the perceiving brain as a prediction machine. Brains like that are not cognitive couch-potatoes, passively awaiting the next waves of sensory stimulation. Instead, they are pro-active prediction engines constantly trying to anticipate the shape of the incoming sensory signal. I sketched a prominent version of this kind of story (for a fuller sketch try here). Today, I want to contrast two ways of understanding that story.
First Way: Conservative Predictive Processing (CPP)
Suppose we ask: What’s going on when a predictive processing (PP) system deals with sensory inputs? What happens, CPP says, is best seen as the selection of the hypothesis best able (given prior knowledge and current estimated sensory uncertainty) to explain the sensory data. Prediction error, on this account, signifies information not yet explained by an apt hypothesis.
This kind of vision dominated the early history of work on the predictive brain, and is visible in many recent treatments (including some of my own) too. It is not exactly wrong. But I increasingly believe that it is at least potentially misleading.
Second Way: Radical Predictive Processing (RPP)
RPP flows, it seems to me, from taking some further aspects of the PP framework very seriously indeed.
The place to start is with Karl Friston’s notion of ‘active inference’. The core idea is that there are two ways for brains to match their predictions to the world. Either find the prediction that best accounts for the current sensory signal (perception) or alter the sensory signal to fit the predictions (action). If I predict I am seeing my cat, and error ensues, I might recruit a different prediction (e.g. ‘I am seeing the computer screen’). Or I might move my head and eyes so as to bring the cat (who as it happens is right here beside me on the desk) into view. Importantly, that flow of action can itself be brought about, some of this work suggests, by a select sub-set of predictions. Action is thus (see also Lotze, 1852; James, 1890) a kind of self-fulfilling prophecy (an idea that has resonances in contemporary sports-science). The resulting picture is one in which perception and action are manifestations of a single adaptive regime geared to the reduction of organism-salient prediction error.
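The two routes can be made vivid with a toy sketch (my own illustration, not from the book, and nothing like a real active-inference model): a single scalar 'hypothesis' and a single scalar 'sensory signal', where the very same prediction error can be shrunk either by revising the hypothesis (perception) or by changing the signal (action).

```python
# Toy sketch of the two routes of 'active inference' (illustration only):
# one quantity to predict, one error term, two ways to reduce it.

def prediction_error(hypothesis, sensory_input):
    """Squared mismatch between predicted and received signal."""
    return (hypothesis - sensory_input) ** 2

def perceive(hypothesis, sensory_input, rate=0.5):
    """Perception: nudge the hypothesis toward the sensory input."""
    return hypothesis + rate * (sensory_input - hypothesis)

def act(hypothesis, sensory_input, rate=0.5):
    """Action: change the input (move head and eyes) toward the prediction."""
    return sensory_input + rate * (hypothesis - sensory_input)

hypothesis, signal = 0.0, 1.0   # predicted vs actual sensory value
for _ in range(20):
    if prediction_error(hypothesis, signal) < 1e-6:
        break
    # Both routes shrink the same error term:
    hypothesis = perceive(hypothesis, signal)   # update the model...
    signal = act(hypothesis, signal)            # ...or make the world match
```

The point of the sketch is just that there is one cost function and two levers, which is why perception and action can count as manifestations of a single regime.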
This already hints at a beguiling picture of embodied flow in which action and perception are continuously co-constructed around the same basic computational routine. For perception and action are now locked in a circular causal embrace. Perceptual states inform actions that help elicit, from the world, the very streams of multi-modal stimulation that they predict.
This circular causal embrace is further structured by an endemic drive towards frugality. This is because the goodness of a predictive model, within PP, is a joint function of the success of the model in guiding apt actions and responses, and the frugality of the model itself (a model with fewer parameters is, ceteris paribus, more frugal than one with more parameters). Ultimately, this flows from a deeper imperative to maintain organismic viability at minimal (long-term average) energetic cost. But proximally, the effect is to favor the least complex (fewest parameters) models that will serve our needs.
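The trade-off can be put crudely as a score (a minimal sketch of my own, with made-up numbers and a made-up penalty term, in the spirit of familiar complexity penalties): fit plus a cost per parameter, so that of two models that predict equally well, the one with fewer parameters wins.

```python
# Minimal sketch of the frugality idea (illustration, not the book's math):
# a model's goodness trades off predictive fit against complexity.

def model_score(squared_errors, n_params, penalty=1.0):
    """Lower is better: total prediction error plus a cost per parameter."""
    return sum(squared_errors) + penalty * n_params

# Two hypothetical models with identical predictive success...
rich_model   = model_score(squared_errors=[0.1, 0.2, 0.1], n_params=10)
frugal_model = model_score(squared_errors=[0.1, 0.2, 0.1], n_params=3)

assert frugal_model < rich_model  # ceteris paribus, frugality wins
```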
It is at about this point, it seems to me, that CPP and RPP really start to diverge. CPP seems to suggest that action makes no fundamental difference to the basic (Helmholtz/Gregory-style) story. It claims we can (and should) still think about the prediction-error-minimizing regime in terms of neurally-generated perceptual hypotheses, and inferences to the best such hypotheses. Moreover (still according to CPP) we should see the role of the prediction-generating system – the ‘generative model’ – as that of recapitulating the causal structure of the world so that the brain becomes an ‘internal mirror of nature’. Good mirrors, one might say, make the best predictions about the shape and structure of the reflected world.
I think the ‘mirror’ picture and the ‘best hypothesis’ picture are importantly in tension with the suggestions concerning frugality and the circular causality binding perception and action. To see this, reflect that the predictive brain simultaneously drives perception and action so as to reduce organism-salient prediction error. That means action gets called upon to reduce complexity too. Consider, for example, the way some diving seabirds (gannets) predict time-to-impact according to the relative rate of expansion of the image in the optic array. Such strategies involve the use of cheaply computed cues, available only to the active organism, and selected for the control of specific types of action (such as pre-impact wing-folding).
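The gannets' cue is strikingly cheap in computational terms: time-to-impact can be read directly off the optic array as the image's angular size divided by its rate of expansion (the 'tau' variable of Lee and Reddish's classic gannet work), with no need to recover distance or speed. The numbers below are hypothetical, just to show the shape of the computation.

```python
# A toy sketch of the gannets' cheap cue: time-to-contact from the optic
# array alone, with no estimate of distance or closing speed required.

def time_to_contact(angular_size, expansion_rate):
    """tau = theta / (d theta / dt): approximate seconds until impact."""
    return angular_size / expansion_rate

# Hypothetical values: the target subtends 0.2 rad and the image is
# expanding at 0.4 rad/s, so impact is about half a second away --
# time to fold the wings.
tau = time_to_contact(angular_size=0.2, expansion_rate=0.4)
print(tau)  # 0.5
```

One division, computed from quantities available only to the moving animal: that is the sense in which such strategies are frugal and action-bound at once.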
It can sometimes seem as if these types of strategy are the polar opposites of the generative-model-based strategies highlighted by PP. A major goal of the book is to argue that this is not the case. Instead, such strategies fall neatly into place once we re-orient our thinking around the role of frugal predictions for the control of action.
Better yet, the full PP apparatus provides a potential mechanism for toggling between richer (intuitively model-based) and less rich (frugal, or ‘shallow model-based’) strategies, according to self-estimated sensory uncertainty. Roughly, you continuously estimate which predictive strategy best reduces organism-salient sensory uncertainty here-and-now, given task and context. There are now promising explorations of such uses of ‘precision-weighting’ to accomplish strategy switching.
At the limits, precision-weighted re-balancings could temporarily select a purely feed-forward strategy, essentially switching off higher-level influence whenever raw sensory data can unambiguously specify correct response. PP thus implements (within a single processing regime) strategies that can be knowledge-rich, knowledge-sparse, and all-points-in-between. This hints at a possible reconciliation between key tenets of ecological psychology and the broader commitments of an information-processing paradigm.
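The limiting cases are easy to see in a stripped-down sketch (again my own illustration, with invented numbers): treat the current estimate as a precision-weighted blend of top-down prediction and bottom-up input. Cranking sensory precision up approximates the pure feed-forward strategy; cranking it down lets the prior carry the day.

```python
# Minimal sketch of precision-weighted balancing (illustration only):
# the estimate is a precision-weighted average of prediction and input.

def posterior(prior, sensory, prior_precision, sensory_precision):
    """Precision-weighted blend of top-down prediction and sensory evidence."""
    total = prior_precision + sensory_precision
    return (prior_precision * prior + sensory_precision * sensory) / total

prior, sensory = 0.0, 10.0

# Unambiguous, high-precision input: higher levels effectively switched off.
assert abs(posterior(prior, sensory, 1.0, 1e6) - sensory) < 0.01

# Noisy, low-precision input: the prediction dominates.
assert abs(posterior(prior, sensory, 1.0, 1e-6) - prior) < 0.01
```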
There’s much more to say about all this. Of special importance is the way PP turns out (or so I argue) to deliver an affordance-based model of perception and action. But the blog-moral of the day is just that we should resist the common thought that the prediction error signal encodes the sensory information that is as-yet-unexplained, and that its role is to recruit – or help learn – a better hypothesis, or to install or activate a ‘mirror of nature’. Such glosses obscure a key effect of the whole PP framework: mashing perception, cognition, and action together within a single, action-oriented, frugality-obsessed processing regime.
What might we say instead? I think it would be better to say (and here comes the promised Radical Predictive Processing gloss) that prediction error signals sensory information that is not yet leveraged to guide a rolling set of apt engagements with the world. I mean this to be true not just when prediction error is resolved (or even resoluble) by action. Rather, the idea is that even in rare cases of ‘pure’ perception, the way the world is parsed reflects nothing so much as our ongoing needs for action and intervention. The job of the prediction error signal, I want to say, is thus to provide an organism-computable anchor for self-organizing processes apt for the control of action, motion, and environmental intervention.
By controlling (or better, by helping to control) our rolling engagements with the world, prediction-error minimizing routines become positioned to ‘fold in’ work done beyond the brain, and beyond the body. By calling upon action to resolve salient (high precision) prediction error, PP allows arbitrary amounts of work to be done by bodily form (morphology) and by the use of all manner of environmental features and ‘scaffoldings’. These range from counting on our fingers, to using an abacus or an iPhone, all the way to co-operating with other agents. (Quick aside for anyone wondering about my other work, on the so-called Extended Mind – this is why the RPP vision, although not itself an argument for extended minds, is exactly as compatible with existing arguments for the extended mind as any other dynamical, self-organizing, story).
To wrap up, I believe that the RPP ‘rebranding’ is both kind of cosmetic and kind of important. It’s cosmetic in that the (active inference) mathematics and algorithms are unaltered. So if this is what was already meant when glossing that story with talk of hypotheses and mirrors of nature, that’s fine (albeit potentially misleading). But it’s important in that the RPP gloss helps us see the deep complementarity between this view of what brains do and the large bodies of work that stress the complex interplay between brain, body, and world.
In the next post, I’ll be continuing the positive story about PP as an account of embodied experience, before finally turning to the dark side and examining some potential trouble spots.