The prediction error minimization (PEM) account of brain function may explain perception, learning, action, attention and understanding. That at least is what its proponents claim, and I suggested in an earlier post that perhaps the brain does nothing but minimize its prediction error. So far I haven’t talked explicitly about consciousness. Yet, if PEM is true, and if consciousness is based in brain activity, then PEM should explain consciousness too. In this post I therefore speculate about what PEM might have to say about consciousness.
We talk about consciousness in many different ways: metaphysical, neurological, psychological, colloquial. Accordingly, there are many different ways a theory like PEM could engage with consciousness.
Starting at the top, could PEM deal with the hard problem of consciousness? No. It is easy to conceive of a system that minimizes prediction error yet has no phenomenal consciousness. So consciousness does not supervene on prediction error minimization.
We can move away somewhat from these metaphysical concerns and consider things from a more theoretical point of view. A system governed fully by the free energy principle is self-organizing and may be what defines a biological organism. If this (ambitious) idea is correct, then conceiving of a PEM system is conceiving of a biological organism. This at least leaves us in the right ballpark for the kinds of system that we (intuitively) accept as being conscious.
If we were to assume that PEM is sufficient for consciousness, then we would end up with ‘biopsychism’, which says that all biological organisms are conscious. Biopsychism is vastly more plausible than panpsychism. But I don’t believe PEM is sufficient for consciousness. This is partly because, phenomenally prejudiced as I am, I don’t think that very simple creatures are conscious in any way whatsoever.
This is then a problem for PEM’s reductionist aspirations: brains only do PEM and PEM is not sufficient for consciousness. I can only see one way to go here: consciousness must arise when the organism minimizes its prediction error in a certain way or to a certain degree.
Different ways and degrees of minimizing prediction error can be cashed out in terms of perceptual inference, active inference, precision optimization, and complexity reduction. We can then look at very simple non-conscious creatures and ask how they differ along these dimensions from conscious creatures (assuming we have independent means of distinguishing these kinds of creatures).
There is a proposal like this around. Hobson and Friston (pdf) have suggested that dreaming is an adaptation where the brain is engaged in complexity reduction via synthetic prediction errors. They propose that consciousness arose as a consequence of the ability to create such inner virtual reality. This places the theory in the company of Revonsuo and Metzinger, who have proposed similar ideas albeit without the PEM machinery. It would mean that creatures who do not dream are not conscious.
I think this is an intriguing idea. It is at least as appealing as some of the other theories of consciousness out there (IIT, HOT, loops, AIR, GNW). I don’t think it alone is going to be enough, however. One reason is that there must be something about waking perceptual inference specifically that relates to consciousness, and the dreaming theory doesn’t provide this link very strongly. A better strategy is to look at all the theories of consciousness, and identify the elements in them that are supported by different aspects of PEM (while being prepared to jettison the elements that are not). Combining these elements would then yield a patchwork-style PEM theory of consciousness.
That would be quite an ambitious project, but one worth undertaking. I also want to advertise a much more modest approach, which is in my view very appealing and satisfying to engage with. This approach begins by assuming that the creatures and states we are talking about are in fact conscious, and then it asks how PEM can explain the structure and content of such conscious experience.
This approach is modest not only because it assumes consciousness but also because it doesn’t say much about what distinguishes conscious from unconscious experience in the creature in question. It just lists what most of us would agree are aspects of conscious experience and asks how PEM would explain them. We might look at the first person perspective, at binding and unity, at the role of attention, at bodily self-awareness, and so on. Much of my book is taken up with this kind of approach to conscious experience but the job is far from done. There is much more to learn from PEM about our conscious lives. I think work by Anil Seth, Andy Clark and others is in this vein too.
Hopefully, some day it will all come together. We will then have a rich, systematic, empirically supported understanding of how the neural organ structures our conscious lives with prediction error minimization.