Consciousness and the Overton Window of Science, Part II

By Jonathan Birch

(See the other posts in this series here!)

Part II: The integrated information theory as outlier 

Part I traced the gradual entry of consciousness into the Overton window of science. I suggested that the rise of computational theories of consciousness, such as the global neuronal workspace theory (GNW), has been important in getting consciousness science a fair hearing from sceptical audiences. But these theories should be seen as part of a somewhat austere materialist project: a project that aims to dissolve rather than solve the hard problem. I want to turn now to a very different type of theory with very different aims: the integrated information theory (IIT).

IIT is often portrayed as a “rival” or “alternative” to computational theories of consciousness, but this strikes me as a misleading description. It is approaching the topic from a completely different direction, one that rejects everything I said in my last post. Philosophers Francis Fallon and James Blackmon call it right when they remark that IIT “though a player in the debate within neuroscience over consciousness … requires profound revision, or at the very least reframing, of how we understand the nature of physical reality.”  

I had not appreciated, until recently, the influence of the idealist tradition in philosophy on IIT. Consider the preprint “Only what exists can cause: An intrinsic view of free will”, written by five of IIT’s leading lights: Giulio Tononi, the original architect of the theory, plus Larissa Albantakis, Melanie Boly, Chiara Cirelli and Christof Koch. The paper sets out the ontological commitments of IIT as clearly as I’ve seen anywhere. It is explicitly a picture on which conscious minds are the ontological bedrock from which the rest of reality is constructed.

The paper contains passages such as this: 

“Because I actually exist—as a “large” intrinsic entity—the neurons of my complex cannot also exist. They cannot exist as constituents of my complex, because what actually exists is not a substrate as such but the substrate unfolded into a Φ-structure expressing its causal powers. And they cannot exist as small intrinsic entities in their own right because, if they specify a large intrinsic entity, they cannot also specify a set of smaller entities. Moreover, because my alternatives, reasons, and decisions exist within my experience—as sub-structures within an intrinsic entity—the neuronal substrates of alternatives, reasons, and decisions cannot also exist.”

The message is repeated over and over: my neurons do not truly exist. They exist in a derived, extrinsic sense—as stable appearances that other conscious observers, such as neurosurgeons, can use as handles of control over my experiences—but they do not intrinsically exist. What intrinsically exists is my conscious point of view. Tononi and colleagues explicitly contrast their “intrinsic powers ontology” with what they call the “extrinsic substrate+ ontology”, a view I would call orthodox metaphysical realism: a view on which fundamental particles are the ontological bedrock, neurons are constituted by particles, and conscious experiences are somehow (and this has always been the hard bit) produced or constituted by the activity of neurons. 

The closest philosophical precursor to IIT’s ontology, it seems to me, is the idealism of Leibniz. IIT shares with Leibniz a commitment to the idea that what exists (at least, what exists fundamentally) is conscious. But whereas Leibniz’s world consisted of conscious “monads” causally isolated from each other, harmonized by God, IIT posits that conscious beings do causally interact with each other, and indeed this causation is at the heart of the theory.  

Against this background, the question arises: what is the interface through which one conscious being can interact with another? In IIT’s language, what is the “operational basis” of consciousness? The theory provides an answer: it will be the region of the brain with maximum causal integration, or maximum “phi”. This is hypothesized to be a “posterior hot zone” at the back of the cortex. 

Tononi and colleagues explicitly caution against interpreting their view in terms of emergence. The picture is not: when a brain region achieves maximal phi, consciousness emerges. As far as I can tell, the picture is rather this: brains, bodies and neurons do not exist intrinsically at all, but brain regions of maximal phi are the way in which conscious beings manifest extrinsically in the experiences of other conscious beings. They are what another conscious being looks like from the outside. 

***** 

There is a kind of internal elegance, even beauty, to the IIT picture. I can see why it has attracted followers, and indeed why the followers are so ardent. In embracing a form of idealism, it comprehensively rejects the austere materialist outlook of the rest of neuroscience. It avoids any deflation of consciousness to a second-tier ontological status, preserving the intuitively appealing idea that consciousness is real and fundamental. It represents a certain kind of release from an unpleasant set of materialist strictures that seem to downgrade consciousness and its importance. Instead, it is the brain that gets downgraded. Tononi et al. even include illustrations in which a 2D, black-and-white brain dangles off a blossoming, technicolour mind, a mere operational substrate for what truly, intrinsically exists.

Yet, its elegance notwithstanding, I can also see why IIT has attracted trenchant critics (who tend to be more trenchant in person than in print). For these critics, the problems are not just problems of detail (e.g. maybe phi is the wrong measure of causal integration, or maybe the posterior hot zone is not the most causally integrated brain region). The deeper problem is that all the progress made in squeezing consciousness inside the Overton window of science—progress much facilitated by the rise of computational theories of consciousness—threatens to unravel when other scientists read paragraphs like the one I quoted earlier.

For IIT is, in effect, saying “double or quits”. Its proponents are going hard at the Overton window of science, hammering the frame, knocking out chips of wood here and there, pushing for a further expansion: they want scientists to start taking seriously not just consciousness, but an idealist ontology. If they succeed, wow—the Overton window of science has expanded not by small increments but by a huge leap. Ideas out of play for over a century are back in. But this attempt comes with risk. A wider scientific audience reading about the ontological foundations of IIT may see it as exemplifying why consciousness was shoved outside the mainstream for so long. 

***** 

I can’t predict how that story will end. For now, all I can say is that I see an awkward mismatch between the way the field wants to be seen, from the outside, and the depth of metaphysical disagreement on the inside.  

Think again about the ongoing “adversarial collaboration” between the GNW and IIT camps. This relies on the idea that the disagreements between these camps primarily concern testable matters. Is that right? I think, on the contrary, the fundamental disagreements are at the level of metaphysics/ontology. GNW is a materialist theory to the bone, while IIT is reviving early modern idealism. That is not to take sides—I’m entirely open to the idea that real insight into consciousness may come from turning to philosophical pictures that fell out of favour in the twentieth century. I just don’t think we are looking at a dispute that can be resolved by brain scans. 

What can be tested are specific conjectures people make within these separate metaphysical pictures. Within the materialist picture, one can test whether the global workspace theory, the perceptual reality monitoring theory or some other theory is true. Within the idealist picture, meanwhile, one can test IIT against other theories of how conscious minds, taken to be the ontological bedrock of reality, outwardly manifest to other observers. Admittedly, there is only one theory in this space right now, but no doubt others can be constructed if the field chooses to go down that path.

What makes no sense, I’m afraid, is the idea of testing the underlying metaphysical pictures against each other. The idea that one might test a computational materialism against a Leibniz-inspired idealism through neuroimaging studies is a fantasy. Clashes between metaphysical pictures cannot be resolved like this. There are limits to what cognitive-neuroscience-as-usual can achieve. 

Further reading 

Albantakis, Larissa, et al. 2022. Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms. arXiv.  

Carruthers, Peter. 2019. Human and Animal Minds: The Consciousness Questions Laid to Rest. Oxford University Press. 

Dehaene, Stanislas. 2014. Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. Viking Press. 

Juliani, Arthur, Ryota Kanai and Shuntaro Sasai. 2022. The Perceiver architecture is a functional global workspace. In J. Culbertson, A. Perfors, H. Rabagliati & V. Ramenzoni (Eds.), Proceedings of the 44th Annual Conference of the Cognitive Science Society.

Lau, Hakwan. 2022. In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Oxford University Press.

Tononi, Giulio, et al. 2023. Only what exists can cause: An intrinsic view of free will. arXiv. 

One comment

  1. What makes no sense, I’m afraid, is the idea of testing the underlying metaphysical pictures against each other. The idea that one might test a computational materialism against a Leibniz-inspired idealism through neuroimaging studies is a fantasy. Clashes between metaphysical pictures cannot be resolved like this. There are limits to what cognitive-neuroscience-as-usual can achieve.

    Luckily, this must be false, due to a simple (and tremendously under-appreciated) principle called ‘the meta-problem constraint’.

    For a theory of consciousness to be coherent, it’s not enough for it to account for the existence of ‘consciousness’. It also needs to account for the fact that we can talk about ‘consciousness’ (and, more specifically, speech about the ‘Hard’ properties of consciousness had better be causally dependent on the ontological manifestation of said Hard properties in the model).

    Fundamental or not – neurons as we know them are not some thin projection we can easily cast aside. They are highly intricate objects governed by unbreakable regularities that extend all the way to the microscopic and even subatomic realm (we call these regularities ‘the laws of physics’).

    A coherent theory of consciousness must explain how the molecules, atoms, and even subatomic particles “conspire” to give rise to a system which outputs the declaration “The Hard Problem of Consciousness exists”.

    I’ve written about this in this short Twitter thread for those interested:
    https://x.com/ataiiam/status/1699663893084455390?s=20

    Coincidentally, this is IIT’s cardinal failure point. IIT proposes a model in which ‘conscious minds’ exist – but (despite superficial claims to the contrary) it leaves no room for said ‘conscious minds’ to make their existence known to the ‘world of particles’, as evidenced by the fact that particles have exactly the same trajectories under Newton and under IIT.

    But this is not a necessary failure point of every kinematical theory of consciousness. In fact, quite the contrary: you can chase this simple meta-problem constraint all the way to a naively surprising, yet robust and experimentally testable, kinematically novel theory of consciousness.

    I’ve written about this extensively. This is a 2-page primer:
    https://www.burntcircuit.blog/the-kinematical-problem-of-consciousness/
