Consciousness and the Overton Window of Science, Part 1

By Jonathan Birch

(See the other posts in this series here!)

Part I: Cognitive neuroscience as usual? 

In politics, the ‘Overton window’ is the range of positions that can safely raise their heads in public discourse. Propose something outside the window and you can expect resistance—not just to the proposal itself, but to the idea that you even merit a place in the debate, after saying what you just said. Science too has Overton windows. Sometimes positions are so far outside the mainstream that they invite the charge that we should not even be discussing them.

Cold fusion is a famous example. There’s probably no surer way to get a grant proposal rejected in physics than to mention it. The philosopher Huw Price has lamented this, arguing for the importance of exploring scientific ideas that have only a tiny probability of success, if the consequences for humanity could be transformative. Price complains of a “reputation trap”, where ideas become so tainted that to even entertain them in public starts eating away at a scientist’s standing, as though they had sat down in a bath of reputational acid. The incentive structure of science favours staying well away from the bath.

For a long time, consciousness was outside the window of psychology and neuroscience—the cold fusion of the brain—but it has gradually entered the mainstream in the last few decades. I still encounter, from time to time, a certain residual squeamishness when neuroscientists from other subfields are invited to talk about consciousness. The line “I don’t have a definition of consciousness, and I don’t like to talk about what I can’t define” has become a cliché. When I hear it, I start playing “undefined term bingo” and usually get a full card by the end of the talk. Cognition, perception, attention, interoception, learning, working memory, behaviour, inference and prediction are not easily defined either, but they are accepted as broad characterizations of domains of inquiry. The unease about consciousness is real, but the lack of an agreed definition cannot be the real source of it. I think the deep fear scientists have about consciousness is not really that it is uniquely hard to define, but rather that, if we admit it as a legitimate target of inquiry, we will be opening the door to introspective, first-person methods of the kind psychology repudiated in the early twentieth century.  

Consciousness scientists have been working hard to dispel this fear. As they make it ever clearer that they are committed to settling their disagreements with objective, third-person methods, challenges to the field’s legitimacy are starting to fade away. At the annual meeting of the Association for the Scientific Study of Consciousness in New York in June, a showpiece public event reaffirmed that commitment. Two main things happened: neuroscientist Christof Koch handed philosopher David Chalmers a 1978 bottle of Madeira to settle a bet (a bet about whether the neural correlates of consciousness would be clear by 2023—they are not), and new results were announced from the Templeton-funded “adversarial collaboration” between two well-known theories, the global neuronal workspace (GNW) theory and the integrated information theory (IIT).

The “GNW vs. IIT” study involved using multiple brain imaging techniques to assess whether the neural correlates of consciousness are in the back or the front of the brain’s cerebral cortex. The results were indecisive, I think unsurprisingly. More significant was the way the field presented itself in an event covered by Nature, Science, Scientific American and the New York Times. The message was “Here we are: a slightly playful but fundamentally serious scientific field, settling our differences not through introspection but through cutting-edge empirical methods.”

***** 

And yet… some theories of consciousness fit into the Overton window of science far more easily than others. I’ll first think about the ones that fit most easily. Then, in Part II, I’ll think about a major outlier. 

The theories that fit best are those that single out a special kind of computation in the brain as the basis of consciousness. The GNW theory, which associates consciousness with the broadcast of representations across the cerebral cortex, is the best-known example. Another contender in this space is Hakwan Lau’s perceptual reality monitoring (PRM) theory, which links consciousness to the tagging of some perceptual representations as reliable guides to the world right now, while others are rejected as unreliable noise. These theories are very much cognitive neuroscience as usual. The growing acceptance of consciousness in the wider neuroscience community is surely linked to their popularity.
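
To make vivid how ordinary these proposals are as computations, here is a deliberately minimal sketch of the broadcast idea in Python. Everything in it (the module names, the salience scores, the winner-take-all rule) is an illustrative assumption of mine rather than part of the theory; the point is only that “many specialists compete, and the winner is broadcast to all” is an everyday computational pattern.

```python
# A minimal, illustrative sketch of the "global broadcast" idea, assuming a
# toy winner-take-all rule. Module names and salience scores are hypothetical;
# this is not drawn from any formal statement of the GNW theory.

from dataclasses import dataclass

@dataclass
class Representation:
    source: str      # which specialist process produced it
    content: str     # what it represents
    salience: float  # how strongly it competes for the workspace

class Module:
    """A specialist consumer (memory, language, planning, ...)."""
    def __init__(self, name: str):
        self.name = name
        self.inbox: list[Representation] = []

    def receive(self, rep: Representation) -> None:
        self.inbox.append(rep)

class GlobalWorkspace:
    """A limited-capacity bottleneck: many candidates, one winner, broadcast to all."""
    def __init__(self, modules: list[Module]):
        self.modules = modules

    def cycle(self, candidates: list[Representation]) -> Representation:
        winner = max(candidates, key=lambda r: r.salience)  # competition
        for module in self.modules:                          # global broadcast
            module.receive(winner)
        return winner

modules = [Module("memory"), Module("language"), Module("planning")]
workspace = GlobalWorkspace(modules)
broadcast = workspace.cycle([
    Representation("vision", "red circle ahead", salience=0.9),
    Representation("audition", "faint hum", salience=0.3),
])
print(f"Broadcast to all modules: {broadcast.content!r} (from {broadcast.source})")
```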

Some retort: can these computational theories really help with the hard problem—the problem of explaining the relationship between brain activity and conscious experience? For any computation in the brain, we can always ask: why should it feel like anything to perform that computation? These theories seem to give us no answer. In fact, I think they can help with the hard problem, but via an indirect route—more dissolution than solution. They help by filling in a missing piece of the “phenomenal concept strategy”, one of the main paths to materialism about consciousness.  

I’ll explain this point briefly. The phenomenal concept strategy aims to explain away dualist intuitions (about zombies, Mary, inverted qualia, etc.) as resulting from a divide between phenomenal and physical concepts—a divide that arises despite the lack of any ontological gap between phenomenal and physical facts. This story sounds a little threadbare, however, in the absence of any scientific account of how this conceptual bifurcation might arise.  

The GNW and PRM theories both suggest plausible accounts. For they both associate consciousness with a type of gating mechanism in the brain, a bottleneck that lets some representations through and screens out others. This bottleneck lies downstream of perception but upstream of conceptual thought, regulating which representations get that far and which do not. This allows us to see how we could form a special type of concept through a kind of “internal pointing” at the special representations that make it through the gates: a concept of that sort of thing. We will always be able to conceive of the possibility that “that sort of thing” is profoundly different from anything described by our scientific theories, so dualist intuitions will always be with us. But they can be debunked—explained in a way that nowhere presumes their truth.1

But while these theories slot neatly into the materialist project, they also have an odd, rather troubling feature: the computational architectures they describe can be very easily recreated in computer hardware. Google DeepMind’s “Perceiver” architecture already has many, though not all, of the features of a global workspace, as other AI researchers have noted. Meanwhile, perceptual reality monitoring just requires sensory processing plus a second unit that looks in on the sensory processing, classifying the sensory representations as reliable representations of the present external world, as internally generated, or as noise. As Lau acknowledges, the whole theory is substantially inspired by generative adversarial networks in machine learning, in which a “discriminator” network is tasked with classifying the output images of a “generator” network as either self-generated or externally generated. 
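
For concreteness, here is a comparably minimal sketch of that two-unit arrangement. Again, the details (the signal names, the thresholds, the three-way classification) are illustrative assumptions of mine, not commitments of Lau’s theory.

```python
# A minimal, illustrative sketch of the two-unit PRM arrangement. The signal
# names and thresholds are toy assumptions; nothing in Lau's theory fixes them.

import random

def sensory_unit() -> dict:
    """First-order processing: emits a representation with two toy signals."""
    return {
        "strength": random.random(),         # overall signal strength
        "stimulus_driven": random.random(),  # how much it tracks current input
    }

def reality_monitor(rep: dict, strength_floor: float = 0.3,
                    external_floor: float = 0.5) -> str:
    """Second-order unit: classifies a first-order representation, in the
    spirit of a GAN discriminator judging real vs. generated images."""
    if rep["strength"] < strength_floor:
        return "unreliable noise"
    if rep["stimulus_driven"] >= external_floor:
        return "reliable guide to the external world, now"
    return "internally generated (imagery / memory)"

for _ in range(3):
    print(reality_monitor(sensory_unit()))
```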

Is this a problem? Maybe not. Crucially, it does not require us to jump to the conclusion that these simple AI systems are conscious. We can instead see the situation like this: these theories aim to give us the resources to debunk our dualist intuitions about our own minds. Once we cut loose those intuitions, the question “Is AI conscious?” is greatly deflated. AI may indeed have the sort of gating mechanism that, in us, makes a representation accessible to internal pointing. But so what? We are no longer associating any special properties (qualia, etc.) with that sort of access. A really committed materialist is one who says there are computations, and biological computers falsely thinking themselves special for reasons we can explain in computational terms, and that’s the whole story.2

In Part II, I’ll turn to a theory that fits much less easily into the Overton window of science: the integrated information theory. 

  1. Peter Carruthers, in Human and Animal Minds (2019), develops this line of thought in relation to the global workspace theory.
  2. In recent literature this way of thinking about the mind-body problem has sometimes been called “illusionism”, but the new name suggests a larger discontinuity with earlier statements of materialism than I think is really there.

5 Comments

  1. Thanks Jonathan for this great post! I’m very much looking forward to the next part.

    Just a quick clarification about PRM: I don’t think it can be ‘very easily recreated in computer hardware’. The impression that it’s easy might be related to a common confusion about functionalist theories. The confusion stems from a failure to distinguish between core realization and total realization—a distinction made by Shoemaker in 1981 (“Some varieties of functionalism”).

    You’re right to note that the core realization of PRM could be somewhat easily recreated in computer hardware, though as we discussed when we co-wrote the AI report (https://arxiv.org/abs/2308.08708), it’s still far from trivial.

    But as Shoemaker noted early on, functionalists shouldn’t say that the core realization is sufficient for consciousness. They should say that the total realization is sufficient. Basically, the total realization is the core realization (the basic mechanism we think is responsible for consciousness) together with background conditions.

    Take, for instance, the property of being a door. Wikipedia says that “A door is a hinged or otherwise movable barrier that allows entry into and exit from an enclosure.” If you put a door in an open field, there’s a sense in which it’s still a realization of the relevant property—a core realization. But it doesn’t perform its function (allowing entry into and exit from an enclosure) unless it’s embedded within a larger system. This is what’s required for total realization.

    I think functionalists about consciousness should insist that we consider total realization, and not core realization. If you implement a PRM-like mechanism in a computer, sure, there’s a sense in which you might have a core realization of perceptual reality monitoring. But that’s not sufficient for consciousness. What’s sufficient is a perceptual reality monitor that performs the right function within a system. And just as for realizing the function of being a door, not all systems will do. In the case of PRM, Hakwan and I have argued that the perceptual reality monitor should output to general belief-formation and rational decision-making systems (https://philpapers.org/archive/MICHTD-2.pdf). Otherwise the reality monitoring function just isn’t realized (in the sense of ‘total realization’).

    Perhaps once we add this ‘total realization’ constraint, functionalist theories of consciousness are less trivially implemented. I’m sure the same thing could be said for other functionalist theories like the global workspace theory.

    • Jonathan Birch

      Thanks Matthias. No disagreement I think! The PRM mechanism in the human brain is posited to be outputting reliable representations for use in forming beliefs and in reasoning; the ease of recreating that whole arrangement in AI will depend on the ease of creating systems that believe and reason.

      It’s nonetheless striking that if you ask, “Suppose I’ve got here an AI system that can believe and reason. What do I need to add to make it conscious?”, the answer turns out to be “Not much!”

      The point I wanted to make concerns the aim of computational theories of consciousness like this. Are they aiming to solve the hard problem as traditionally posed, or are they aiming to give us resources with which to debunk dualist intuitions? I think the latter. If they can achieve that aim, then questions about AI consciousness will be deflated, in at least this sense: we should not find it particularly surprising or amazing any more to find the consciousness architecture can be recreated in computers. The appropriate reaction would not be “wow!” but “well, of course…”.

      (And so I don’t see IIT as having the same aim as these theories. More on this tomorrow.)

  2. GNW and PRM theories are about how the brain may be working to generate consciousness.
    Are they the best threads to understand human consciousness?
    Paraphrasing Feynman, we could say that “If I cannot know how consciousness has been fabricated during evolution, then I do not understand it”.
    Human consciousness is a result of primate evolution. It was fabricated during our pre-human evolution: let’s say during the last 7 million years, since the Pan–Homo split.
    Perhaps looking at how human consciousness may have been fabricated during primate evolution could be a path leading to some understanding. More on that at https://philpapers.org/archive/MENEOS-5.pdf. (inside/outside the Overton window?)

  3. Jordi Galiano-Landeira

    First of all, thanks Jonathan for this clear explanation of the “Overton window”, of how GNW and PRM fit within it, and of how they might help “dissolve” the hard problem of consciousness. However, I believe that neither GNW nor PRM actually debunks dualist points of view. They just bring to the table other physicalist approaches to consciousness. Do not get me wrong, they are totally necessary. But as you said here:

    “[W]e will always be able to conceive of the possibility that “that sort of thing” is profoundly different from anything described by our scientific theories, so dualist intuitions will always be with us. But they can be debunked—explained in a way that nowhere presumes their truth”

    At the end of the day, it will be a matter of a trend, i.e., how many people are dualists/physicalists or how easy it is to convince someone of dualism/physicalism. But this does not make one or the other a better truth, just one that is more accepted.

    Thrilled to see your discussion and criticism of IIT!

  4. Dennis Polis

    While I agree with Jonathan Birch about the difficulty the Overton window imposes on the acceptance of rational analysis, it should not deter one from publishing the logical implications of data.

    He notes the common inability to define consciousness, then goes on to tell us how this undefined something can be reduced to neural mechanisms. As Socrates pointed out, if you can’t define your terms, you don’t know what you are talking about.

    I have no problem with investigations of neural data processing, but I do object to ignoring the acts of understanding which allow investigators to proceed.
    A representation, of any recursive order, is not knowledge. In other words, having data is not knowing data. The brain is replete with representations that do not engender awareness. The post commits most of the fundamental errors I pointed out in my “The Hard Problem of Consciousness & the Fundamental Abstraction,” Journal of Consciousness Exploration & Research 14, no. 2 (January 2023): 96–114.
    (1) The post employs a Cartesian conceptual space unsuited to the unified psychosomatic activity called “thought.” (2) In failing to define subjective consciousness, it fails to see that it lacks a physical definition and so leaves us with no logical means of reducing it to physical principles. (3) It fails to see that natural science is (justifiably) based on a fundamental abstraction, namely the decision to focus on the object to the exclusion of the inseparable subject in the subject-object relation of knowing. We care about what Galileo saw, not what he experienced in seeing it. This decision leaves natural science bereft of the data and concepts necessary to understand consciousness, which is our awareness of objective contents. None of this makes intentional reality “spooky” or “supernatural.” Rather, it shows that the third person perspective is as limited as the first person perspective. Both produce dimensionally diminished maps of reality.
