Computational Modeling and Consciousness: Commentary on Bridewell and Isaac

By Matthias Michel, Center for Mind, Brain and Consciousness, New York University


Bridewell & Isaac’s idea is ‘refutation by implementation.’ If a computational model can implement a theory of consciousness, that theory is incomplete: “the ability to encode the core principles of a theory of consciousness within a computational model provides evidence that the theory is incomplete.” (2021, p.5) Their view is ‘methodologically computationalist’—we should aim to construct computational models of consciousness-relevant phenomena. But it is agnostic about metaphysics. In particular, it seems important to Bridewell & Isaac that the research program avoid ‘reductive computational functionalism’—the view that performing the right kind of computations over the right kind of representations is necessary and sufficient for consciousness.

What’s wrong with reductive computational functionalism? Bridewell & Isaac write: “If a modeler directly identifies consciousness with intersubjectively available phenomena they can validate their model as conscious. However, by stipulating that “consciousness” is intersubjectively accessible after all, they thereby sever any evidential link between empirical data and our pre-theoretical understanding of the explanandum of consciousness science.” (p.3)

Here’s the worry, as I interpret it. Take a look at some of the claims made by functionalists: they say that consciousness is all about global broadcast (Mashour et al., 2020), internal monitoring (Lau, 2022), and so on. Now, take a look at your experience (using your mind’s eye, no less). Here’s something you could ask the functionalists: “You’ve told me about cognitive access, global broadcast, metacognition, internal monitoring, and all these functional facts, but what does any of that have to do with this subjective phenomenon? You’re just changing the subject!”

That’s a legitimate worry in some cases. But I don’t think functionalists are changing the subject. There’s a sense in which consciousness is intersubjectively accessible because the brain processes implementing consciousness-functions are intersubjectively accessible. Nevertheless, functionalists can maintain that one also has a different kind of access to consciousness—a different way of thinking about it, by way of applying phenomenal concepts. Since (at least some) phenomenal concepts can only be possessed from a particular point of view, one might hold that the “subjectivity of phenomenal facts arises from subjectivity of phenomenal concepts”, as noted by Lee (2022). And as philosophy professors never tire of pointing out, one can have two concepts that refer to the same thing without knowing that they pick out the same thing. Empirical evidence is relevant because it might help one realize that those two concepts actually pick out the same thing. As such, identifying consciousness with an intersubjectively accessible phenomenon does not imply severing ‘any evidential link between empirical data and our pre-theoretical understanding’ of consciousness. Instead, empirical evidence from consciousness science helps us identify which functional properties are actually identical to the phenomena picked out by our phenomenal concepts.

Bridewell & Isaac ultimately hold that computational functionalism could be correct: “The apophatic scientist systematically chips away at the cognitive, intentional, and functional roles assigned to consciousness until all that remains is either an empty set or a circumscribed core that cannot be reduced to computational implementation. The former case would empirically vindicate methodological computationalism, and arguably change our intuitions about its reductive cousin, functionalism.” (p.8)

Consciousness might turn out to be identical with some computational processes after all. For Bridewell & Isaac, there’s one condition, though: it can’t be easy. Whatever consciousness turns out to be, you can’t grant it to a machine by writing a program alone in your room on a Wednesday afternoon. As they point out, some theories might sell consciousness for cheap and imply that simple systems could have it. At times, researchers have been tempted to double down on the embarrassment and insist that those simple systems are conscious (e.g., Aaronson, 2014). Here, I agree with Bridewell & Isaac that the apophatic path is often preferable.

Understood as general methodological advice for avoiding embarrassment, there’s a lot to like about apophatic science. In particular, it challenges “theorists to commit to increasingly specific hypotheses and [pushes] them away from dogmatic or vague assertions.” (p.4) This was also recently emphasized in a report on AI consciousness (Butlin et al. 2023). However, the apophatic path is a bumpy road. Let me finish by presenting three reasons why—they all have to do with the difficulty of deciding what our models should model.

What should models of consciousness model? Bridewell & Isaac answer: “consciousness-relevant phenomena.” One issue is that what counts as a consciousness-relevant phenomenon depends on theorizing about consciousness. Take inattentional blindness, for instance. Global workspace theorists say that it’s consciousness-relevant. Local recurrence theorists say that it isn’t. If a model of consciousness includes inattentional blindness, is that progress? There’s no easy answer. The same issue arises for many other phenomena.

Second, what counts as consciousness-relevant, as opposed to relevant for consciousness as we know it in humans? If by ‘consciousness-relevant’ one doesn’t mean essential for consciousness, then other conscious species will probably display different ‘consciousness-relevant’ phenomena. Consciousness might ‘facilitate’ a set of functions in one species and a different set of functions in another (Birch, 2022). If, on the other hand, it means essential for consciousness, then that presupposes a good handle on what consciousness is. So, unless one already knows what’s relevant for consciousness, it’s difficult to know what the models should include.

Finally, consciousness doesn’t operate in a vacuum. Whether a mechanism performs its function depends not only on the mechanism itself but also on its relations to other mechanisms. For better or worse, functionalism is notoriously holist. My heart doesn’t perform its function if it sends blood outside of my body. In the same way, a wide variety of background conditions are presumably required to perform consciousness-functions. For example, it could be that consciousness can only be meaningfully implemented in agents, even if agency is not itself constitutive of consciousness. Or it could be that consciousness can only be implemented in organisms with certain cognitive capacities—again, not because those capacities are constitutive of consciousness, but because they are background conditions for it. Whether or not background conditions should be modeled, and which conditions should be modeled, is also a difficult question that cannot be settled independently of theory.

References

Aaronson, S. (2014). Giulio Tononi and Me: A Phi-nal Exchange. https://www.scottaaronson.com/blog/?p=1823

Birch, J. (2022). The search for invertebrate consciousness. Noûs, 56(1), 133–153.

Bridewell, W., & Isaac, A. M. C. (2021). Apophatic science: How computational modeling can explain consciousness. Neuroscience of Consciousness, 2021(1), 1–10. https://doi.org/10.1093/nc/niab010

Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., Deane, G., Fleming, S.M., Frith, C., Ji, X., Kanai, R., Klein, C., Lindsay, G., Michel, M., Mudrik, L., Peters, M.A.K., Schwitzgebel, E., Simon, J., & VanRullen, R. (2023). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv preprint. https://arxiv.org/abs/2308.08708

Lau, H. (2022). In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. Oxford University Press.

Lee, A. Y. (2022). Objective Phenomenology. Erkenntnis, 87(5).

Mashour, G. A., Roelfsema, P., Changeux, J.-P., & Dehaene, S. (2020). Conscious Processing and the Global Neuronal Workspace Hypothesis. Neuron, 105(5), 776–798.
