By:
Will Bridewell, Naval Research Laboratory
Alistair M.C. Isaac, University of Edinburgh
(View all posts in this series here.)
What can computational models tell us about consciousness? Traditionally, the computational study of consciousness has been linked to metaphysical theories that reduce conscious states to their functional role. In a recent paper,[1] we argue that modeling should play a central role in consciousness science, and we introduce a method for validating models that does not presuppose functionalism, nor indeed any particular metaphysics of consciousness.
Models are simplified representations of a target system. They aim to capture the properties of the target that are important for some particular purpose. For instance, a three-dimensional model of a building may be used to show what it will look like when constructed, or it could act as a souvenir, reminding you of what it was like to see the Eiffel Tower in person. In science, computational models produce predictions of variables of interest and serve as explanations for past observations. Climate models used to predict the paths of hurricanes in the tropics include different features from those that account for the historical formation of Arctic sea-ice. In each of these cases, the model abstracts away irrelevant details in a manner sensitive to its intended function.
What features should a computational model of consciousness include?
One way to answer this question might start from the problem of other minds, namely, how do we know that anyone else is conscious? In practice, we observe them. If someone exhibits behavior that reflects our own conscious experience, we give them the benefit of the doubt. We may look for a variety of indicators. Does the person respond to objects or events in their environment? Do they seem to be in control of their actions? Do they express judgments about their internal qualitative states? Do they appear to deliberate, weighing alternatives and making choices? Do their occurrent states occasionally seem to conflict with underlying, “unconscious” drives? This is not an exhaustive or uncontroversial list, but it illustrates the wide variety of indicators of consciousness, none of which is sufficient by itself. A successful computational model of consciousness would presumably reproduce this diverse array of consciousness-relevant phenomena.
This thought, that studying consciousness will require us to integrate a large number of diverse considerations into a unified whole, positions computational models at the center of consciousness science. Since the 1960s, the terminology of computation has served as a lingua franca across the cognitive sciences, which suggests computational modeling as a sensible, if not essential, method for integrating results across disciplines. Consciousness-relevant phenomena that may be represented and simulated computationally include not only behavioral phenomena, such as those in the previous paragraph, but also architectural constraints, such as those suggested by the neuroscience of consciousness, or even sociolinguistic or anthropological patterns that we expect to indicate or correlate with consciousness.
But what would a computational model that integrates these diverse consciousness-relevant phenomena actually tell us? Ultimately, we would like to know what underlying or essential factors produce consciousness, yet the relevance of computational models for this task has been hotly debated.
On the one hand, computationalism, the idea that the constitutive properties of cognition are exhausted by those of information processing, has long been the default position in cognitive science. According to one reading of this view, the nature of a conscious state is simply identified with the functional role it plays. Adherents to this strong functionalist viewpoint might even go so far as to expect a successful computational model of conscious phenomena to itself be conscious.
On the other hand, this view has a long history of skeptical opposition. Consider the famous examples of Leibniz’s (1714) Mill Argument or Searle’s (1980) “Chinese Room”: both imagine a person inside a physical instantiation of an information-processing mechanism and demonstrate that nowhere within the mechanism may be found properties constitutive of consciousness, such as perceptual experience or intentionality. These and similar arguments show that our intuitive notions of consciousness are not exhausted by its information processing capacities.
We would like a methodological strategy for consciousness science that can take advantage of the integrative power of computational models without taking an explicit stance in this metaphysical debate.
Our core insight is that skepticism about model adequacy can serve as a positive tool in a model-building research program. The method is roughly as follows. Some characteristic X is identified as essential to consciousness. X is then defined explicitly and encoded in a computational model. Looking at the model with the eyes of Leibniz, we take our very success at modeling X as evidence that X is insufficient for producing or explaining consciousness. The model is inadequate as it stands, but we do not abandon it. Rather, we then identify some further characteristic of consciousness, Y, integrate it with the original model (if possible), and again determine the model to be inadequate. This procedure iterates until we find a consciousness-relevant phenomenon that can be expressed clearly enough to verify an implementation but that cannot be implemented computationally.
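The loop just described can be caricatured in code, as a rough illustration only. All names below are hypothetical placeholders, not part of the authors' formal proposal, and the implementable predicate stands in for the substantive scientific work of actually attempting an implementation and seeing whether it succeeds:

```python
def apophatic_iteration(candidates, implementable):
    """Fold consciousness-relevant characteristics into a model until
    one resists computational implementation.

    candidates: characteristics proposed as essential to consciousness.
    implementable: a stand-in for the real work of trying to encode a
    characteristic, given what the model already captures.
    """
    model = []  # characteristics the model successfully encodes so far
    for characteristic in candidates:
        if not implementable(characteristic, model):
            # A characteristic specified clearly enough to verify an
            # implementation, yet resisting one, marks the boundary
            # the methodology is searching for.
            return model, characteristic
        # Success at modeling the characteristic is taken, apophatically,
        # as evidence that it is insufficient for consciousness.
        model.append(characteristic)
    return model, None  # no boundary found yet; propose more candidates


# Toy run, pretending that "qualitative judgment" resists implementation.
model, boundary = apophatic_iteration(
    ["environmental responsiveness", "action control", "qualitative judgment"],
    lambda c, m: c != "qualitative judgment",
)
```

The point of the sketch is only the control structure: success feeds skepticism, and skepticism dictates the next expansion of the model.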
We call this an apophatic methodology because it aims to investigate what consciousness is by determining what it is not. We take this to be a genuinely scientific project for studying consciousness free from metaphysical commitments. First, it is genuinely targeted at consciousness, insofar as the key factor in determining how to expand the model is the relevance of proposed additions to consciousness. Second, it is a progressive research program, in the sense that it contains internal principles for iteratively improving the quality of its models in a manner sensitive to the data. Third, it employs skepticism only as an instrumental strategy for model validation; no metaphysical principle is necessary to ensure models are genuinely targeted at consciousness. In part 2, we argue that an analogous apophatic methodology has been the de facto guiding principle in the history of artificial intelligence.