Computational Modeling and Consciousness 5: Response to Comments

By:

Will Bridewell, Naval Research Laboratory

Alistair M.C. Isaac, University of Edinburgh


We would like to thank Mona-Marie Wandrey, Marta Halina, and Matthias Michel for their thoughtful (and thought-provoking!) commentaries. There is too much for us to discuss fully in a single post, but we have chosen three key points for response in the hope of better clarifying our position.

  1. Which consciousness-relevant phenomena?

The apophatic method depends on identifying phenomena relevant to consciousness, as these set the task for each stage of model development. Yet, Michel asks, how can an agnostic researcher such as our apophatic modeler choose which phenomena to consider when “what counts as a consciousness-relevant phenomenon depends on theorizing about consciousness”? While we think there are pre-theoretic, consciousness-relevant phenomena, we gladly grant Michel’s point that many more technical phenomena may only be recognized as consciousness-relevant from the standpoint of some specific theory.  So, how can modeling proceed without endorsing one of these?

This puzzle dissolves with the full polarity flip that comes with approaching a target apophatically. Since modeling success demonstrates what consciousness is not, any and all relevant phenomena are grist for the mill. To take Michel’s example, the question of whether to include “inattentional blindness” on the list of phenomena for incorporation into a model is easy: if even one theory identifies the phenomenon as relevant, then it is fair game. Suppose one had a model that already implements all phenomena deemed relevant by local-recurrence theory, apophatically ruling it out as a theory of consciousness. Demanding that this model also exhibit inattentional blindness does nothing to weaken the case against local-recurrence theory, and it may make progress toward ruling out global-workspace theory as well.

The apophatic perspective encourages one to be wildly pluralistic about relevant phenomena.

  2. The future of functionalism

Both commentaries express skepticism about the end of the apophatic research program and what it would show. In particular, our remark that successfully modeling all consciousness-relevant phenomena would “empirically vindicate methodological computationalism, and arguably change our intuitions about its reductive cousin, functionalism” has, we think, been somewhat overinterpreted in both commentaries. Even supposing this benchmark had been achieved, it would not show, as Michel implies, that “consciousness . . . is identical with some computational processes after all.” Not, at least, by the logic of apophatic model validation. Nor would it refute arguments against functionalism from the likes of Leibniz and Searle, as Wandrey and Halina rightly stress: the logic of these arguments is immune to empirical considerations.

No, all we intended to say was that such success might merely “change our intuitions.” In hindsight, “arguably” was perhaps a poor choice of word; “conceivably” would have been better, as intuitions are not, in general, changed by argument. Indeed, to revisit the parallel with intelligence science we developed Tuesday, this is how we read the program proposed by Turing:[1] aim to reproduce computationally a hallmark of intelligence, namely conversation, and revisit the “meaningless” (because underspecified) question of whether this shows a machine can “think” once you have succeeded. (The curious reader may wish to revisit Turing’s “‘skin-of-an-onion’ analogy” (p. 454), which elegantly articulates apophatic reasoning for intelligence science.) Historically, as AI systems have approached Turing’s benchmark, intuitions have changed, and many more people would agree that computers can exhibit intelligence today than would have in 1950. Yet the contrary intuitive response is also possible: for many (including ourselves), the success of GPT-4 at conversational tasks does not reveal its intelligence, but rather shows that conversation is not the sufficient hallmark of intelligence it was once thought to be.

Returning to speculation about the likely end of an apophatic study of consciousness: just to be clear, if we were betting types (and we are not), we would wager on the alternative outcome, namely that some consciousness-relevant phenomena will resist all attempts at computational modeling, consciousness science will face a Kuhnian crisis, and a new paradigm will emerge to replace computationalism.

  3. Ethics and evidence

Wandrey and Halina close their commentary with an important reminder: value-neutral science is a myth, and consciousness science in particular comes laden with a variety of ethical pitfalls. We agree with these general points but wish to stand by our earlier claim that ethical considerations are “orthogonal” to the apophatic method. The crucial point to emphasize is that “apophatic science” as we describe it is a method for validation and iterative improvement of models. It concerns only the progress-establishing evidential relationship between a successful model and an apophatically characterized target such as consciousness or intelligence. As such, it is not a theory of scientific practice as a whole—it does not tell you whether an experiment is a good use of public funds or whether it is permissible to share a subject’s data or to treat them a certain way in the lab. So, while there will certainly be ethical quandaries for apophatic modelers, these quandaries neither arise from nor are resolved by the apophatic method itself.

We do not in any way wish to deny or diminish the importance of difficult decisions that must be made on a daily basis about brain-damaged patients or the treatment of animals. These decisions turn in part on claims about whether these patients are conscious and, more generally, on the character of their subjective experience. The apophatic method does not itself help to resolve these difficult debates. Indeed, it is our view that these should be treated as decisions under uncertainty, and that it would be wrong to resolve them by appealing only to one particular theory of consciousness, whatever that may be. Rather, questions about the treatment of patients or animals should be approached decision-theoretically, with full acknowledgement of the limits of our knowledge and with publicly accessible assessment of the risks associated with each possible course of action. This is no different from decision making in other domains, such as the economy, climate, and health, where the imperative to act on a short timescale outstrips the certainty that science can provide.[2]

We would like to conclude by acknowledging the above as gestures rather than full responses. Hopefully these remarks at least indicate the direction more comprehensive responses would take, and we look forward to future opportunities to carry the conversation forward at greater length and depth.


[1] Turing, Alan (1950) “Computing Machinery and Intelligence” Mind 59: 433–460.

[2] Isaac, A. M. C. (2014) “Model Uncertainty and Policy Choice: A Plea for Integrated Subjectivism” Studies in History and Philosophy of Science 47:42–50.

One comment

  1. Grant Castillou

    It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult human-level consciousness? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with only primary consciousness will probably have to come first.

    What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.

    I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

    My advice to people who want to create a conscious machine is to ground themselves seriously in the extended TNGS and the Darwin automata first, and then proceed from there, possibly by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

