4. The Negative Semantic Argument

It is important to realize that first-personal phenomenal consciousness is all-or-nothing. Any given mental state is either phenomenally conscious or it isn’t. It makes no sense to talk of degrees of phenomenal consciousness, or partial phenomenal consciousness. This is another place where some of the distinctions drawn in Monday’s post are important. Creature consciousness admits of degrees, of course. One can be only partly awake, or just barely conscious as one lies on the hospital trolley while an anesthetic takes effect. But it is definitely, determinately, like something to be only partially awake. And likewise, perceptual awareness admits of degrees. One can sometimes barely make out an indeterminate shape in the twilight mist; or as one slides under the anesthetic, one can be conscious only of an indeterminate mumble of voices. But it is, nevertheless, definitely like something to see an indeterminate shape in the mist, or to hear a faint and seemingly-distant voice.

Now we appear to have a problem, provided we accept that phenomenal consciousness is globally-broadcast nonconceptual content. No problem arises in the human case, because global broadcasting, too, is all-or-nothing in humans. Indeed, there is a nice synergy with the neuroscientific findings here. It seems that neural activity either hits a threshold for “global ignition”, or it doesn’t; there is a sharp step-function underlying global broadcasting in the brain. When we look across species, on the other hand, the picture changes dramatically. For the systems broadcast to in creatures whose minds are organized around something resembling a central workspace are only more or less similar to the executive systems that comprise the consumers of globally broadcast contents in humans. Worse still, as we look across species we see a complex, cross-cutting, multi-dimensional similarity matrix, with some animal minds resembling our own in some respects, and others in others. But it makes no sense to say that the mental states of an animal are to some degree phenomenally conscious.

Notice that we cannot resolve the problem by identifying phenomenal consciousness with nonconceptual content no matter where it is found. For there are nonconceptual contents in backward masking experiments or binocular rivalry experiments, as well as nonconceptual contents in the dorsal sensorimotor system, that form the very paradigms of unconscious perception. In fact, the states picked out by our first-person phenomenal concepts cannot be separated from the intricate causal roles they serve. These include availability to systems for reporting, for reflective thinking, and for higher-order thought. Some of these are present in at least nascent form in other creatures, others not at all.

The right thing to say is that there is no fact of the matter about phenomenal consciousness in animals. The negative semantic argument for this conclusion runs as follows: there is nothing in our use of the first-person phenomenal concepts that define the topic of phenomenal consciousness that fixes what should be said about creatures whose nonconceptual states resemble our own more-or-less closely. One could, of course, stipulate a boundary. One could specify some degree of resemblance to the human global broadcasting network as being sufficient for phenomenal consciousness. But this would be a stipulation, not a discovery. For to repeat: there is nothing in the first-person deployment of phenomenal concepts that fixes where the boundary should be.

The important point to realize, however, is that this doesn’t matter in the slightest. Given the fully reductive nature of global-workspace theory, there is no extra property that enters the world with phenomenal consciousness. There is just nonconceptual content that is made available to executive systems which, in humans, enable first-person thoughts and zombie thought experiments. There are no qualia. Nothing magical happens. Nothing lights up. Put differently, suppose we knew everything of a functional, representational, and neural sort that there is to know about the mind of a monkey, or a salmon, or a honey bee. And suppose we had completely charted the exact similarities and differences between their minds and our own. Then we would know everything that there is to know about their minds, period. There is no further property to inquire about. For there are no qualia. There is, however, a further question one can ask. One can ask, while deploying a phenomenal concept for one of one’s own mental states, “Does the mind of the monkey contain a state like this?” That leads us to what I call the positive semantic argument for there being no fact of the matter, which I will outline tomorrow.

15 Comments

  1. Oliver S.

    1. “There are no qualia.”
    2. “There is just nonconceptual content.”
    3. “[T]here are nonconceptual contents…that form the very paradigms of *un*conscious perception.”

    So you acknowledge that there is a real difference between (phenomenally) conscious, experiential nonconceptual contents and (phenomenally) nonconscious, nonexperiential ones. But what’s the difference between (phenomenally) conscious, experiential nonconceptual contents and qualia? Of course, the answer depends on the definition of “qualia”; but if they are defined (as neutrally as possible) as “the subjective qualitative contents of phenomenally conscious/experiential states”, then—in the context of GWT—you should be a (reductive) realist rather than an antirealist about qualia.

    • Peter Carruthers

      Oliver, yes, you are correct — at least, provided that “subjective” in your definition means “subjectively available”, and isn’t taken to mean something like “intrinsically first-personal property”; for I deny that there are any such properties. Another neutral way to define “qualia” would be: whatever property gives rise to the “hard”-problem thought-experiments. Since that property is actually globally broadcast nonconceptual content, qualia, in that sense, are real. But I think the term is more usually reserved to designate the putative intrinsically-subjective and/or nonphysical properties that the thought-experiments are believed to support. It is in this latter sense that I am a qualia irrealist.

      • Oliver S.

        As for qualia realism, if qualia antireductionism and qualia antimaterialism are included in it *by definition*, then I’m a qualia antirealist or irrealist too. But qualia realism as such should be nothing more than the doctrine that there (really) are such things as *qualia* rather than the doctrine that there (really) are such things as *immaterial and irreducible qualia*—and in that simple sense I am a qualia realist. Whether qualia are material or immaterial, reducible or irreducible, simple or complex, effable or ineffable are questions that shouldn’t be answered by their definition, since these are substantive questions about mental reality.

        In your new book you write (p. 10) that “qualia irrealism is a close relative of what Frankish (2016) calls ‘illusionism’ about consciousness.” Illusionism is a euphemism for eliminativism about phenomenal consciousness (so the label “hallucinationism” would be more appropriate).

        “Are illusionists claiming that we are (phenomenal) zombies? If the only thing zombies lack is phenomenal consciousness properly so called, then illusionists must say that, in this technical sense, we are zombies.”

        (Frankish, Keith. “Illusionism as a Theory of Consciousness.” In Illusionism as a Theory of Consciousness, edited by Keith Frankish, 11-39. Exeter: Imprint Academic, 2017. p. 22)

        But by equating phenomenal consciousness with something that (you think) exists/is real—the nonconceptual content of the global workspace—you are not an eliminativist but a (reductive) realist about it. However, the question is whether you can consistently eat your cake and have it: Can qualia-free nonconceptual global-workspace content still properly be called phenomenal consciousness, or are you guilty of what Galen Strawson calls semantic “looking-glassing”?

        “To looking-glass or reversify a term is to define it in such a way that whatever one means by it, it excludes what the term means.”

        (Strawson, Galen. “Fundamental Singleness: How to Turn the 2nd Paralogism into a Valid Argument.” 2010. Reprinted in The Subject of Experience, 165-187. Oxford: Oxford University Press, 2017. p. 167n6)

        “I myself am hesitant to use the word ‘qualia’ and its singular, ‘quale’, because they give the impression that there are two separate phenomena, consciousness and qualia. But of course, all conscious phenomena are qualitative, subjective experiences, and hence are qualia. There are not two types of phenomena, consciousness and qualia. There is just consciousness, which is a series of qualitative states.”

        “[T]he problem of qualia is not just an aspect of the problem of consciousness; it *is* the problem of consciousness. You can talk about various other features of consciousness—for example, the powers that the visual system has to discriminate colors—but to the extent that you are talking about conscious discrimination you are talking about qualia. I think that the term ‘qualia’ is misleading because it suggests that the quale of a state of consciousness might be carved off from the rest of the consciousness and set on one side, as if you could talk about the rest of the problem of consciousness while ignoring the subjective, qualitative feel of consciousness. But you can’t set qualia on one side, because if you do there is no consciousness left over.”

        (Searle, John R. The Mystery of Consciousness. New York: New York Review of Books, 1997. pp. 9-10, 29)

  2. SelfAwarePatterns

    Peter,
    If another species has the all-or-nothing broadcast dynamic, and if the sensory or affective content is irreducible, then it seems like a case could be made that the content is phenomenal for that species. Your point seems to be that the species may not have all the audience processes humans have, or its equivalent processes may have less functionality, meaning that it wouldn’t appreciate the content in the full manner a human would.

    On the one hand, I can see that argument. But we can balance it by considering impaired or injured humans. Consider someone who has lesions in Wernicke’s area, making them unable to understand or communicate with language. Are they still phenomenally conscious? Or someone with prefrontal lesions that make them unable to introspect. Or a patient with damage to the anterior cingulate, giving them akinetic mutism. These cases seem to present similar dilemmas to non-human animals, yet if the human shows any ability to perceive their environment, we tend to regard them as conscious.

    Personally, I agree that there isn’t really a fact of the matter if the question is, “Is X conscious?” It’s a bit like asking, “Is X intelligent?” But if the question is, “What kind of consciousness does X have?”, then a number of intelligent answers seem possible, from “none”, to discussions about sensory consciousness, affect consciousness, imaginative consciousness, introspective consciousness, etc., with each of these having a phenomenal side.

  3. Peter Carruthers

    SelfAwarePatterns, you conclude the central paragraph about adult humans with brain damage of various sorts by saying, “we tend to regard them as conscious.” What we tend to think, or intuit to be the case, has no bearing on the debate, I think. For recall that phenomenal consciousness is an essentially first-personal notion, defined via the phenomenal concepts that each of us can deploy in our own case. Absent a successful theory of what it is that those concepts pick out, I think intuitions about the extension of those concepts across creatures lack any warrant. (And after all, intuitions differ a great deal here: some think that only humans are phenomenally conscious; some that all mammals are; some that invertebrates are too; and — heaven help us — some even intuit that plants, or even sub-atomic particles, are as well.)
    I actually think that there is no fact of the matter about phenomenal consciousness in human infants / young children and some brain-damaged adults. This is because the best theory that we have only applies to them in degrees; but phenomenal consciousness can’t admit of degrees. But since I deny that there is anything to phenomenal consciousness over and above globally broadcast nonconceptual content (which can be targeted by phenomenal concepts and hence give rise to “hard”-problem thought experiments in adult humans), I think this doesn’t matter in the slightest. What matters is figuring out what infants can think and perceive, and more generally, how their minds work. Once we know all that, there is nothing more to know. See my post #5 for further development of this thought.

  4. Eric Thomson

    Thanks for these posts. The phenomenal concepts strategy (PCS) seems promising on first pass, but I just can’t quite get on board with it. Lord knows I have tried!

    My main concern comes up in this post, when you end up at, ‘Then we would know everything that there is to know about their minds, period.’ The problem is that experience is an important aspect of consciousness. If your theory leaves it out, it is incomplete. An alien scientist with no notion of subjective experience who tells us the neuro/physical/informational details about someone (including GW), but nothing about subjective experience, is telling an incomplete story. There is a rich suite of phenomenological facts that are simply invisible to them. Now this doesn’t imply that physicalism is false (according to PCS it is a mere epistemic/conceptual, not ontological, limitation). But it does imply that scientism is false, and that the alien *doesn’t* know everything about their minds (that there is experiencing going on there). So we need a pluralist story about concepts/epistemic access free of scientism. That is, phenomenology is legit, real, and gives access to a conceptual/epistemic landscape literally unavailable in any other way. This seems to follow from any reasonable PCS.

    What is unsatisfying is that we seem to erect this infrastructure that applies to only one thing in biology: consciousness. At some point it starts to feel like epicyclic special pleading. At what point is it more natural to simply say that consciousness is different? I’m not sure, frankly.

    A second (and less central) concern is with the conclusion that there is no fact of the matter about animals’ experiences. That seems a non sequitur, even if you buy into PCS. For argument’s sake, assume that cats are in fact conscious (i.e., that there is a fact of the matter), and also that, because of PCS, we cannot know the nature of said experiences. It seems the right move is to expand your epistemic framework to accommodate these two facts, not to conclude there is no fact of the matter about cat consciousness. The only way I can see this going through is if you are tacitly pushing verificationism, which is orthogonal to PCS (and has serious problems).

    • Peter Carruthers

      Eric, there are two things that can be meant by “the theory leaves out subjective experience.” It can mean, “knowledge of the theory doesn’t put you in a position to *have* subjective experience.” This is obviously true. Just knowing about globally broadcast nonconceptual content doesn’t cause you to undergo globally broadcast nonconceptual contents. (Note the echoes here of Jackson’s Mary.) But this is unproblematic. Theories aren’t supposed to cause experiences. They are just supposed to tell you what experiences are. Alternatively, your claim can mean, “knowledge of the theory doesn’t explain why I should undergo *this*”, where you deploy a phenomenal concept for some experience you are undergoing. But that just brings us round to the phenomenal concept strategy again. I explain to you what enables you to think such thoughts using third-person terms, and why, given that capacity, those thoughts should fail to be entailed by a complete physical / functional / representational description of you. You should then (if you are rational — I wish I could insert a smiley-face here) be satisfied.

      • Eric Thomson

        Peter: I agree, partly. I don’t expect a theory to evoke subjective experiences any more than knowledge of photosynthesis should cause one to photosynthesize.

        But I would expect a theory to capture the most important aspects of conscious experiences. If subjective experience is just neuro/functional state X, then the aliens will be able to discover this. But it should also tell us that there is subjective experience present in a system instantiating X. By hypothesis, it doesn’t. This is the crux of the problem, the basis of my claims about incompleteness.

        I’m not asking for science to tell us the subjective character of red, but to tell us that there is subjective experience of *some* sort. The hurdle I take to be reasonable: I would expect a complete theory to leave the cat-studying aliens (mentioned above) in a position analogous to Mary before she gains color vision. That is, they would know they lack color experiences, but at least they would know that there are experiences of color.

        The PCS story seems to accept the claim that it is not possible to jump this hurdle. This gives too much ground to dualism, but what alternative do I have as a physicalist?

        One, the aliens could recreate state X in themselves (using a prosthetic system and optogenetics or whatever), and come to learn about experience that way. But that would be cheating. They would be giving themselves phenomenology on the sly.

        But what if we tweak that thought experiment: lightning hits them and they suddenly have concepts as if they had already had the experiences, though they had never actually had them (e.g., a variant on Swamp Mary, or whatever)? I know this is outlandish, but it suggests that they could at least have the concept without ever having the experience, and maybe we are just not currently smart enough to see how they could do this naturally, without lightning or surgical procedures. Maybe they could reach that point by just being really, really good scientists. So one of the key premises that makes PCS necessary would be undercut.

        Consciousness science is young, so maybe we don’t need a knockout answer just yet, and we tend to get stuck in local minima. I see PCS as a really tempting, but ultimately unsatisfying, local minimum. (This isn’t to say that phenomenal concepts don’t exist; I just don’t think they should be expected to do quite so much heavy metaphysical lifting.)

        • Peter Carruthers

          Eric, let me just address your main point: “But it should also tell us that there is subjective experience present in a system instantiating X. By hypothesis, it doesn’t.” By whose hypothesis? Not mine. That is the point of deploying the zombie-zombie argument as part of the phenomenal concept strategy. A creature that is fully functionally and representationally identical to myself, and can deploy first-person indexical concepts for its globally available nonconceptual states, will be subject to explanatory gap and zombie thought experiments, just as I am. One can see in advance, from a purely third-personal perspective, that he will be able to conceive of a zombie version of himself, thinking, “But there could be a creature exactly like me except that it lacks *this* [feel of a percept of red].” This is sufficient reason to think that the zombie has whatever I have, and that there is no property in myself remaining to be explained.
          Of course, I can still go on to think, “But maybe when the zombie imagines a zombie version of himself he isn’t imagining the absence of *this*.” But the phenomenal concept strategy can also explain why this thought is still thinkable.
          And of course, too, in my view the theory doesn’t tell us whether a cat has states like *this*. On the contrary, it claims that there is no fact of the matter. But this is only a negative strike against the theory if you beg the question and assume that subjective experience is some sort of intrinsically-subjective property over-and-above those that can be third-personally characterized — that is, only if you assume some sort of qualia realism.

          • Eric Thomson

            Peter thanks for the response. I was arguing that some alien species of super-scientists with no theory of consciousness (and no phenomenal concepts) should be able to tell us that a species has subjective experience. I take this as a kind of minimal hurdle for a robust materialist theory.

            >>> And of course, too, in my view the theory doesn’t tell us whether a cat has states like *this*. On the contrary, it claims that there is no fact of the matter.

            I don’t expect a good theory to reveal the specific qualitative character of the cat’s experience, but just to tell us there is (some) experience there, that we will not have unless we are in the right neuro/functional state. Stating there is no fact of the matter seems a mistake for reasons I mentioned above (and seems orthogonal to PCS).

            >>> But this is only a negative strike against the theory if you beg the question and assume that subjective experience is some sort of intrinsically-subjective property over-and-above those that can be third-personally characterized — that is, only if you assume some sort of qualia realism.

            I’d suggest it is a negative strike if you think subjective experiences are real. I think they are real, so a theory that cannot tell us how they are distributed in the universe (including to nonhumans) is an incomplete theory.

            Stepping back, though, the core problem I have with PCS is that it seems to simply accept the explanatory gap, and then tries to explain why it isn’t a problem. I’m pushing the idea that accepting the gap is in the same vein as accepting the explanatory gap between life and chemistry pushed by the vitalists. It just seems premature, and part of a larger-scale local minimum that has ensnared many consciousness researchers (not just philosophers).

          • Eric Thomson

            I wrote:
            “I was arguing that some alien species of super-scientists with no theory of consciousness (and no phenomenal concepts) should be able to tell us that a species has subjective experience.”

            This was a bit unclear. I just meant that if they *start out* without a theory of consciousness, then after a period of intense study of some species (e.g., a cat) they should be able to tell us that the species has subjective experiences.

  5. Alan

    I don’t understand why you say “It makes no sense to talk of degrees of phenomenal consciousness”. Surely, with or without clumping into concepts, my phenomenal consciousness can be measured in terms of the number of bits of information that I am aware of, and I am not at all sure that the bit corresponding to the concept “I feel conscious” becomes switched on or off either instantly or with certainty at any particular time.

    Admittedly the rate of change of how much information is contained in what it’s like to be whatever we are is fairly rapid relative to our own subjective measure of time. But for me this does not mean that “first-personal phenomenal consciousness is all-or-nothing”.

    When we turn on a light switch it *seems* to have an instant effect, but of course it really does not. In fact, no physical (or neurological) phenomenon is a “sharp step function”. And the idea of such sharpness for subjective mental phenomena is further confounded by its dependence on the time scale of those phenomena themselves relative to an external observer, who in principle may operate on a scale faster than even a fully conscious human observer.

    You seem to be suggesting that there has been an observed difference between the time scales of “global ignition” in humans compared to other species. Would you be willing to provide references on that?

    Also, if the “first-person” in “use of the first-person phenomenal concepts that define the topic of phenomenal consciousness” is taken seriously, can you explain how my specifying some degree of resemblance to my own global broadcasting network as sufficient for phenomenal consciousness would be only a stipulation, not a discovery, when applied to you? (Or vice versa, of course.)

    • Peter Carruthers

      Alan, there is a lot here, and I can’t address it all. But to your main question asked at the outset: of course I agree that the contents of phenomenally conscious experience admit of degrees of richness and sparseness. You can be conscious of more or less. But note that these are degrees of transitive creature consciousness, or of perceptual awareness. Even the faintest, least determinate, least rich experience (e.g. a vague mumble of voices in the apparent distance as you slip under the anesthetic) is either determinately, categorically, *like something* to undergo, or not.
      And yes, the “step function” for global broadcasting is only sharp relative to human powers of discrimination, but it does seem to be pretty sharp. And no, of course I’m not claiming anything about the timescales involved in global broadcasting across species.

  6. jeffrey g kessen

    I do wish that philosophers would dig more deeply into the nature of “what it’s like” to be conscious. Why isn’t there something circular in defining a conscious state relative to other conscious states?