5. The Positive Semantic Argument

As I emphasized on Wednesday, phenomenal concepts are, in a sense, private. They are acquaintance-based indexicals that aren’t governed by any set of public norms, and which don’t defer to the expertise of others. Nor do they make any commitment to the underlying nature of the states referred to. When attempting to project one of those concepts into the mind of another creature, then, one is asking whether the mind of that creature contains the same sort of introspected feel as this. If “same sort” here picked out a distinctive kind of property—a quale—then the truth-condition for the thought would be clear. But given that there are no qualia, the truth-condition becomes problematic. In effect, it concerns what would become of the dispositions underlying one’s own use of “this sort” if they were instantiated in the mind of the creature in question. But that will require us to suppose that the mind of that creature is quite other than it is.

Consider an analogy. Suppose that while visiting a neighborhood in a city to which one is relocating, one exclaims, “This is the sort of neighborhood where we should live!” In the case I envisage, there is no single property of the neighborhood that underlies one’s use of “this sort”. There might be a confluence of factors of which one has only partial awareness, and which one couldn’t articulate. Notice that there is a sense in which one’s use of “this sort” here is “private”, not governed by any public norms, and not deferring to the usage of others. But now consider another neighborhood in the same city and ask, “Is that neighborhood, too, of this sort?” What fixes the truth-condition for a correct answer? I suggest this: that if someone with the exact same dispositions-to-judge that underlay one’s judgment about the initial neighborhood were to be exposed to the second, then they would judge that it, too, is of that sort.

Likewise, then, consider the truth-condition for the statement, “The mind of the monkey contains a state of this sort” (where “this sort” reflects use of a first-person phenomenal concept). The truth-condition will be that if the dispositions underlying my use of the first-person concept were to be instantiated in the mind of the monkey, then they would issue in a judgment that there is a mental state of just that sort. But the antecedent of this counterfactual requires us to suppose that the mind of the monkey is quite other than it is. It requires us to suppose a mind that is capable of explicit, reflective, higher-order thinking about its own nonconceptual states. But our original question, of course, was about the mind of the monkey as it actually is. And with respect to that, the proffered counterfactual is unevaluable. To evaluate it, we have to go to a different world, a world in which monkey minds are much more similar to human ones than they actually are.
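
Put schematically (the notation here is just a gloss of mine on the proposal, assuming a standard counterfactual connective; it adds nothing to the argument itself):

```latex
% Schematic truth-condition (all notation assumed for illustration;
% \Box requires amssymb):
%   D         = the dispositions-to-judge underlying my use of "this sort"
%   m         = the mind of the monkey, as it actually is
%   Inst(D,m) = those dispositions are instantiated in m
%   Judge_m   = m judges that it contains a state of this sort
\[
  \text{``\(m\) contains a state of this sort'' is true}
  \;\iff\;
  \mathrm{Inst}(D, m) \mathbin{\Box\!\!\rightarrow} \mathrm{Judge}_m
\]
% The antecedent Inst(D, m) holds only at worlds where m has a
% reflective, higher-order architecture; the actual monkey mind lacks
% that architecture, so the counterfactual cannot be evaluated with
% respect to the monkey's mind as it actually is.
```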

So there is no fact of the matter about states like this (phenomenal consciousness) in animals. (Nor in human infants, come to that.) But the fact that there is no fact of the matter doesn’t matter. For there is no real property to be inquired about, beyond nonconceptual content that gives rise to zombie thought experiments and such-like in humans. There are many facts to be learned about animal minds (and infant minds) and the extent of their resemblances to and differences from the adult human mind. Once we know all that, there is nothing more to know.

15 Comments

  1. I think I’m with you ontologically. There is no fact of the matter on whether another species has a mental state “like mine.” And I appreciate your clarifications for infants and brain-injured patients. Although many will see it as a bullet bitten.

    But I do think it’s worth noting why something ever fails to be a fact of the matter: ambiguity. In this case, ambiguity about what we mean by terms like “phenomenal consciousness” or “like this”. “Like” to what degree? And which “this” exactly?

    As soon as we try to nail down a specific definition, we run into trouble, because no specific definition is widely accepted. The vague terminology masks differences. It’s like lawmakers who can’t agree on precisely what a law should say, so instead they write something vague they can agree on, and leave it to the courts to hash out.

    And yet, I think we can still recognize that the way various species process information will be like the way we do, to varying degrees. As you note, there is no magical line. But great apes are a lot closer than dogs, which are closer than mice, which in turn are closer than frogs, fish, etc.

    And there’s something to be said for focusing on systems that do globally broadcast nonconceptual (irreducible) sensory or affective content. That criterion does seem to rule out plants and electrons.

    • Peter Carruthers

      I don’t think it is a matter of ambiguity, nor vagueness, in our phenomenal concepts. It is rather that when one uses such a concept one makes no commitments about underlying structures or functional roles. So a sort of indeterminacy, perhaps.
      And yes, of course one can, and should, study the nature and forms of mentality across species. Part of this will involve investigating working memory across species, global broadcasting across species, and so on. But there is no reason (once the consciousness question is laid to rest) why we should focus especially on understanding these aspects of the cognitive systems of other species, as opposed to – for example – dorsal sensorimotor networks in those creatures (whose outputs are unconscious in ourselves).

  2. I may be missing the point completely again, but doesn’t your example of the monkey (or human infant) really boil down to the lack of a *concept* of phenomenal consciousness rather than lack of phenomenal consciousness itself?

    • Peter Carruthers

      Alan, the concept of phenomenal consciousness is first-personal. Phenomenal consciousness is what gets thought about when I entertain thoughts like, “When I die, all *this* will be gone.” Hence when raising the question whether another being has states like *this*, the truth-condition is fixed by the counterfactual, “If the dispositions-to-judge underlying my use of the concept *this* were instantiated in the mind of that other, then they would issue in a judgment that the mind in question contains states like *this*.” In cases where the other being in question is a normal, fairly mature human, this counterfactual comes out true; and so those other beings do, indeed, have phenomenally conscious states (= states like *this*). But in cases where the mind in question is very different from my own — and in particular, is incapable of formulating indexical thoughts about its own perceptual states — the counterfactual is unevaluable. This is because projecting my own dispositions-to-judge into that other mind presupposes that the mind in question is other than it is (i.e., that it has the structure of the human mind).

  3. Oliver S.

    For example, on the basis of my experiential acquaintance with many phenomenal colors, I acquired and now possess a general concept of phenomenal color (color-as-experienced), and also general concepts of experiencing (being able to experience) phenomenal color and of experiencing (being able to experience) something which belongs to the same kind of experience (i.e. visual experience) as my experiencing of phenomenal color. I fail to see how there can be no fact of the matter as to whether a given nonhuman animal has these properties (abilities) or not.

    (By the way, having just ordered a copy, I’m going to read your new book.)

  4. Didion Moves

    First impression: It seems like a fairly practical approach that is a little speculative w.r.t. the science, and less speculative w.r.t. metaphysics.

    First-pass tensions?
    (1) Facts of animal conscious minds are questioned; the justification, in part, is that a ‘degree of similarity’ criterion would be arbitrary [in Pt 3-5]
    (2) Facts of human animal minds are not questioned; the implicit justification, in part, is that they satisfy some ‘degree of similarity’ with a given person’s first-person case.
    (3) No qualia
    (4) Phenomenal consciousness is Ineffable and Private, putting us in (immediate?) contact with ‘contents’ (intrinsic properties?)

    Tension (1)-(2): Supposing we are well motivated to do away with facts of animal p-consciousness, among the animals this applies to are monkeys and apes (including the bipedal ones flapping their meat holes all the time!), since whatever *this* is that I’m pointing to with my what-it’s-like talk will not be realized by identical events, and some (arbitrary?!) degree of similarity will need to be stipulated.

    Tension (3)-(4): From the 3i-p model (intrinsic, ineffable, immediate and private) you preserve ineffable and private. A qualephile will pick up on this, and note that the other two (intrinsicness, immediacy) are left open in the unanalysed appeal to ‘contents’, such that your account remains consistent with a myriad of priors w.r.t. the nature of (or lack thereof) mental contents.

    Confession (1)-(4): I worry about / get frustrated by the narrowness of focus in otherwise excellent phil mind lit, since it sometimes seems like kicking the can around the roundabout. In light of such worries, does the book get into the nature of content? [cognition? artificial systems?…] Or maybe it can be paired up nicely with another work?

  5. Peter Carruthers

    Let me just pick up on your final remark. I agree entirely! I think the whole phenomenal-consciousness issue has sidetracked a lot of philosophy of mind, and some cognitive science, in unproductive ways. The main moral of the book is that we should set it aside and move on to more important matters. One of these would be the issue of content — what it is, and where it is needed for explanation. Here I can recommend Nick Shea’s recent book *Representation in Cognitive Science*. Or there are issues of more general cognitive architecture, where you might be interested in my 2015 book, *The Centered Mind*.

  6. Oliver S.

    “‘Content’ is a useful shorthand for the objects, properties and conditions that a representation refers to or is about.”

    (Shea, Nicholas. Representation in Cognitive Science. Oxford: Oxford University Press, 2018. p. 6)

    One problem is the ambiguity of “content”: The representational content of a mental state can be the state-internal representing thing (vehicle or “material” of representation), and it can be the state-external represented thing (state of affairs). Shea uses “content” in the latter sense, whereas I prefer to use it in the former sense, and to call the represented thing (state of affairs) the representational (intentional) *object* (or *objective*, to use Meinong’s term for intentional states of affairs).

    Moreover, if the level of sign-meaning (sign-sense) is sandwiched between the level of the sign-vehicle and the level of the sign-referent, we get a third possible meaning of “representational content”.
    For example, the representational content of my (consciously) thinking that London is north of Rome can be a mental token of the sentence “London is north of Rome”; it can be the meaning (sense) of this sentence, i.e. the proposition that London is north of Rome; and it can be the state of affairs of London’s being north of Rome or the (nonpropositional) fact that London is north of Rome.
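
    Schematically, the three candidates might be separated like so (a toy sketch; the type and field names are illustrative labels of my own, not Shea’s terminology):

    ```python
    # Toy illustration of the three possible senses of
    # "representational content" distinguished above.
    from dataclasses import dataclass

    @dataclass
    class MentalRepresentation:
        vehicle: str   # the state-internal representing token (sign-vehicle)
        sense: str     # the meaning expressed (sign-sense)
        referent: str  # the represented state of affairs (sign-referent)

    thought = MentalRepresentation(
        vehicle="a mental token of the sentence 'London is north of Rome'",
        sense="the proposition that London is north of Rome",
        referent="the state of affairs of London's being north of Rome",
    )
    print(thought.referent)
    ```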

  7. James of Seattle

    Peter, first, thank you for an excellent series of essays. You have provided me with a resource for explaining many of these ideas.

    Second, I am trying to get a handle on your understanding of the role of the global workspace with respect to experience. Is the creation of a representation of a content in the workspace sufficient for an experience? Dennett suggests not, saying you should explain “and then what happens”. I agree, because if nothing responds to that representation in the workspace (which means no memory of it was generated), it doesn’t seem like there was an experience. The richness or sparseness of the experience can be explained as the number and specifics of the responses to any one representation.

    Another question is how big must the audience to the global workspace be? I would expect normal humans have the largest audience (especially those generating higher order thoughts), and animals would have smaller audiences (those generating hormonal responses, action plans, memories), but what about the degenerate case of an audience of exactly one? Is that enough to flip the switch of consciousness to on?

    Finally, you mention the “global ignition” associated with a representation achieving the “global broadcast”. This ignition may be in doubt given this recent paper (https://www.biorxiv.org/content/10.1101/2020.01.15.908400v1) suggesting that “ignition” may be more associated with reporting an experience than simply having it.

    • Peter Carruthers

      James, to your “second”: yes, I agree that the question of what happens next, once a content has made it into the global workspace, is an important one. There are all sorts of important scientific questions that open up here. But I think it is a mistake to believe that a content could make it to the workspace without having any effects. Neural signaling doesn’t work like that. It isn’t like posting a letter to someone, where the letter might sit unopened in their letter-box. Any globally broadcast neural signal will inevitably have effects on the downstream consumer systems, even if those effects are quickly suppressed by other factors, and so aren’t recorded in memory or reportable (see the toy sketch at the end of this reply). I have no problem with there being unremembered, unreported conscious contents. But belief in them will have to be theory-driven, of course.
      To your “another question”: how big for what? For phenomenal consciousness? Here I maintain that once we get away from the mature-human set of consumer systems, there is no fact of the matter. For the very idea of phenomenal consciousness is essentially first-personal, and when we attempt to project our phenomenal concepts into minds significantly unlike our own, we have to rely on counterfactuals that are unevaluable. (See the post above.)
      Finally, your “finally”: yes, I know that there are a number of findings suggesting that the EEG P3b is a signature of response preparation rather than consciousness. But that doesn’t count directly against global-workspace theory. All it shows is that something people thought was a signature of consciousness isn’t really. I think there is lots of other evidence that contents have to make it to PFC to be conscious. One paper I especially like is Panagiotaropoulos et al. 2012, in *Neuron*. Using single-cell recording in monkeys viewing binocular-rivalry stimuli [note that, in this context, this is strictly only evidence of a form of access-consciousness, but it enables us to generalize about access-consciousness more broadly], they show that face-selective cells in lateral PFC are active when, and only when, the face image dominates. And this is in purely passive viewing conditions, with no responses prepared or made.
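
      To make the broadcast point vivid, here is a toy sketch (purely illustrative: the consumer systems and contents are invented, and nothing here models the neural details). Because broadcasting just is activating every registered consumer, a broadcast content with no downstream effects is not an option:

      ```python
      # Toy illustration: a globally broadcast content necessarily
      # reaches every registered consumer system; there is no analogue
      # of a letter sitting unopened in a letter-box.
      from typing import Callable, List

      class GlobalWorkspace:
          def __init__(self) -> None:
              self.consumers: List[Callable[[str], str]] = []

          def register(self, consumer: Callable[[str], str]) -> None:
              self.consumers.append(consumer)

          def broadcast(self, content: str) -> List[str]:
              # Broadcasting activates every consumer; the "richness" of
              # the resulting episode is fixed by how many consumers
              # respond and what they do with the content.
              return [consumer(content) for consumer in self.consumers]

      workspace = GlobalWorkspace()
      workspace.register(lambda c: f"memory: stored '{c}'")
      workspace.register(lambda c: f"planning: acting on '{c}'")
      workspace.register(lambda c: f"report: verbalizing '{c}'")

      for effect in workspace.broadcast("red apple ahead"):
          print(effect)
      ```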

  8. “So there is no fact of the matter about states like this (phenomenal consciousness) in animals.”

    If there is no fact about whether an animal has phenomenal consciousness or a particular state, we can’t assume that they have phenomenal consciousness or that they lack it, nor whether, if they have it, it is like a human one or unlike a human one. But why limit this to animals? Plants could have phenomenal consciousness, perhaps vastly different from a human one. How about a lake or a mountain? We have no fact that can tell whether they do or do not have phenomenal consciousness. Sure, plants, mountains, and lakes do not seem to have the equipment for phenomenal consciousness, but that would be a fact. And there is no fact, you say.

  9. Oliver S.

    @Peter Carruthers:

    “[A] creature is transitively conscious of an object or event when it /perceives/ that object or event.”

    (Carruthers, Peter. Human and Animal Minds: The Consciousness Questions Laid to Rest. New York: Oxford University Press, 2019. p. 5)

    Is perception the only form of transitive creature consciousness? What about imagination and cogitation? Isn’t imagining or thinking about something (something existent/real at least) a way of being conscious of it? For example, when I imagine my mother’s face, am I not conscious of her?

    “…/creature/ consciousness, which can be either /transitive/ or /intransitive/.”

    (Carruthers, Peter. Human and Animal Minds: The Consciousness Questions Laid to Rest. New York: Oxford University Press, 2019. p. 4)

    What about a distinction between transitive state consciousness defined as mental states with aboutness (intentionality or reference) and intransitive state consciousness defined as mental states lacking aboutness?

    • Peter Carruthers

      Oliver, I don’t really care whether one extends the language of “conscious of” to cases of imagination or thought. Fine with me either way, I think. But I don’t think there are any conscious mental states lacking aboutness. As you will see later in the book, I endorse representationalism about the mental in general, including states like pains and moods.
