Is Consciousness a Spandrel?

Next week I’m presenting at a Summer School on Evolution and the Function of Consciousness organized by Stevan Harnad in Montreal. The paper is co-authored with Zack Robinson and Corey Maley. Comments welcome, especially if they come before the summer institute.

Abstract: Assigning a biological function to phenomenal consciousness appears to be needed to explain its origin through evolution. For evolution by natural selection operates on organisms’ traits based on the biological functions they fulfill. And yet identifying the function(s) of phenomenal consciousness has proven difficult. Some have proposed that the function of phenomenal consciousness is facilitating mental processes such as reasoning or learning. But mental processes such as reasoning and learning seem to be possible in the absence of phenomenal consciousness. It is difficult to pinpoint in what way phenomenal consciousness enhances such processes. In this paper, we explore a possibility that has been neglected to date. Perhaps phenomenal consciousness has no function of its own because it is either a byproduct of other traits or a (functionless) accident. If so, then phenomenal consciousness has an evolutionary explanation even though it fulfills no biological function.

18 Comments

  1. Hi Gualtiero.

    You write: “Some have proposed [1] that the function of phenomenal consciousness is facilitating mental processes such as reasoning or learning. But [2] mental processes such as reasoning and learning seem to be possible in the absence of phenomenal consciousness.”

    Similarly, the circulation of blood seems possible in the absence of a heart. (We can conceive of alternative mechanisms.) This doesn’t mean we shouldn’t explain the heart as selected to help circulate the blood. So why take [2] as evidence against [1]?

    Perhaps you would say that “it is difficult to pinpoint in what way phenomenal consciousness enhances such processes.” But why do we have to pinpoint the way that phenomenal consciousness enhances such processes before we can say that it likely does enhance them? The reasoning and learning that occur in the presence of phenomenal consciousness are very different from what occurs in its absence. Further, I often reason and learn *via* phenomenally conscious thought, so it seems that phenomenal consciousness enhances reasoning and learning *in some way*, even if we have trouble pinpointing that way. Compare: We can’t pinpoint the way in which SSRIs alleviate depression. (Activity on serotonin receptors alone is insufficient.) But we are confident that SSRIs do (often enough) alleviate depression.

    I’d be interested to hear your thoughts.

    Thanks!

  2. …but feel free to just tell me to read the paper if it addresses these questions — I’ve realized that perhaps I was jumping the gun by posting questions after only reading the abstract. (Sorry if that’s the case!)

  3. Hi Ben,

    I’m obviously not Gualtiero, but reading the paper I think this might be relevant to your comments.

    Gualtiero (and co-authors) argue that we can imagine an artificial intelligence that can perform all mental functions that a human being can without having phenomenal consciousness. They argue that this poses a problem for the adaptive-value account, because then phenomenal consciousness isn’t a functional improvement in systems with human-level mentality.

    “In other words, the adaptive-value theorist must argue that consciousness is a functional improvement in systems with human-level mentality. But it seems that all relevant mental capabilities could be possessed without having phenomenal consciousness. It is unclear why developing phenomenal consciousness should improve a system’s functionality.” (3)

    They do briefly mention that it may be a unique fact about actual biological creatures on Earth that phenomenal consciousness is necessary for those functions (page 4), which could potentially save adaptive-value accounts. Moreover, they also suggest that exhaustively analyzing adaptive-value arguments and counterarguments is simply beyond the scope of their paper, as they are primarily concerned with whether phenomenal consciousness could be either (1) maladaptive or (2) a spandrel or evolutionary accident.

  4. Hi Matt. Thanks for responding.

    I do think my objections remain, though. The fact that “we can imagine an artificial intelligence [with human-level mental capacities yet without] phenomenal consciousness” just doesn’t seem relevant. No more relevant than the fact that we can imagine organisms — indeed, we know of *actual* organisms — that circulate bodily fluids without a heart. The human heart was still selected to circulate blood.

    This isn’t to say that “it’s possibly a unique fact about actual biological creatures on Earth that phenomenal consciousness is necessary for those functions.” It’s simply to say that phenomenal consciousness seems to be (part of) one mechanism for implementing those functions. A heart isn’t necessary to circulate blood, but that’s still its function.

    I don’t think any of this counts as an “exhaustive analysis” of adaptive value arguments and counterarguments. These are quite general considerations that suggest there is little motivation to explore how or why phenomenal consciousness might be maladaptive or a spandrel.

  5. Corey

    Hi Ben. Yes, it’s possible to imagine creatures without hearts that circulate blood. But in creatures like us, if we were to remove the heart, blood would not circulate. We hypothesize that, like philosophical zombies, creatures like us would not suffer any loss of function were phenomenal consciousness (PC) to be “removed.” Of course, this is an empirical question: perhaps (as we mention in the paper) it only seems possible to remove PC while leaving all other aspects of mentality intact. It may also be an empirical question whether removing a human’s heart would leave all other aspects of her circulatory function intact, but I think we all know how that would turn out were we to perform the experiment. The cases seem rather different (in part, of course, because we’re pretty clear about the function of the heart, and not at all clear about any functions of PC).

    One possibility is that PC is a necessary correlate of some other feature of mentality such that, were PC to be removed, there would be loss of function. This would be analogous to the beating sound of the heart: the only way to remove the beating sound is to stop the heartbeat, and then circulation stops. However, PC would still be a byproduct, and not something that has a function that was “selected for.”

    In any case, we think it is worth getting these ideas out, because they are live possibilities. Of course we may be wrong, but given that nobody has any compelling case for the function(s) of PC, it is worth exploring all available options.

  6. Hi Corey. Thanks for explaining. I’ve read the paper now, and thought about your comments, and have a few more things to say. Please don’t feel compelled to respond, though — I don’t want to be pestering you guys if you don’t think this is helpful. Anyway, here it is:

    I think going the zombie route raises more problems for an evolutionary explanation of PC than it solves. The zombie route suggests dualism. But then PC is completely severed from the physical, so looking for an explanation of PC as a neutral or maladaptive product of biological evolution is simply misplaced.

    This isn’t to claim that BAV is epiphenomenalist or dualist — mistakes you head off in the paper. It’s to say that your (zombie) motivation leaves you with a seemingly unstable position.

    Other assorted comments on the paper:

    – You write that if an adaptive-value account of PC is true, “then it is impossible to have non-conscious functional duplicates of conscious beings.” That doesn’t follow, as biological analogy (i.e., the counterpart of homology) illustrates. The fact that something is adaptive doesn’t mean you can’t get the same result in a different way. Similarly for the rest of the paragraph from which that quote is taken, and the paragraph two paragraphs after that. (This isn’t to claim that PC in humans is *necessary,* the response you consider in the paper.)

    – The argument from functionalist intuitions appears to violate functionalism, understood as the thesis that mental properties are functional properties. Functionalism implies: same function (at the right scale), same mental properties (including phenomenal properties).

    Hi guys, you raise points I discuss explicitly in my new book (it’s 1.2 MB): https://home.iprimus.com.au/marcus60/1.pdf The first issue is the role of neurons in both enabling the gross anatomy (non-neuronal cells) and facilitating awareness, which may be an intrinsically combined role.

    One level where this is most apparent is in hormonal secretions enabled by the brain, which are potent and clearly affect our state of awareness in their distribution across our anatomy. By combining that enabling role with awareness, neurons would be intricately chemically suited to represent the gross anatomy in vision, touch, etc.

    The second issue is definitional, but perhaps I will leave that to you to explore in a flip through my book if you have time, as I have a strict and clear set of definitions, using common words for recognizable processes, with no jargon. The key is organizing the known facts properly, which I hope to have done.

    My short answer on the status of ‘phenomenology’ is that neurons represent the gross anatomy ‘as if’ the gross anatomy is experiencing real qualities of vision, sound, etc. that are attended by ethereal thoughts.

    It is a neuronal flow from the eyes, ears, etc. to identification of real qualities; the flow then extends between all sites (eyes, ears, hands, etc.) in ethereal S-M cortices as one structure for thoughts attending those real qualities, as one common frame for them all, attending them according to current priority (looking now, touching later), before responsive outputs are sent from the PMC. I take a reductive approach which extends to a complete explanation of phenomenology as you go deeper.

  8. Useful discussion. I haven’t read the paper (I don’t do .docx), but I have a question anyway. 🙂

    I view an experience of a toothache as a complex neural process. If we take away the experience (by hypothesis, the neural process), there are radical consequences. I do not go to the dentist. I don’t tell my wife that my tooth hurts. I don’t do a thing, because this perturbation of my consciousness requires that you effectively nuke my brain: my world has been obliterated. That’s the only way to remove my consciousness.

    Given this materialist, neurocentric approach, I don’t at all see the pull of the claim that ‘creatures like us would not suffer any loss of function were phenomenal consciousness to be “removed”’. It seems to treat consciousness as somehow detachable from the brain, to use the zombie intuition in a way most materialists won’t like.

  9. Corey

    Hi Eric. I’m certainly not going to convince you of much of anything in the space of a blog post. But let me say a few things (and note that I’m speaking for myself at this point, and not necessarily my co-authors).

    I’m completely sympathetic to your materialist, neurocentric approach. But I would still separate the experience of a toothache from the tissue damage of a toothache, and both from the behavioral consequences of a toothache. Suppose there were two very similar organisms (or artificial systems, if you like) that, after suffering tooth (or tissue) damage, initiated behavioral patterns that tend to heal (or repair, or mitigate) that damage. That seems to be an important thing to do following such damage. So important that it might even be “selected for.”

    Now, suppose that one such organism has a phenomenal experience of pain that accompanies this damage, while the other does not, though their behavioral repertoires are the same. It seems to me that this phenomenal experience itself is not the kind of thing that could be selected for.

    Our everyday experience suggests the following picture: tissue damage causes pain, and pain causes (or motivates) behavior. On a materialist, neurocentric approach, perhaps tissue damage causes neural firings, which are themselves identical to pain, and which also motivate behavior. We know that anesthetics block both pain and neural firings, suggesting a pretty tight coupling! To me, this still leaves unanswered why the phenomenal experience is there at all, when it seems that a damage-detecting mechanism need not be accompanied by experience (think of all of the mechanisms in our bodies that do very complicated things without our experiencing them in the slightest). One possibility is just that the phenomenal experience is a byproduct or spandrel, and has no selected-for function. But it’s just a possibility.

  10. Eric Thomson

    Corey, you wrote: “To me, this still leaves unanswered why the phenomenal experience is there at all, when it seems that a damage-detecting mechanism need not be accompanied by experience (think of all of the mechanisms in our bodies that do very complicated things without our experiencing them in the slightest). One possibility is just that the phenomenal experience is a byproduct or spandrel, and has no selected-for function. But it’s just a possibility.”

    This is one version of Chalmers’ hard problem intuition: given any physicalistic story about consciousness, we will always have the question remaining ‘Why is this process conscious?’ There are different ways to respond to this, but I think my favorite is ‘Why not?’ My best theories and evidence suggest that consciousness is a complex brain process. What more can I ask for than fit with our best theories and evidence?

    This seems orthogonal to questions of function. You could ask the ‘Why?’ about any naturalistic theory, not just functional ones. Ben’s initial response also seemed on target: we can also imagine blood pumping w/o a heart.

  11. Eric Thomson

    I guess you could argue that there is no good naturalistic theory of representational content in which contents have causal bite, and build that into an argument for epiphenomenalism about representationalist theories of consciousness.

    That would probably be more promising than these claims about the ability of function X to happen without Y. After all, that only shows that Y is not necessary for X, not that Y cannot be sufficient for X, or that Y, coupled with some other things, cannot be sufficient for X.

  12. Thanks, Ben and everyone else, for the helpful comments, which I have taken into account in revising the paper. I just want to stress that the view that consciousness is a spandrel or evolutionary accident is consistent with physicalism. I am a physicalist and have no inclination towards dualism. When we mention zombies in the paper, we are neutral between non-conscious physical duplicates and non-conscious functional duplicates. In fact, for all we care there may be “zombies” that are not even complete functional duplicates: they duplicate all biological functions while leaving non-biological functions unduplicated. Such “zombies” would be possible if consciousness has no biological functions. The goal of the paper is simply to discuss the possibility that consciousness is a spandrel or evolutionary accident. We think it’s a possibility that deserves to be considered, if nothing else because it might push us to think about the biological function of consciousness a bit more carefully.

  13. If you reduce it to physical processes, what you have is chemicals in diverse arrangements (gross anatomy) with electromagnetic current through them to move them. Current also runs in neurons, as channels facilitating the movements, collecting, and processing. The definition of awareness may be current through specific chemical sites, diversifying across eyes, ears, and so on into qualities of awareness that are processed further in shared cortices for thoughts attending those qualities of vision, sound, and so on. For more, read https://home.iprimus.com.au/marcus60/1.pdf

  14. I really like this paper, though I personally think the question of whether consciousness is a spandrel as you pose it is problematically loaded.

    I want to ask you a question I’ve been plaguing people with for a few months now: Given that consciousness is a recent evolutionary artifact, are we not limited to systems primarily tuned to environmental information when we try to cognize consciousness? And doesn’t this mean that we should expect to make the *same* kinds of mistakes we make when cognizing our environments with only parochial information at our disposal? Which is to say, to make ‘false cause’ inferences, Ptolemaic-type perspectival errors, and to confuse fragments for wholes and aggregates for individuals?

    If this is the case (as I think it is), then it seems to me that consciousness actually *should appear* to be a spandrel, even if it is not. Why? Simply because the drastic limits on the information available to attentional awareness (compared to the complexities of the greater brain) necessitate that consciousness remain blind to the myriad, actual neurofunctional contexts of the information it does receive.

    Imagine the whole brain is War and Peace written in the original Russian, and the conscious subsystem of the brain can only access English phrases here and there, which, given that it lacks any information pertaining to the whole, it has sutured into a lovely haiku it calls ‘Life-world.’

    Shouldn’t we expect that ‘Life-world’ will not seem to belong or ‘fit’ in our Russian War and Peace? To seem like a spandrel?

    The upshot of this thought experiment is that it reveals, at the very least, a profound assumption you make in your paper: that the consciousness you *think* you have is actually the consciousness you’ve got.

    If you answer my original question in the affirmative, I think you’re committed to the likelihood that you do not (that is, that the consciousness you think you have is not the one you actually have), which seems to make questions regarding the global functionality of consciousness premature. Which consciousness? The one you have or the one you think you have?

