2. Composite Subjectivity and Functional Structure

Consider a contrast. The solar system contains my brain as a part; my brain is conscious; the solar system is not conscious (at least in any everyday sense – let’s set panpsychism aside for now). That’s enough to show that having conscious parts is not, all by itself, sufficient for being conscious.

Consider a different example: my brain and me (I’ll be calling the whole organism ‘me’: if you prefer you can call it ‘my whole body’). I think the most natural thing to say about my brain is that it’s conscious, and the most natural thing to say about me is that I’m conscious. Suppose we hold onto both those claims: it looks like we have two conscious beings, one a part of the other. But they’re not independently conscious: I’m conscious because my brain is. Nor are they undergoing two distinct streams of consciousness: I share the experiences of my brain.

So what’s the difference between my brain’s relationship to me, and its relationship to the solar system? Something like: my brain controls what I do, and my brain is sensitive to what happens to me. Many of ‘my’ inputs and outputs correspond neatly with inputs and outputs of my brain, so that states which play a certain functional role in my brain play roughly the same functional role in me. By contrast, my brain’s inputs and outputs have virtually no effect on the overall behaviour of the solar system. Now, as functionalists have found, it’s not so easy to spell out precisely what ‘playing functional roles’ in relation to ‘inputs’ and ‘outputs’ amounts to (cf. Putnam 1965, Lycan 1979, Shoemaker 1981, Block 1992), and sadly I can’t help much there. But in developing ‘functionalist combinationism’, the obvious way to articulate the difference between the relations ‘me – my brain’ and ‘the solar system – my brain’ seemed to be to appeal to something in this ballpark (from page 183):

Conditional Experience Inheritance: If a part of X has an experience which plays a certain functional role, then X shares that experience, with the same or similar functional role, to the extent that the part in question is connected to the rest of X such that its experience has sensitivity to, control over, and co-ordination with other events occurring in X.  
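
To make the logical shape of this principle a bit more explicit, here is one way it might be schematized (the notation is my own gloss, not anything from the book): let \(E(s, e, r)\) say that subject \(s\) undergoes experience \(e\) in functional role \(r\), let \(p \sqsubset X\) say that \(p\) is a part of \(X\), and let \(C(p, X) \in [0, 1]\) measure how far \(p\)’s experience is sensitive to, controls, and co-ordinates with other events in \(X\). A simplified threshold version would then read:

\[
E(p, e, r) \;\wedge\; p \sqsubset X \;\wedge\; C(p, X) \geq \theta \;\Longrightarrow\; E(X, e, r') \ \text{for some role } r' \approx r.
\]

The threshold \(\theta\) is an artefact of the schematization: the principle as stated is graded (‘to the extent that’), so a more faithful rendering would let the degree to which X shares the experience vary continuously with \(C(p, X)\).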

Of course, many people are reluctant to say that both my brain and I are conscious, even though it’s decidedly awkward to describe either as not conscious. This has been called the ‘too-many minds’ problem (see, e.g., Burke 1994, 2003, Merricks 2001, Unger 2006, Gilmore 2017): intuitively there’s only one mind here, but both my brain and I seem like we should have minds (not to mention my head, the top half of my body, etc.), and it seems wrong to deny consciousness to one simply because of the other.

Following Sutton (2014), I analyse cases like this as genuinely containing more than one subject, who instantiate consciousness in a ‘non-summative’ way, i.e. they are both conscious, but the consciousness of one is nothing in addition to that of the other (which is why we intuitively resist saying there are two of them). In short: two subjects sharing one stream of consciousness.

This kind of case is what I call ‘trivial combination’: the whole’s properties come entirely from one part. But once we accept trivial combination, and the principle of conditional inheritance, we can apply the same framework to cases with multiple conscious parts. Confronted with a human body containing two separate brains (one in the head and one in the foot, suppose), we might attribute one or the other brain’s experiences to the whole, depending on which brain had control and sensitivity over the whole body. What if both brains have control and sensitivity, because they work together to coordinate the body’s overall behaviour? Then, I think, it would make sense to say the whole has both sets of experiences, so that there would be three subjects (the whole and each brain) with different sets of experiences.

Here some readers might be thinking:

“Hang on – one body controlled by two brains working together? Isn’t that our actual situation?”

After all, each cerebral hemisphere has some capacity for consciousness independently of the other. I’m happy to accept this (though nothing I say requires it), but it raises some hard questions for functionalist combinationism.

The obvious difference between us, with our two hemispheres, and our imagined person with two separate brains, is that in the former but not the latter case the whole’s consciousness is unified. Even if we accept that each hemisphere (or even smaller brain structures) is a subject in its own right, its experiences are intimately interwoven with those of its neighbour, in a way that goes beyond just coordinating to control the body.

Presumably, what makes the difference is the subcortical and callosal connections between the hemispheres. Something about the relationship they enable serves to unify the whole brain’s experiences. So far so obvious. Unfortunately, I don’t have a well-developed theory of this unifying relationship: understanding it is an ongoing project for everyone interested in the mind. (I sometimes call it ‘phenomenal binding’, but that’s just a label marking my ignorance.)

What’s distinctive about functionalist combinationism is that this unification doesn’t rule out each hemisphere also being conscious: each might be undergoing some experiences, and not undergoing some others, despite the two sets being intimately unified. In Combining Minds I try to sketch out the idea of seeing ‘phenomenal binding’ not just as a two-aspect relation (with both physical and mental aspects), but as a three-aspect relation (with physical, within-subject, and between-subjects aspects).

I’m not qualified to say which brain parts are actually conscious – that would require knowing what it takes for something to be conscious, as well as understanding the multi-layered structure of the brain much better than I (and perhaps anyone) currently do. But people do sometimes offer models on which the whole brain accomplishes some mental operation by having parts which, operating by themselves, do things that we might regard as mental operations (e.g. Selfridge 1959, Dennett 1991, Lycan 1995, Zeki 2007). People tend to assume that any mentalistic description of these parts has to be taken as in some sense metaphorical, if not fallacious. Functionalist combinationism aims to dislodge that assumption: parts of our brains being genuinely conscious is a live option, and isn’t ruled out by their being parts of us.

The reverse also holds: us being conscious doesn’t rule out a system composed of us also being conscious. One section of chapter 6 is devoted to Ned Block’s ‘Nation-Brain’ thought experiment (1978), where human beings with walkie-talkies collectively simulate a human brain. I argue that the Nation-Brain would be conscious, and go through the reasons why this seems so counter-intuitive. Of course, actual human nations aren’t organised like the Nation-Brain: are they also conscious? Like ‘which brain parts are conscious?’, this is too big a question for functionalist combinationism by itself to answer. Functionalist combinationism is just a framework in which we accept that the answer might be ‘yes’, and that this would neither call into question our own consciousness nor require any ‘extra’ consciousness above and beyond it.


References:

Block, N. (1978). “Troubles with Functionalism.” In Savage, C. W. (ed.), Perception and Cognition: Issues in the Foundations of Psychology. University of Minnesota Press.

Burke, M. (1994). “Dion and Theon: An Essentialist Solution to an Ancient Puzzle.” The Journal of Philosophy 91 (3): 129-139.

Burke, M. (2003). “Is My Head a Person?” In K. Petrus (ed.), On Human Persons. Heusenstamm: Ontos Verlag: 107-125.

Dennett, D. (1991). Consciousness Explained. Penguin Press.

Gilmore, C. (2017). “Homunculi are People Too! Lewis’ Definition of Personhood Debugged.” Thought: A Journal of Philosophy 6 (1): 54-60.

Lycan, W. G. (1979). “A New Lilliputian Argument Against Machine Functionalism.” Philosophical Studies 35: 279-287.

Lycan, W. G. (1995). Consciousness. MIT Press (first published 1987).

Merricks, T. (2001). Objects and Persons. Oxford: Oxford University Press.

Putnam, H. (1965). “The Nature of Mental States.” Reprinted in O’Connor, T., Robb, D., and Heil, J. (eds.), Philosophy of Mind: Contemporary Readings. Routledge, 2003: 223-235.

Selfridge, O. (1959). “Pandemonium: A Paradigm for Learning.” Symposium on the Mechanization of Thought Processes, HM Stationery Office.

Shoemaker, S. (1981). “Some Varieties of Functionalism.” Philosophical Topics 12 (1): 93-119.

Sutton, C. S. (2014). “The Supervenience Solution to the Too-Many-Thinkers Problem,” Philosophical Quarterly 64(257): 619–639.

Unger, P. (2006). All the Power in the World. Oxford University Press.

Zeki, S. (2007). “A Theory of Micro-Consciousness.” In Velmans, M. and Schneider, S. (eds.), The Blackwell Companion to Consciousness. Blackwell: 580-588.




6 Comments

    • Luke Roelofs

      Excellent questions! I have thought about octopuses, and honestly they sort of deserve a section in the book, but I only became familiar with some of the work on their neural organisation quite late in the project, and the book was already edging towards ‘too long’, so they didn’t really make it in beyond some passing mentions (at one point I say “I cannot here attempt a full analysis of, for example… the relationship between consciousness in an octopus’s central brain and consciousness in its arms (though I suspect that case is analogous to the split-brain phenomenon in many respects)”).

      On the Chinese Room, I think there are two different issues in play. One is that we’re very averse to ascribing mentality to obviously composite systems, especially ones containing people as parts. I think that’s an aversion we need to overcome, so to that extent I’m in agreement with Dennett in favouring the ‘systems reply’ (that the system could understand things even if the man in it doesn’t). But I also think the internal structure of how information is processed might matter independently of what pattern of outputs it produces, and while the Chinese Room gets the same pattern of outputs we do, its internal structure is wildly different – all operations go through a single laborious bottleneck. So (as with some versions of the Nation-Brain, and with the ‘Blockhead’ (https://en.wikipedia.org/wiki/Blockhead_(thought_experiment))) I’m inclined to think there’s possibly no genuine mentality there, at least of the sort we’re familiar with.

  1. Shaikh Raisuddin

    The first question is, “What is meant by consciousness?” Can we explain consciousness without a precise definition?
    The second question is, “What makes an identity?” e.g. me, the solar system, the brain, etc., so as to compare the consciousnesses of container and contained. A baby is conscious in its mother’s womb. What physically separates one identity from another? (Not separation by names.)
    The third question is, “Is there anything which is not conscious?”
    The fourth question is, “Is there anything which is NOT indivisible?”

    Consciousness is a COLLECTIVE PERFORMANCE of all parts. So it is true that subjectivity is a composite of parts for the whole.

  2. Dave

    Hi Luke,

    This is wonderful stuff, thanks! I should, and will, read your book. You say that you don’t have a worked out theory of whatever organisation is responsible for ‘phenomenal binding’. But the functional definition you give in ‘Conditional Experience Inheritance’ looks like it should be a necessary feature of whatever organisation you (or your future disciples!) go on to specify. Is this right? And, if so, can you say something more about what you think we’d need to add to this necessary feature in order to provide necessary and sufficient conditions?

    • Luke Roelofs

      Thanks Dave! So, those two things are sort of answers to different questions: the functional conditions are about whether a given subject has a given experience, while phenomenal binding is about how two experiences are related. So there’s not a direct translation to be made. Moreover, the former is focused on a sort of practical ‘getting the right overall behaviour’, more than on the phenomenology of it – two well-trained but neurologically normal people could get really good coordination in control of a single robot, without that requiring much more unity among their experiences than is common in social interactions.

      (Admittedly, the movie ‘Pacific Rim’ seems to implicitly disagree with this claim – it posits that some funky neural telepathy is necessary for coordinated control of a single robot, although that might be more strictly ‘necessary for sufficiently-smooth coordination to beat up kaiju’, a standard which arguably many individual humans are too clumsy to meet…).

      Also, just to note, the reference to ‘control, sensitivity, and coordination’ is very much a placeholder, a gesture in the direction of something that needs more work to analyse properly. In the book I suggest that the best candidate for (the physical aspect of) phenomenal binding might be something like ‘information integration’, understood in a way that owes a debt to Tononi’s IIT but doesn’t commit to his whole framework. But that’s also a placeholder, more than a proper analysis.

      • Dave

        I see – very helpful, ta! I had it in mind that the functional criterion constituted an appeal to something like Hurley’s conception of ‘intentional access’ to the contents of a state as a necessary condition for phenomenal unity, but I think your comment above has straightened me out.

