Self-Consciousness and “Split” Brains: The Duality Claims

The book argues for three duality claims: one concerning split-brain consciousness, one concerning split-brain intentional agency, and one concerning split-brain psychology generally.

Each of the duality claims amounts to a claim about personal identity. If there are two centers or streams of consciousness, there must be two subjects of conscious experience; two centers of agency means two agents; two minds means two thinking things, that is, two thinkers. Defending the duality claims thus requires two arguments. The first, substantially empirical, argument is that split-brain psychology is importantly divided along hemispheric lines, in the realms of consciousness, cognition, and the control of action. The second, substantially theoretical, argument is that such divisions in psychological activities and actions entail a multiplicity of psychological beings.

Let’s start with the overarching, substantially theoretical argument (which owes a significant debt to Shoemaker’s 1984 materialist theory of personal identity). Let us use the term thinker broadly, now, to refer to the psychological subjects and agents of mental states and activities of various kinds, so that a thinker is not just a reasoner but also a decision-maker and subject of experience and so on. What is a thinker? One possibility is that a thinker is just an animal with various (psychological) capacities, but that the individuation conditions for animals are purely biological and not psychological. If that is right, then since (I assume) split-brain subjects are unitary qua animals, they are unitary qua thinkers as well. The personal identity debates about split-brain subjects have always been in-house debates between proponents of psychological views of personal identity. It’s beyond the scope of this post, or even really the book, to defend psychological accounts of personal identity against animalist accounts; I’ll just note that my own reading and observation have led me to conclude that it’s difficult to be a thorough-going animalist in practice.

On a psychological account of personal identity, thinkers will be (at least substantially) causally defined. That is, to be a thinker is to be a thinking system, and the individuation conditions for thinkers will be the individuation conditions for thinking systems. Well, what is a thinking system? It’s a system that thinks, where thinking itself is causally defined. That is, a thinking system is one that operates in a certain way, or one whose causal activities are of such-and-such types. Many or most of these activities (on some views, all of them) will be defined in terms of the internal operations of such systems: what its internal states are and the way those states interact with each other. So, thinking is a collection of kinds of causal activities, and to be a thinker is just to be a system that thinks.

Split-brain surgery changes the causal activities of the brain. The 2-thinkers claim amounts to the claim that, after split-brain surgery, the activities that constitute thinking operate within each hemisphere system—within R and L, as I call them—more nearly than they do within the split-brain subject as a whole, S.

I’ll pause to note that I don’t take R and L to be the RH and the LH, respectively. I rather take R to be the entire split-brain subject minus the LH and L to be the entire split-brain subject minus the RH. Technically, even this may be a simplification—I doubt their boundaries can be quite so neatly delineated in gross neuroanatomical terms—but qua candidate thinkers, R and L are an improvement over the RH and the LH. Among other things, in terms of their intrinsic structure and capacities, R and L are each equivalent to things that we know to be thinkers: human beings who have undergone (respectively) left hemispherectomy (removal of the left hemisphere) or right hemispherectomy (removal of the right hemisphere). One cannot say this about a mere hemisphere.

The 2-thinkers claim thus amounts to the claim that after split-brain surgery, the causal activities that constitute thinking operate within each hemisphere system (within R and within L), though naturally they will thereby also operate literally within the split-brain subject as a whole, S, since S is composed of R and L. R and L are thinkers more so than is S, however, because if you look at the way RH mental states interact with LH mental states, that interaction is (generally) not of the kinds that constitute thinking. Intrahemispheric interactions are of the causal kinds known as thinking.

The next step of the overarching argument, then, is to specify the kinds of causal activities that constitute thinking. The book does this only in general terms, by noting certain very basic and reliable generalizations about e.g. how percepts give rise to beliefs, how intentions are formed, and so on. These are generalizations about how mental states can interact with each other. Our very concepts of percepts, beliefs, intentions, and so on, are grounded in these kinds of generalizations. For instance, beliefs are caused by percepts, and not vice versa. Of course this is just a generalization; believing can cause perceiving, too. But we should distinguish two ways in which believing can cause perceiving.

First, believing can cause perceiving directly: I believe that there is someone following me, as a result of which I come to hear someone following me (though no one is). Assuming that this is possible (that it’s not just the fear evoked by my belief that causes the misperception), all I’ll say is that such cases must be to some degree deviant. To discover that beliefs can cause percepts just as much as percepts can cause beliefs would be to discover that, well, there are no percepts or beliefs.

Second, believing can cause perceiving indirectly: I believe that my hairbrush is under the bed, which leads me to look for it under the bed, as a result of which I come to see my hairbrush under the bed. This is a non-deviant way in which beliefs cause percepts; indeed, if you pause to think about it, it is obvious that I am constantly perceiving things because of some earlier contribution from my beliefs. Yet when I initially said that percepts cause beliefs, and not vice versa, this second class of “exception” to that general rule probably didn’t occur to you—though it is hardly exceptional! Why is that?

It is because, when we think about mental states of various kinds, we think of them using causal generalizations that concern direct mental state interaction. Direct mental state interaction is interaction between mental states that is not mediated by action and perception. In the hairbrush case, the interaction between belief and percept is so mediated, and is thus indirect. In the book, I suggest that indirect mental state interactions are too heterogeneous to ground generalizations about kinds of mental state. After all, indirectly, beliefs cause percepts just as much as percepts cause beliefs. When we assume that percepts cause beliefs, and not vice versa, what we really mean is that percepts directly cause beliefs. To the extent that the identity conditions of percepts and beliefs are grounded in causal generalizations about percepts and beliefs and how they interact, then, they’re grounded in a less heterogeneous set of causal generalizations: generalizations about how they directly interact.

At first pass, then, the empirical basis of the 2-thinkers claim is this: that after split-brain surgery, direct mental state interaction is substantially confined to intrahemispheric interaction, while interhemispheric mental state interaction is predominantly indirect. Recall the example, from yesterday, of how you could get the split-brain subject (or, L) to finally say the word “Key”: by letting him (or, L) see that he (or, R) had manually selected the image of the key. And again the theoretical basis of the 2-thinkers claim is to argue that thinking—perceiving, believing, decision-making, etc.—consists of causal activities whose identities depend upon their capacity to directly interact. Thus thinking is revealed to operate more nearly within each hemisphere system after split-brain surgery than across such systems.

Shoemaker, S. 1984. Personal identity: A materialist’s account. In S. Shoemaker and R. Swinburne (eds.), Personal Identity. Blackwell.

14 Comments

  1. Luke Roelofs

    Hi Elizabeth,
    Great to see these posts about the book. I was wondering if I could ask a couple of questions about how to draw certain boundaries.

    One is about the notion of direct and indirect interaction. Obviously I do get the intuitive distinction at work here, but it seems hard to spell out exactly. You say indirect interactions are “mediated by action and perception”. I take it that ‘action’ here is meant to exclude mental actions, like focusing attention, trying to imagine or remember something, etc., because including those would make too many mental-state interactions ‘indirect’. (Likewise for ‘perception’ if we think of introspection as perception-like.) Does it mean something like ‘muscular action’? But then we might worry about various sorts of ‘embodied cognition’ processes, where the musculature and other non-brain body parts seem to play a key role in some mental process (e.g. my emotions are recognisable by me only after eliciting facial or endocrinological changes that I then somatosensorily detect, or my larynx is used to understand speech or talk to myself).

    The other question is about the boundaries of the two subjects, R and L. You say that you “doubt their boundaries can be quite so neatly delineated in gross neuroanatomical terms” as to call each one ‘the whole person minus the other hemisphere’. I’m wondering how you think of the boundaries being drawn? I get the impression you have in mind something like ‘those body parts which are under the subject’s perception and control’, so that if LH couldn’t control the right foot at all, then L might not include the right foot as a part? But then it’s not clear why my hair or nails count as parts of me… I recognise these aren’t questions that it’s especially pressing on you to answer, but I’m interested in what you think.

    A third, related question: is the relationship between L and the LH analogous to the relationship between me and my brain?

    • Elizabeth Schechter

      Hi Luke,
      Thanks for these three great questions. I’ll respond to them in reverse order.

      First, the relationship between L and the LH is more closely analogous to the relationship between me and my cerebral hemispheres. I say that because… the LH isn’t the only part of L’s brain, in the same way that my cerebral hemispheres are not the only parts of my brain.

      Second, you ask how the boundaries of R and L are to be delineated more precisely. You mention the (non-neural) body, and point out that we stand in different functional relations to its different parts, e.g. fingernails themselves are not innervated. (Well, I guess because the nail bed is innervated, a person can still feel their own nails e.g. getting ripped off, so that’s a straightforward sense in which they’re theirs–but someone might have total paralysis and lack of sensation in a leg, and yet the foot at the end of that leg would still count as theirs–I think that’s the same point.)

      I think it has to be true that different parts of our bodies will count as ours in different senses–that’s part of what I take to be suggested by your example. And this does mean that in cases of co-embodiment (where multiple thinkers at least partially share a body or have bodies that are physically bound), there will be bodily parts that are parts of one thinker’s body in one sense but parts of the other thinker’s body in a different sense. We can see this in cases of conjoined twins. On the one hand, if there’s a twin who can (directly) move a limb while the other cannot, there’s a sense in which it’s the first twin’s limb and not the second twin’s limb. On the other hand, if the limb becomes badly infected–they will both die. So in that sense, the limb belongs to both of them. I think similar remarks will apply to the split-brain case. If it were only R that could move or feel the left hand (which is not the case, but suppose that it were), then that hand would for many purposes count as R’s hand, e.g. when trying to understand the motives behind left-handed behavior! On the other hand for other purposes the hand would count as L’s too; e.g. cuffing the left hand to a table would count as restraining L in a room, too.

      I’ll note quickly though that I had actually meant to allude to how to delineate R’s and L’s brains. For it’s possible that certain *non-cortical* neural structures are parts of R rather than L (in one of the relevant senses–again e.g. an infection anywhere in the brain imperils both, so in that sense any part of the brain belongs to either of them). In fact it’s possible that some parts of the LH belong to R as much as to L, and in the same sense; in principle it’s even possible that some parts of the LH belong to R more so than to L! I’m interested in delineating R and L in psychobehavioral terms, so it’s an open question what the relevant psychobehavioral relations are and which thinker bears them to or because of which neural systems.

      Okay so for the first question: this is really important because, yes, it’s possible as you suggest that in the case of emotions, for instance, ordinary emotional experience always involves indirect mental state interaction. E.g. I see a bear, certain somatic-physiological changes are initiated by my amygdala, these somatic changes are represented in the insula, that representation + possibly the changes themselves + whatever else = fear. But even setting aside the proper theory (and neural basis) of emotions—look, obviously, indirect interactions between mental states happen all the time even *within* a mind. So the point can’t be that when two mental states interact indirectly, they aren’t of the same mind. It’s not even prima facie evidence that they aren’t of the same mind, since many types of mental state aren’t supposed to cause others directly. (E.g. the desire that X be the case is not supposed to directly cause the belief that X is the case.) The question is whether mental states are capable of interacting in the ways characteristic of the kinds of mental state that they are. So the basis of the 2-thinkers claim isn’t that e.g. RH desires that X be the case fail to directly cause LH beliefs that X is the case. The basis of the 2-thinkers claim is that e.g. RH perceptual experiences of X fail to cause LH perceptual judgments about X.

      Another important point to keep in mind is that really the 2-thinkers claim rests upon showing two things: first, lack of the relevant sort of interaction between two systems; second, that each of the systems is even a candidate thinker with respect to its intrinsic structure and capacities. Let’s take an extreme example for clarity: suppose that every experience of fear is constituted by a visual percept, a somatic-physiological response, a somatosensory representation of that response, plus a thought about the cause of that response. This would make every emotional experience involve indirect interaction. But now we can ask: does this mean that every emotional experience involves two (or more) thinkers? So we look at that visual percept and that thought, which by stipulation for the purpose of the example can interact only indirectly. Now in order for the visual percept (M1) and the thought (M2) to be mental states of distinct thinkers T1 and T2, respectively, it first needs to be the case that, were T2 not to exist, T1 would still meet the criteria for being a thinker (so the system needs to be complex enough—I take it that the system can’t have visual percepts only, but must also be capable of desiring, deciding, etc.), and similarly for T2; and then second it needs to be the case that T1’s states cannot interact with T2’s directly in the way that they can with T1’s, and similarly for T2’s states. But once we’ve located M1 within a system T1 with the relevant complexity to even be a candidate thinker (i.e. again T1 can’t be *just* a visual system)—then it is far from clear that in the non-split case, T1’s mental states are going to interact with T2’s only (or even primarily) indirectly. For instance, T1’s visual percepts will reliably give rise to T2’s perceptual judgments.

      • Luke Roelofs

        Thanks, that clears things up a lot. So it’s not that direct interactions are within a mind and indirect interactions are between minds – indirect interactions can be either? It’s that for two states to be within a mind *requires* that they interact in a certain specific set of direct ways. So this leaves open that there might be direct interactions between distinct minds, if they aren’t systematic enough to validate all the defining laws?

        I like the idea that different body parts might belong to different thinkers in different senses, based on different sorts of functional relation to the thinker. But (just to indulge the pedantic metaphysician who lives inside me) it seems like that requires a prior idea of the thinker’s identity (so that we can evaluate the relations that obtain between it and various body parts), and if you thought of ‘what are the parts of something’ as intimately connected to ‘what is this thing, what individuates it’, (e.g. if you had a very metaphysically basic, classical-mereology-inspired, sense of ‘part’ in mind), you’d want to ask ‘well what parts does the thinker have according to that prior idea of its identity?’ (Perhaps another version of this question might be ‘what are the essential parts of, e.g., LH?’) I sometimes get the feeling that on the psychological theory of personal identity, thinkers don’t really, strictly, have any physical parts, only physical structures that realise or support their psychological structure at one time or another. Would that fit with how you’re thinking of thinkers?

        (It makes sense to not analogise the LH to a brain, given how much of the brain is subcortical. But then what is the counterpart of a brain – the neural system that L has which underlies their thoughts the way my brain underlies mine? Apologies if this is clarified in the book, I’ve only been able to find discussion of hemisphere systems, which seem to be L and R (i.e. things with hands, feet, etc.).)

        • Elizabeth Schechter

          Hi Luke,

          Thanks for the follow-up!

          L’s and R’s brains: at first pass, the thought is that L’s brain = the entire brain minus (excluding) the RH, while R’s brain = the entire brain minus (excluding) the LH. So, L’s brain is, in terms of its intrinsic structure and capacities, like the brain of someone who has undergone right hemispherectomy (removal or functional incapacitation of the RH); R’s brain is, in terms of its intrinsic structure and capacities, like the brain of someone who has undergone left hemispherectomy (removal or functional incapacitation of the LH). Again this is quite likely *ultimately* too crude, but that’s the rough idea.

          Direct/indirect distinction: yes, as you say, indirect interactions occur both within and between minds. In fact if you construe “mental state interaction” broadly enough, most “intra-mind” interaction is also indirect. (I’m having all the experiences I’m currently having because of life decisions I made years ago, etc.)

          Now you ask whether the claim is that “for two states to be within a mind *requires* that they interact in a certain specific set of direct ways”–I hesitate here because… there might be various random reasons why two particular, token, co-mental states can’t interact with each other in whatever way–but *one* reason might be that they belong to different minds, that they are each part of a distinct system whose states *generally* lack the capacity to interact with those of the other in the necessary ways. Does that make sense?

          And yes I like the way you put it–that “there might be direct interactions between distinct minds, if they aren’t systematic enough to validate all the defining laws”. Obviously in the ordinary case–the case of thinkers with wholly discrete bodies and brains–direct mental state interaction will be exclusively “intra-mind.” There is no way for my mental states to interact with yours other than by one of us doing something and the other sensing or perceiving it. But I don’t think that *any* amount of direct interaction between mental systems makes them constitute a single mind. One case I discuss in the book is that of Krista and Tatiana Hogan, the conjoined twins whose brains are joined at the level of the thalamus. There is some suggestive evidence that if e.g. Krista closes her eyes, and Tatiana looks at something, Krista can have a visual experience of the object. Now this may or may not involve direct interaction between their mental states per se. (Maybe the girls in some sense share eyes; maybe as long as Tatiana’s eyes are focused on something, Krista can visually experience it, whether or not Tatiana has a visual experience of it.) But it seems possible that direct mental state interaction is involved–that Krista’s visual experience of what Tatiana is seeing depends upon Tatiana’s visual experience of it, for instance. I take it that we don’t have to wait and see which of these is the case in order to know whether or not they are distinct thinkers. They are distinct thinkers, though in a sense not wholly discrete minds if there is some direct interaction between their minds.

          Your second question–yes, I have thought about this, and I’m still not sure what I want to say. For I think that even if you deny that thinkers have physical parts (strictly speaking), you run into a similar kind of issue–I mean how do you get the preliminary identification and distinction of thinkers off the ground? I guess I don’t *think* that the split-brain case will necessarily be wholly unique though in involving a kind of back-and-forth between preliminarily identifying thinkers on one kind of basis and then refining that identification on another kind of basis. That is, there will be some kind of preliminary identification of a thinker individuated on the basis of its physical boundedness, but this individuation is preliminary and can be revisited on the basis of the psychobehavioral evidence–which is itself interpreted partly on the basis of growing knowledge of the thing’s physical-structural properties, etc.

          Side note–I think the standard thought is that in the split-brain case, the only candidate thinker we posit, preliminarily, is the animal–individuated on the basis of purely physical properties–and then it’s the psychobehavioral evidence alone that makes us wonder whether there are in fact two thinkers constituting the animal. One thing I argue in the book though is that it’s not clear that this is *in practice* exactly how it goes. After all, no one ever learns about the split-brain behavioral evidence *without simultaneously learning about split-brain surgery*–and any description of the surgery naturally mentions the new physical separation *between RH and LH*. These distinct physical objects are thus offered up as *candidate* distinct thinkers, arguably. In other words, the first stage in individuating these two co-embodied thinkers involves identifying two things that are distinct qua purely physical objects. Does that make sense?

          • Luke Roelofs

            “There might be various random reasons why two particular, token, co-mental states can’t interact with each other in whatever way–but *one* reason might be that they belong to different minds, that they are each part of a distinct system whose states *generally* lack the capacity to interact with those of the other in the necessary ways. Does that make sense?”

            I think that makes sense – it sounds like if two states within a single mind don’t interact in the kind-defining direct ways, that demands explanation, and should generally be anomalous rather than typical (as you say about perceptions and beliefs – sometimes I might fail to believe my perceptions, but if I never do they’re not perceptions). Though just to check – “one reason might be that they belong to different minds” is meant to be compatible with thinking that their belonging to different minds is *constituted* by facts about interactions?

            “in the ordinary case–the case of thinkers with wholly discrete bodies and brains–direct mental state interaction will be exclusively “intra-mind.””
            Right. But maybe we would change that if we developed brain-interfaces enough that activity in one brain could be detected, transmitted to a device on someone else’s head, and then modulate the latter’s brain activity. If it’s not experienced by the two people as action or perception, am I right in thinking that might be a way to establish direct interaction?

            What you say about preliminary identifications and their refinement sounds very plausible!

  2. Julia

    Your claim that each thinker is the whole subject minus the other hemisphere made me wonder about how the two thinkers claim relates to emotions. It makes sense to me how the two thinkers can each have their own beliefs, draw their own inferences, etc. But suppose you showed one hemisphere something scary, but not the other one. If fear includes some kind of bodily component, would both thinkers feel scared? Or would only the thinker who saw the scary thing feel scared, because the other one wouldn’t have all the components necessary for properly feeling the emotion? I guess this might be difficult to test, but it seems like an interesting question.

    • Elizabeth Schechter

      Thanks Julia for the great question!

      You know, interestingly, neuropsychologists seem to have accepted pretty early on that emotional experience remains more “unified” after split-brain surgery than does visual experience–and thus in my view it wasn’t studied as thoroughly or systematically as, at least, a philosopher might have hoped. There was a lot more work on interhemispheric transfer and interaction in the realm of vision than in the realm of emotion. Of course, vision experiments dominate in experimental psychology generally.

      From what I can tell there are three reasons why neuropsychologists accepted pretty readily that emotional experience remains unified after split-brain surgery. First of all, the surgery sections only cortical structures, and the circuitry of emotion is substantially subcortical. Second of all, there were several experimental results in which, yes, a stimulus of a sort one might expect to elicit amusement or anxiety or embarrassment would be presented to the RH, and then the subject would voice (via the LH) some sort of amusement or anxiety or embarrassment, without being able to say what the stimulus was. Third–and I’m being more speculative here–we also tend to think of emotional experience as bodily and perhaps also somehow messy and amorphous, rather than crisp and divisible. It’s thus probably pre-theoretically intuitive that emotional experiences should remain unified after cortical surgery.

      If we assume unity of emotional experience and seek to explain it in terms of the intactness of subcortical structures–it’s possible that one of their primary roles is to sustain direct interhemispheric sharing or transfer of emotional experience. But it’s also possible that their role is rather to initiate the kinds of bodily responses you mention, responses that are then (in the relevant sense) independently perceived and interpreted by the two cortical systems.

      However I will say that in researching the book it came to seem less clear to me that there is such unity of emotional experience after split-brain surgery. I mean again there were fewer studies here than one might want. But there were cases in which it wasn’t clear that both hemisphere systems were experiencing similar emotions. And in the cases in which they did seem to be undergoing similar emotional responses….. The stimuli presented to one hemisphere were of really powerful sorts that actually evoked pretty overt behavioral responses–think sitting back sharply in your chair, or laughing out loud, as opposed to a mere quickening of the pulse. In such cases it would be natural to think, “I saw something aversive or alarming” or “I saw something funny.” And in fact interpreting these behaviors as expressions of alarm or amusement doesn’t even require co-embodiment: if I knew that you were going to be visually presented with something, and I watched you sit back sharply in your chair (jerking away from the stimulus), I would think, “She saw something aversive or alarming”; if I heard you laugh, I would think, “She saw something funny.” So a lot of the results seem explicable without even appealing to co-embodiment per se–R is shown something, R laughs, the subject is asked “Why did you laugh?”, L says, “A funny picture, I guess.”

      There must be more intimate interhemispheric interaction in the realm of emotional experience than this–depending upon more subtle physiological changes, hormonal/endocrine responses, etc.–but the most famous examples of unity of emotion after split-brain surgery seem explicable in terms of more overt behavioral responses initiated by one hemisphere system but simultaneously perceived and interpreted by the other.

      • Julia

        Thank you for the super thorough response. This is really interesting. I didn’t realize that a lot of the structures that are thought to be responsible for our emotions are left intact by the surgery. Although I guess I now remember that maybe there are cases where one hemisphere is bored or angry and the other one isn’t? I seem to remember a case in which a split-brain patient said that when he wanted to read he had to sit on his hand, because his hand would try to take the book away from the other hand to make him stop reading. Do I remember this correctly? In that case, the non-speaking thinker seems bored with or angry at what the other one is doing.

        • Elizabeth Schechter

          Hi Julia,
          Thanks for the follow-up! You’re right that there are actually quite a number of cases in which one hemisphere system seems annoyed by something the other is doing. I hadn’t thought about these cases in those terms, but there are for instance a number of cases in which L expresses frustration with something R has done.

          (Obviously it’s harder to tell when R is frustrated, though you’re right that if pushing the reading away is an intentional action, presumably it reflects something like boredom or frustration.)

          Technically in those cases you don’t know that L’s frustration hasn’t simultaneously made R frustrated, i.e. there could still be a kind of unity in emotional experience. That said…. there would be a certain amount of disunity involved, too, since L attributes its frustration to R having done whatever R did, and R won’t attribute its frustration (their frustration?) to that.

  3. Eric Mandelbaum

    Hey Lizzie,

    Thanks for this really interesting post. I have some (perhaps) exotic questions. Let’s make a distinction between something like cognitive style–the type of thinker I am–and general psychological laws. I take it that if L and R are both thinkers then they are both covered by the same psychological laws. So if it’s a law that all beliefs act in a specified way, then both L and R have beliefs that act in that way (I assume anything that’s a thinker will have beliefs, but maybe this assumption is no good–I’d be into hearing arguments against it for sure). These beliefs will presumably have some different contents–they will be different beliefs (for they are different thinkers, by hypothesis)–but their functional profile should be the same (as it is for beliefs across people). This seems to be the spirit in which you discuss “generalizations” above. So far, I’m on board with everything you say (though I’m more skeptical about the possibility of direct interaction).

    What I’m wondering is whether cognitive style differs between L&R. Take cognitive style to hold over a range of ways one thinks–cognitive style is like one’s thinking personality. You can have a high need for cognition, or a high propensity to mind-wander, or be a cognitive miser, or be a yea-sayer, or a nay-sayer, or be inclined to rumination, or have a very specific type of topic that you’re apt to ruminate on, or… many other more interesting things that people more creative than I would add to the ellipses.

    My q: do we have any reason to expect L&R to differ in style? I imagine the inputs to thought may be drastically different for L&R, but that shouldn’t matter so much, or so it seems to me. If I go blind tomorrow, the inputs to my thought would change, but my cognitive style would (presumably) be unaffected. Similarly, I imagine that though the inputs to L&R change (e.g., no more input from one of their eyes), it shouldn’t change their cognitive style.

    Imagine they don’t differ in style. Then L&R seem, at least at first blush, to be very similar thinkers. Perhaps they would be identical thinkers if you individuate thinkers thinly enough to exclude inputs to thought. On the other hand, say L&R do differ in style. Then perhaps hemispheric functional specificity can help us understand what factors go into dictating cognitive style.

    Anyway, I know more or less nothing about the neurological factors that affect cognitive style, so really anything you have to say will be illuminating. I look forward to reading the book (but embarrassingly haven’t yet!), so if any of this is addressed there please let me know.

    Cheers (and thanks for doing this),
    e

  4. Thank you, Eric!

    It’s funny; the first time I was ever asked a question about R and L and cognitive style was just a couple of weeks ago, at the split-brain debate at NYU–so the topic is suddenly in the air (at least in NYC!).

    I would certainly expect there to be differences in cognitive style between R and L. Now… I have two qualms about saying this.

    The first qualm is that I think there has been a historic tendency to exaggerate hemispheric differences. This tendency seems partly rooted in a larger tendency to dichotomize generally, plus other interesting elements of our thought about human beings specifically (e.g. the tendency to view people as having public vs. private selves). It’s a topic I engage with in the last chapter of the book, where I note that even laypeople who haven’t heard of the split-brain phenomenon have heard the “right brain/left brain” pop psychologizing that the split-brain phenomenon got very mixed up with, in the popular media, in the 1970s. And I think there have been clear cases in which bold claims were made about a hemispheric difference in cognitive style and it later turned out that something a little less subtle or sexy than style was involved. (In particular, there have been a couple of times when it was claimed that the LH has a tendency to cognize in some uniquely human way–e.g., to theorize rather than simply describe–and then, once a non-verbal test of that capacity was developed, the RH was found to have the same tendency.)

    The second qualm is that I haven’t really *deliberately* looked into the topic of hemispheric differences (though obviously one can’t avoid them in studying the split-brain phenomenon). So one could question whether my confident expectation that there are hemispheric differences in cognitive style is in fact adequately informed or is instead the product of the same tendencies I just critiqued in the previous paragraph! Still–I take it that there are clear hemispheric differences in cognitive capacity (e.g. with respect to competence for syntax), and it seems clear that these will either affect or directly encompass differences in cognitive style.

    The reason I haven’t directly looked into the issue is that I don’t think it’s relevant to the “how many minds?” issue. (Which doesn’t mean that the former issue is less interesting; indeed, perhaps it’s more interesting, insofar as it may bear on questions of cognitive style generally, that is, even in non-split subjects, as you suggest.) A number of philosophers argued or assumed that the 2-minds or 2-thinkers claim could only be true if the hemispheres sustained something like substantially different personalities. I argue quite the opposite: the hemisphere systems could be exactly similar and *still* be distinct thinkers: the relevant question is how their mental states interact with each other.

    Actually I am sure that some readers will think that I don’t pay *enough* attention to hemispheric differences in the book. So in the latter half of the book I argue that although the split-brain subject, S, is composed of two thinkers, S is nonetheless one person. Well, one way to defend that claim would be to argue that R doesn’t have what it takes, cognitively, to be a person. Maybe for instance persons need to be able to weave a narrative–in words–in a way that R is incapable of doing (because of the aforementioned incompetence with respect to syntax). Now I wanted to offer what I see as a more interesting version of the 1-person argument, according to which, even if R and L each *intrinsically* have what it takes to be a person, they *nonetheless* fail to be distinct persons because of the way they relate to each other. For the purposes of that argument, it’s helpful to ignore or abstract away from R’s obvious linguistic deficits. But one could object that I am surely abstracting too much.

    But I would need to think more specifically about how, as you say, “hemispheric functional specificity… [might] help us understand what the factors are that go into dictating cognitive style”. One of the funny things about split-brain research is that it’s difficult to use very straightforwardly to draw inferences about non-split psychology; you need a functional theory of the corpus callosum, too.