Qualing the EMT

When I first heard about the extended-mind thesis (EMT), some time in the mid-2000s, I was instantly intrigued—mainly because it feels so intuitively right. Driving my car, I often feel that I am my car, or that my car is me. Driving a rental car, especially as I pull it out of the rental company’s lot, I always feel that what I’m maneuvering is not my own body but a big piece of alien machinery. It feels to me as if I not only write better but think better when I’m working on my own computers or mobile devices than I do on an alien machine. My mind or my soul or my bodily proprioception has extended to incorporate the “machines” (it seems heartless to call them that) with which I interact intimately every day.

But when I started reading the EMT (and anti-EMT) literature, I was nonplussed. The thrust of the pro-EMT advocacy was that mind really extends: it’s not just a feeling, not just a quale, but a literal or material extension. As Andy Clark writes in the preface to Supersizing the Mind: “Such body- and world-involving cycles are best understood, or so I shall argue, as quite literally extending the machinery of mind out into the world” (2008: xxvi). And the anti-EMT literature basically insisted that if it can’t be shown that mind literally or materially extends—if it only feels as if it does—then mind doesn’t extend.

This seemed like a wrong turn to me. What is “the machinery of the mind”? Obviously it’s a metaphor: not only is the mind not literally mechanical; the brain isn’t either. Never mind, even, how one literally extends a metaphor; what is the literal reality figured by the metaphor? Cognitive functions, presumably—not synaptic action, neurotransmitter uptake, etc. But what does it mean for cognitive functions to extend literally? Given that Clark is already rather egregiously misusing the adverb “literally” in applying it to a metaphor, should we assume that he is using the word colloquially, the way a lot of people do, as a kind of intensifier, to signify his strong affective commitment to this idea? “After pulling an all-nighter, I arrived at the exam literally dead”—“literally dead” there meaning not “lifeless” but “more than tired, more than exhausted: really really really tired.” We could surmise that Clark means “quite literally extending the machinery of mind out into the world” in much the same spirit: “it’s not just that it feels like my mind is extending out into the world; it really really really feels like it!”

But then he goes on to say that this “literal” extension of “the machinery of the mind” builds “extended cognitive circuits that are themselves the minimal material bases for important aspects of human thought and reason”: material bases. What are the material bases of mind? Neurons? Surely “cognitive circuits” is another metaphor (cognition as electricity), but somehow, through the intensification achieved through the use of the adverb “literally,” it gets transmogrified into “material bases.”

And the problem is that anything short of material extension just won’t cut it. If mind doesn’t materially extend, it doesn’t extend. On that point Clark agrees with his critics. The mere feeling that mind extends isn’t good enough. Relying on feeling is “falling into the qualia trap.”

Despite my initial enthusiasm for the EMT, then, the utter implausibility of Clark’s materialist arguments for it inclined me to join the anti-EMT group—except that their arguments seemed no more persuasive than Clark’s. Fred Adams and Ken Aizawa, Clark’s most persistent opponents, weren’t content to say “if nothing material extends, mind doesn’t extend”; they had to drag Jerry Fodor’s LOTH and Fred Dretske’s naturalized semantics into the fray, and claim that “thoughts have nonderived semantic content, but that natural language has merely derived content” (2008: 34). As a reader steeped in a very different philosophy of language—the ordinary-language philosophy of Wittgenstein and Austin, the work of Mead, Bakhtin, Burke, Derrida, etc.—I again found this notion implausible to the point of absurdity. Speaking a language doesn’t help us think? Thinking itself isn’t a form of internalized conversation, steeped in transcraniality? Speech acts aren’t performed by whole groups?

My book Feeling Extended: Sociality as Extended Body-Becoming-Mind, recently published by MIT Press, is my attempt to navigate a middle course between the implausibilities of Clark’s Scylla and the absurdities of Adams and Aizawa’s Charybdis. My original subtitle was in fact Speech Acts, Empathy, and Face as Extended Body-Becoming-Mind, but I decided that that was at once too long and not long enough: it should have included ritual as well. “Sociality” is a bit bland; but it covers the territory, and is a long-neglected subfield that is beginning to get some traction in cognitive philosophy (see e.g. Sterelny 2012).

My core argument in the book looks like this:

[1]  Qualia are primary (extended mind feels extended) and shared (reticulated through the group) (§0.2 and §1.0, Chapter 4, Appendix)

[2]  Cognition is internalized conversation: intracraniality is mostly transcranial (§0.3 and §2.3)

[3]  Even verbal labels emerge out of preconative affect (§0.4 and §2.3)

[4]  Language is not all cognitive labels: speech acts are conative force (§0.5, §3.1.2, §3.2, §3.3)

[5]  Indirect speech acts can be preverbal (§0.6 and §3.4)

[6]  Empathy, face, and ritual are managed through affective-becoming-conative communication (§0.7, Chapter 5)

[7]  The extended mind is actually an extended body-becoming-mind (§0.8)

Note the emphasis on affect and conation: they are my attempt to map out the middle ground that Clark neglects to explore between moving bodies and extending minds. If it only feels as if mind extends, but the feeling that mind extends can itself be shown not only to extend but to exert conative pressure on others to organize their behavior around group norms, then what Clark derogates as “the qualia trap” may in fact be the solution he has been seeking.

The book also reprints an article I published online in Minerva a few years ago, entitled “Liar-Paradox Monism”: http://www.minerva.mic.ul.ie/Vol14/Monism.pdf.

References

  • Adams, Frederick, and Kenneth Aizawa. (2008). The bounds of cognition. Oxford: Wiley-Blackwell.
  • Clark, Andy. (2008). Supersizing the mind: Embodiment, action, and cognitive extension. Oxford and New York: Oxford University Press.
  • Sterelny, Kim. (2012). The evolved apprentice: How evolution made humans unique. Cambridge: MIT Press/Bradford.

24 Comments

  1. Hi Douglas,

    Perhaps I’m naive, and this is certainly not an area I know a whole lot about, but I don’t see why it is absurd to say that the mind literally — literally literally — extends in some cases. If I use a pencil and paper to assist my thinking, can’t I say that I am using them as an auxiliary form of information storage and symbol manipulation, and that those are cognitive operations? As I understand it, that’s the sort of thing that Andy Clark was saying. I can see a number of ways in which that approach might lead to trouble, but I can’t see anything obviously absurd about it.

    Best regards, Bill

  2. Douglas Robinson

    Bill,

    Thing is, “extending” is a metaphor too: physical things literally extend; mind only metaphorically extends. When Clark says that the machinery of mind extends into new cognitive circuits–invoking three metaphors, “machinery,” “extends,” “circuits”–and then wants to insist that all that happens LITERALLY, I want to reply that that’s just a plain category error. Or, at best, it’s a description of an event that we simply don’t understand.

    The problem as I see it is that it’s never clear from the EMT debate what mind is. Clark and his opponents agree that it’s a collection or a channel of cognitive functions–but what does that mean? Or rather, what is it precisely about that conception of mind that plausibly allows for extension?

    I say that mind is qualia, and therefore that saying that mind extends is saying that qualia extend. I offer what I take to be a plausible neurological explanation for this extension: extension is a metaphor for a simulation created by what Antonio Damasio calls the “as-if body loop,” which passes through the mirror-neuron system. When my autonomic nervous system simulates qualia produced in and by (and as) your mind, it seems reasonable to trope that simulation as an “extension” of your mind.

    But then Andy Clark disallows qualia. That’s my problem with his argument. If you disallow qualia, you disallow the only plausible explanation for how mind actually might be troped as extending. He wants to disallow qualia because then it seems like he’s saying “my mind doesn’t REALLY extend; it just FEELS like it does.” But he doesn’t know where else to go with his theory. He doesn’t know another way of understanding mind so as to render it plausibly extendable–so he retreats into functionalist analogies. My qualia-based model solves that problem. But in order for Clark even to begin to consider my model, he has to grant that qualia are not just solipsistic wisps, but may actually be a transferrable channel of understanding and interpersonal pressure.

  3. Hi Bill, I wonder why you think talk of circuitry is metaphorical? The Hodgkin-Huxley equations really do treat neurons as circuits (see here: http://en.wikipedia.org/wiki/Hodgkin–Huxley_model ). These can be built into larger circuits that do more complicated information processing. Also there are people who think that the brain is literally mechanical (in the sense of employing mechanisms). The idea is supposed to be that anything that functions in the same way would be doing the same job as this neuron. The extended mind thesis is that there are things outside the skull that can do the same job as some functionally equivalent set of neurons, and so the external thing has become part of the mind. You have a system in the same functional state as someone who is doing it all with neurons, but in this case part of the realization base is outside the head. I don’t think I see any metaphors here.
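
    For what it’s worth, the standard textbook form of the Hodgkin–Huxley membrane equation makes the circuit reading explicit: the membrane is modeled as a capacitor in parallel with voltage-dependent ionic conductances, gated by the variables m, h, and n:

        C_m dV/dt = I_ext − g_Na m³h (V − E_Na) − g_K n⁴ (V − E_K) − g_L (V − E_L)

    Each term on the right is literally a current through a conductance in an equivalent electrical circuit, which is why the model’s defenders insist no metaphor is involved.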

    This connects up with some of the other things you say. I have always sort of thought that people like Chalmers (and maybe Clark) are committed to extended qualia. If you think that whenever a certain function is being computed then a basic law of nature makes it the case that a certain kind of qualia is instantiated then it should not matter whether this function is computed entirely in the head (in the same way it doesn’t matter whether it is computed in a brain or in a silicon computer). That would not be a case of it merely feeling as though the mind were extended but a case where it feels that way because it is that way.

    • Douglas Robinson

      Hi Richard,

      Doug here. We could argue all day over metaphors; to my mind a machine is mechanical, and anything that is not a machine is only mechanical in a displaced (metaphorical) sense. Same thing with circuitry. A circuit is actually literally a circle, or other geometric shape that closes the loop. An electrical circuit has to close the loop to function; neural systems don’t. Hence, they are only metaphorical circuits.

      The functionalist argument for extension is one that I have very little patience for, sorry. I think it’s a fall-back move: if you can’t show that mind actually extends, you argue that an inanimate object that you’re using as part of your embodied cognition is functioning like a part of your mind, therefore IS part of your mind. If qualia can be shown to “extend”–to be shared transcranially–why fall back on that kind of indirect argument?

      As for Chalmers, I’ve often thought that he is probably open to extended qualia as well; but Clark, no. He has made it very clear that he’s going to stay away from the “qualia trap.”

      “It feels that way because it is that way”: the problem is the epistemological one. How would you KNOW that it “is” that way? Aren’t you arguing circularly, suggesting that it probably is that way BECAUSE it feels that way?

      Doug

  4. Ken

    “Thinking itself isn’t a form of internalized conversation, steeped in transcraniality?”

    There is some evidence that humans can think without internalized conversational capacities. Language-less adults are not human vegetables. They appear to think. But, there also seems to be good reason to think that they do not have internalized conversational capacities, because they do not have external conversational capacities. Maybe this is wrong, but absurd?

    • Douglas Robinson

      Interesting point, Ken!

      I’d have to say, though, that thinking as internalized conversation isn’t necessarily entirely linguistic. Conversation is often wordless–as when your significant other speaks worlds with a look–and even when it is accompanied by words, nonverbal communication experts estimate that as much as 93% of communication is nonverbal. I don’t know much about “language-less adults,” but I’d guess they can pick up on body language.

        • Douglas Robinson

          Ken,

          I’m not an expert, but I’m guessing children raised by wolves understand wolf body language extremely well. Human body language would be a mystery to them. But I’m not sure what the implication of your question is. Is it that you think I’m taking the ability to read body language (or generally to understand language) to be somehow innate? Obviously, individuals LEARN to communicate from the group–whatever group they live with.

          Doug

          • Ken

            Where I am going with this is that it seems to me that you are taking rather uncritically this idea that we think in conversation, and that the rejection of it is absurd. It sounds as though you just assume that children raised by wolves will learn wolf body language and that they will not have an innate capacity for knowing human body language. That sounds like something that would take a lot of experimental work to ascertain and not something that Mead, for example, would have known about.

            Why think that learning wolf body language, if that is possible, would suffice for human levels of thinking? Do wolves really have a sufficiently robust system of body language that would enable a human being to achieve relatively normal levels of intelligence?

          • Douglas Robinson

            I’m frankly completely bamboozled by this. It reads to me like a string of non sequiturs: (1) that I assume that we think in conversation, (2) that it is absurd to reject this notion, (3) that it’s problematic to assume that a human child raised by wolves would learn wolf body language, (4) that an innate capacity for “knowing human language” is somehow relevant to a discussion of a child raised by wolves, (5) that imagining a human child learning wolf body language somehow entails the supposition that s/he will develop human levels of thinking/intelligence. I’m sure there’s a rational chain of thought behind all this; it’s just powered by such an alien philosophy of language and intelligence that it makes no sense at all to me. Literally. To coin a phrase.

            (1) I don’t assume that we think in conversation. I never said anything like that. I said that thinking is internalized conversation. That does not entail that we always think, or that we always converse, or that we always think when we converse. It entails that we think because we have conversed, or, more broadly, communicated, with others.

            (2) I have no idea what “this” in “That the rejection of this is absurd” refers to.

            (3) I wasn’t raised by wolves, and as I specifically said, I’m not an expert on such matters. I said I’m guessing. But I’m guessing based on the fact that pet owners learn the body language of their pets. Horse trainers learn the body language of horses. Yes, I’m “just assuming” that a child raised by wolves would learn wolf body language. I’m “just assuming” that learning to communicate at some level with one’s community is essential for survival. Again, I can’t imagine what sort of philosophical orientation would make it seem reasonable to deny what to me is such an obvious assumption.

            (4) A child who had no exposure to human language at all would certainly have an innate capacity to learn human language. But so would s/he have an innate capacity to learn to play poker. The fact that the child raised by wolves had no exposure to either human language or poker would make the issue of an “innate capacity” to learn either completely irrelevant to the discussion. We’re talking about how individuals learn to communicate, not about what they might learn in other circumstances, in other communities.

            (5) The leap from “learning wolf body language” to “achieving relatively normal levels of intelligence” is the most baffling non sequitur of all. What on earth do “relatively normal levels of intelligence” have to do with anything here? The issue is not “relatively normal levels of intelligence”; the issue is whether thinking is internalized conversation. If a child raised by wolves achieves wolf intelligence by internalizing wolf conversation—for example, the kind of affective communication conveyed by body language—that is an instance of thinking as internalized conversation. “Normal levels” is a red herring.

  5. Carl Gillett

    Hi Doug, I obviously have not read your book given its newness, so I am going from your brief comments. So I would be grateful for some clarification of claim (1) in a number of ways.

    First, could you say a little more about what qualia are for you? In particular, what is the relation of qualia to spatial locations — do qualia occupy or have spatial locations around my body, or do they represent (in a phenomenal way, perhaps) those locations? (Or do they do both?)

    I am not sure I can get a grip on the argument till I get a grip on what you think about that. I have a few more questions, but let me start with that, best, Carl

  6. Douglas Robinson

    Carl,

    Since qualia were first theorized by Peirce, I use Peirce’s concept of the interpretant to understand them better. It’s a long, elaborate argument; rather than rehearse it here, let me give a few examples.

    I just wrote to Ken about the significant LOOK that your significant other gives you. How do you understand that? It’s a momentary event, as Peirce says all qualia are; but it’s obviously shared transcranially. I argue, drawing on the social neurology of empathy, that we understand it by feeling it: our autonomic nervous systems simulate the affect powering the look-as-quale.

    I yawn. You yawn–or else have to suppress a powerful impulse to yawn. My yawn is a quale; why do you feel the mimetic impulse so strongly?

    I tell a story about riding my bike on a hot summer day in shorts and a t-shirt, and my front wheel dropping into a rain grate, so that my bike flips me over forwards, and I skid along the hot asphalt on the bare skin of my face, arms, and legs. You shudder. Why? (Neurologists have studied this phenomenon, and argue over whether you feel affective pain only, or whether your body also simulates the physical pain, so that you actually “feel my pain.” In either case, you’re experiencing some kind of internalized simulacrum of my pain quale, based on my verbal account alone.)

    I know that spatial qualia were theorized early on, by psychologists in the late 19th century, but I don’t know much about them. My thinking about qualia was heavily shaped by Galen Strawson’s book Mental Reality, with its account of the mental as qualitative experience.

    Doug

    • Carl Gillett

      Hi Doug, thanks for the response which is again very helpful about where you are coming from and going to.

      From what you are saying it sounds as if qualia for you are not located in space-time, but represent spatial and temporal locations. Correct me if I have gotten that wrong.

      But now onto the “transcranial sharing” of qualia, since that could be understood in two ways — one of which conflicts with the assumption above and one of which does not.

      The non-located-qualia-compatible interpretation of “sharing” is that, for example with the yawn case, you and I both have quale tokens of the same type, each of which represents someone else and certain locations, and carries a disposition to make me act a certain way (i.e. yawn). So we share the unlocated quale in the same way that the car and the book share redness.

      Is that right?

      Or do you endorse one quale token which you and I each have? Or some other view?

      Thanks for answering these questions — I hope it is some compensation for your work that you have persuaded me to read your book! I look forward to your other posts here, very best, Carl

      • Douglas Robinson

        Carl,

        Thanks for your thoughtful questions! And I’m very glad to hear you’re planning to read the book!

        The problem I have with the notion of a “quale token” is that the so-called “type-token” distinction is a platonizing inversion of Peirce’s tone-token-type triad. Referring to a quale as a token would seem to me to imply that there is a quale-type and any given quale is a token of that type–right? Following Peirce, I would start at the other end of the triad: a quale (or a qualisign) is a tone, a First, completely unrelated to any other quale, completely sui generis. A shared quale would be a Second, enriched by engagement with others; but “token” seems completely wrong for the kind of Secondness it enters. In the book I liken the quale to an interpretant instead. The lone quale would be an emotional interpretant as First, the shared quale an energetic interpretant as Second, and the understanding that results a logical interpretant as Third. (That last isn’t in the book; I’m spit-balling.)

        My understanding of the “sharing” of qualia is neurological rather than logical: since the car and the book don’t have autonomic nervous systems that can SIMULATE each other’s qualia, I resist the likeness!

        Best, Doug

  7. Ken

    I don’t see how you get this: “if nothing material extends, mind doesn’t extend” from anything I’ve written. Where is this coming from?

    I also don’t see how Adams and I are committed to any of this in virtue of the arguments we’ve given against EC or in virtue of the challenges we have posed to the principal EC arguments:
    Speaking a language doesn’t help us think?

    Thinking itself isn’t a form of internalized conversation, steeped in transcraniality?

    Speech acts aren’t performed by whole groups?

    • Douglas Robinson

      Ken,

      Oh, you’re Ken Aizawa? I didn’t realize! Obviously I’m simplifying your arguments against the EMT or EC for the purposes of blogtalk; I deal with your objections to Clark at considerable length (two chapters) in the book, and don’t quite know how to reduce the complexity of that argument to a simple enough form for this kind of forum without caricaturing. The trajectory of the argument, though, is: a critique of Fodor’s LOTH (which you don’t specifically mention in your attacks on the EMT) and Dretske’s naturalized semantics (which you do); a critique of the theory that language is verbal labels (which Clark accepts, I suggest to the detriment of the EMT); a sorites series showing how the logic of “derived” vs. “nonderived” representations in natural language is fuzzy rather than binary; and a long discussion of speech acts as transcranially shared affect and conation.

      Best, Doug

      • Douglas Robinson

        Ken,

        Forgot to mention: “if nothing material extends, mind doesn’t extend” is blogtalk for “mind develops a combination of derived and nonderived representations; if a tool develops only derived representations, it is only COUPLED to mind.” The “material” part was my impatience. (And that wasn’t, obviously, a direct quote from you. I’m rushing out the door to work and didn’t have time to get an exact quotation.)

        Best, Doug

        • Ken

          I recognize the claims about non-derived content, but I don’t see how this is anything like the claim that “if nothing material extends, mind doesn’t extend.” It looks like the antecedent is false, so that this might be a vacuously true conditional.

  8. Ken

    ” But I’m guessing based on the fact that pet owners learn the body language of their pets. Horse trainers learn the body language of horses. Yes, I’m “just assuming” that a child raised by wolves would learn wolf body language. I’m “just assuming” that learning to communicate at some level with one’s community is essential for survival. Again, I can’t imagine what sort of philosophical orientation would make it seem reasonable to deny what to me is such an obvious assumption.”

    I would say that my philosophical orientation broadly aspires to be scientifically informed. So, I try not to overrate casual observations. Domesticated animals, for example, might well be a special case where we are better able to interact and communicate with them than with non-domesticated animals, such as wolves. From the little bit I know about animal communication, there can be lots of forms of communication that a child in the wilderness will not pick up on. Why assume that a child raised by wolves will pick up on the full range of wolf behavior that constitutes what one might call wolf body language? Such a child might learn a lot about wolf body language or barely scratch the surface of it. Maybe there are very subtle behaviors that figure into establishing dominance hierarchies and alliances among wolves, and a child might not pick up on that. I know that many highly educated primatologists have spent a lot of effort trying to discern such things among primates. I know that there is a lot of effort dedicated to understanding how crows interact. Maybe it is so easy that a child could figure it out. But maybe it is not.

    • Douglas Robinson

      One more time, and then I stop responding to irrelevancies: “the full range of wolf animal behavior that constitutes what one might call wolf body language” is not relevant to this discussion. The discussion is not about perfect understanding; it’s about whether ANYTHING is internalized from other individuals, transcranially, and whether what is thus transferred contributes to thinking AT ANY LEVEL. If the child raised by wolves picks up ANY ASPECT of wolf body language, and that transcranial internalization contributes to the child’s thinking AT ANY LEVEL, that to my mind constitutes counterevidence against your claim that language carries only derived representations. If you want to argue that point, I’ll continue. If you keep deliberately obfuscating that issue with red herrings, I’ll simply stop responding.

      • Ken

        So, maybe the thing to do would be for you to explain to me how this works.
        You write, “it’s about whether ANYTHING is internalized from other individuals, transcranially, and whether what is thus transferred contributes to thinking.” So, suppose that the child internalizes only one thing, namely, that wolves bare their teeth to ward off others. Now, you claim that “Cognition is internalized conversation.” That one thing internalized about wolves would seem to provide for only a very limited conversation, hence only a very limited range of cognition: much more limited than that displayed by children raised by wolves.

        This is why it seems to me to be relevant how much body language children might learn from wolves, and why we should not just guess. It looks to be an empirical question that would, in fact, be quite difficult to answer.

        • Douglas Robinson

          You and Fred Adams insist that language carries only derived representations. In the book (p. 74) I set up a syllogistic reduction of what I take to be your argument, as a putative Adams & Aizawa refutation of a Verizon Wireless Test Man scenario I construct, in which the Test Man and whoever he’s talking to back at the head office constitute extended mind:

          (begin quotation)
          [1] The Verizon Wireless Test Man and his interlocutor work together over the phone to determine reception strength at a series of locations.
          [2] Some cognitivists might characterize their collaboration as an example of extended cognition.
          [3] Cognition consists of thoughts (nonderived representations) and processes (remembering, paying attention, perceiving, verbalizing, and so on) whose origin and destination are intracranial.
          [4] Transcranial human communication proceeds by means of language or other sign systems capable of bearing only derived content (no thoughts, only secondary representations of thoughts) and lacking shared processes for remembering, paying attention, perceiving, verbalizing, and so on.
          [5] The telephone interactions between the Verizon Wireless Test Man and his interlocutor offer examples of coupled cognition (two separate cognitive agents interacting) without constituting extended cognition.
          (end quotation)

          And I comment:

          (begin quotation)
          The weak link in that syllogism, I suggest, is major premise (4); in a sense this entire book is a series of critiques of the thinking behind it. Transcranial human communication, Adams and Aizawa insist, is “merely derived” and therefore noncognitive; as they themselves recognize, this solution is “not entirely unproblematic” (ibid: 34). The philosophical tradition from which they borrow the solution is not merely the naturalized semantics (causal model of mental content, reductive representationalism) that they specifically mention, but more broadly the work coming out of Jerry Fodor’s (1975) language-of-thought hypothesis; it is to that tradition that we look next.
          (end quotation)

          As I understand it, the burden of proof is on you to show that NO nonderived representations are ever transcranial. If I can show that ONE nonderived representation is transcranial, no matter how tiny, no matter how insignificant–so long as it is arguably cognitive–your assertion is disproved.

          I don’t have to show that the child raised by wolves learns to communicate perfectly with wolves. All I have to show is that the child learns to think even a tiny bit, even to PREFER one type of food over another, by internalizing one tiny shred of wolf body language. If a wolf growls at the child and the child from then on feels that growl internally (this is the case I make), and the internalized growl guides the child’s decision-making, even in minuscule ways, mind extends.

          To disprove my assertion, what you have to argue is not that the child isn’t learning to communicate perfectly with wolves, but that the child isn’t internalizing anything: s/he is INTERPRETING the wolf’s body language, intracranially.

          As for science, excuse me if I don’t follow you: “In addition to our hypothesis that cognition involves non-derived representations, we want to advance the further empirical hypothesis that, as a matter of contingent empirical fact, non-derived representations happen to occur these days only in nervous systems. Insofar as this latter hypothesis is true, we have some non-question-begging, defeasible reason to think that, contrary to what the advocates of extended cognition propose, cognitive processing is by and large to be found within the brain” (Adams & Aizawa 2008: 55).

          All I have to do to force you to set the talk of scientific proof to one side is to invoke this same “empirical hypothesis” argument: it is my empirical hypothesis that as a matter of contingent empirical fact, a child raised by wolves will internalize affective-becoming-conative impulses from the wolves that guide his or her decision-making, and so constitute extended mind. I devote a long section (5.1) to a review of the literature in the social neurology of empathy that to my mind confirms my empirical hypothesis; but, you know, whatever.

          • Douglas Robinson

            To recap: we’re not arguing about the nature of communication. We’re arguing about whether mind extends. You and Fred Adams hypothesize that mind NEVER extends. It is sufficient to refute that hypothesis for me to prove that mind extends ONCE. If I could ever get you to admit to that once, you’d almost certainly want to shift your argumentative strategies so that that one extension is very weak, almost nonexistent, hardly even cognitive, probably not cognitive at all, and so on. But for right now, that single case is sufficient disproof.

            Now in the book I admit that there are two current scientific hypotheses about how we feel other people’s (and wolves’) affects and conations:

            [1] One is simulatory: the mirror neuron system simulates the other individual’s body states. The social neurologists studying empathy have done extensive empirical testing of that system in response to visually and aurally perceived body language.

            [a] One of the salient facts about the mirror neurons is that they seem to be incapable of distinguishing between an event in the host body and an event in another body: they fire in the same way in response to both. The result is that the qualitative experience of a shared affect or conation is virtually identical to the experience of an intra-PNS affect or conation. This is the phenomenology of extended mind.

            [b] The other salient fact about them, though, is that they fire inside each brain separately, individually. As I say in the book, this leaves you and Fred Adams an out: you can continue to claim that mind doesn’t extend, because the mental event is technically speaking still intracranial. The rapid synchronization of kinesthetic, affective, and conative guidance in groups, even large groups, is hard to explain if you reduce it all to conscious analytical interpretation; the operation of this system in ritual (including events like the opening ceremonies of the 2008 Beijing Olympics) to create what feels and looks like “one body, one mind” is the burden of my fifth chapter. But there are a few flimsy shreds of intracraniality to which diehard internalists can continue to cling, if they’re desperate enough.

            [2] The other current scientific hypothesis is that actual pheromones are transferred from body to body, carrying affect and conation. I don’t accept that hypothesis–pheromonal transfer takes too long, several seconds, whereas the transcranial sharing of affect and conation takes roughly 300 milliseconds. But if the pheromone hypothesis is confirmed empirically, then something material does in fact extend mind transcranially.

            Given that no one in the EMT debate as it has been conducted so far, since 1998, has raised these possibilities, I confess to reacting with considerable impatience to being told that I’m assuming something that would be difficult to confirm empirically–let alone that my invocation of material extension is a “vacuously true conditional.” I realize that you haven’t read my book, which gives me a considerable advantage and leaves you with only the simplified version of my arguments that I provided in my initial post, a version that left out all of the steps I’ve been explaining in detail here. But reminding myself of all that doesn’t change my affective response.

  9. I’m really impressed with the quality (and intensity) of this discussion so far, but can we all make an attempt to keep the tone collegial and our comments as constructive as possible? I expect that would make the exchanges more productive.

