Thanks to John Schwenkler and The Brains Blog for giving me this space to write about the new, revised edition of my book, The Embodied Mind: Cognitive Science and Human Experience, co-authored with neuroscientist Francisco J. Varela (1946-2001) and psychologist Eleanor Rosch. The MIT Press first published the book in 1991; the revised edition, published in January 2017, leaves the original text unchanged (except for some minor corrections) but adds two new introductions, one by me and one by Eleanor Rosch, as well as a new foreword by Jon Kabat-Zinn.
Let me begin by telling you a little bit about how the book came to be written. Varela and I started working on it just over thirty years ago, in the summer of 1986, in Paris, so when I look at the book now, let alone try to write about it, I feel a kind of retrospective vertigo. Back then I was a first-year Ph.D. student in Philosophy at the University of Toronto and Varela had just moved to the Ecole Polytechnique and the Institut des Neurosciences.
We had met in 1977 when he came to a conference at the Lindisfarne Association, an educational institute and community founded by my parents, William Irwin Thompson and Gail Thompson, in Southampton, New York. My father and Gregory Bateson, who was our scholar in residence, led the conference, called “Mind and Nature.” (Bateson’s book, Mind and Nature: A Necessary Unity, was published in 1979.) Varela in turn was our scholar in residence in 1978 at Lindisfarne in Manhattan, where he finished his book, Principles of Biological Autonomy, while doing research at the NYU Brain Research Laboratories. Living together in Manhattan, he became a member of our family—a combination of uncle and older brother to me, as well as my intellectual mentor. That relationship was the context in which we worked together on The Embodied Mind in Paris from 1986 to 1989.
Varela had moved to Paris by way of the Max Planck Institute in Frankfurt (where he collaborated with neuroscientist Wolf Singer) in order to set up his lab investigating the neurophysiology of vision. I had graduated from Amherst College, where I had majored in Asian Studies, focusing on the Chinese language and Buddhist philosophy. I was about to write my Ph.D. dissertation at the University of Toronto in cognitive science and the philosophy of mind. Varela suggested that I use color vision science, specifically the comparative investigation of color vision in different animal species, as a lens for focusing on philosophical issues about perception. Color vision in birds was one of his main areas of experimental work at the time, so I learned color vision science and wrote my dissertation in his lab while we worked together on The Embodied Mind.
Eleanor Rosch joined us in 1989. I had moved to UC Berkeley, where I was a Visiting Postdoc in Philosophy, and where Rosch was a Professor of Psychology. (Varela and Rosch had been friends for many years.) The three of us finished the book in 1989-90.
Our central aim in the book was to create cross-fertilization between cognitive science and the phenomenology of human experience. Cognitive science was dominated by the computer model of the mind, which treated lived experience as epiphenomenal or irrelevant to understanding the workings of the mind, while phenomenology had retreated into the textual analysis of works by dead phenomenologists. Our approach was twofold. First, we reconceptualized cognition as a form of embodied action while reframing cognitive science as the investigation of embodied cognition. Second, we used Buddhist philosophy and accounts of Buddhist meditation practices to reinvigorate phenomenology. Out of this cross-fertilization came a new approach, which we called the “enactive approach.”
In the intervening years, many things have happened that make the book more immediately accessible than when it was first published. The embodied cognition research program is central to cognitive science and the enactive approach is well known. Whereas the dominant model of the brain in early cognitive science was that of a stimulus-driven, sequential processing computer, it’s now widely recognized that brain activity is largely self-organizing, nonlinear, rhythmic, parallel, and distributed. The idea that there is a deep continuity in the principles of self-organization from the simplest living things to more complex cognitive beings is a mainstay of theoretical biology and neuroscience. Subjective experience and consciousness, once taboo subjects for cognitive science, are important research topics in the philosophy of mind and cognitive neuroscience. Phenomenology in the tradition of Husserl and Merleau-Ponty plays an active role in the philosophy of mind and informs areas of experimental cognitive science. Meditation practices are increasingly used in clinical contexts and are a growing subject of investigation in behavioral psychology and neuroscience. And Buddhist philosophy is recognized as an important philosophical tradition and interlocutor in contemporary philosophy. None of this was true when we were working on The Embodied Mind. It’s fair to say, I think, that the book has had a hand in these advances.
Three decades later, I see the effort to create a new kind of relationship between science and human experience as the book’s original and lasting contribution. We argued for enlarging cognitive science to include transformative experiences of the self and the world, and for enlarging human experience to include insights from cognitive science. This back and forth “circulation” between cognitive science and human experience is the heart and soul of the book. It makes the book “about something real,” to borrow the opening words of Eleanor Rosch’s new Introduction, while also making the book “not fit easily into any of the usual academic disciplines.” It’s also responsible, I believe, for the book’s continuing influence in the study of embodied cognition—not just in cognitive science, but also in the arts and the humanities, as well as in somatics and the bodywork disciplines.
At the same time, when I reread the book now I can’t help but see it as limited by several shortcomings, ones that have become increasingly apparent to me over the years and that need to be left behind in order to advance the book’s vision. I’ll talk about these shortcomings in my next post.
What an interesting story, very well told. I worry that the phenomenology of phenomenal properties, such as colour, will not go far enough. I would like more efforts to reconceptualize the physical stimulus, as well as the physiological response and the phenomenology or psychology itself. J.J. Gibson wrote about “higher-level invariants”, and my hope is that we will find many more of these in much more interesting systems than have so far been conceptualized in our philosophy. I am looking forward to Evan’s latest contributions.
Thanks, Jonathan. Color perception is one of our central examples of enaction in The Embodied Mind, and I develop the view further in my 1995 book, Colour Vision. There I use Gibson to develop a relational and ecological account of color. Whether this kind of account can be extended to “higher-level invariants” is an interesting question. I tried to do so for a range of phenomena in my book, Mind in Life (e.g., temporal experience, emotion, and empathy and intersubjectivity), though not in strictly Gibsonian terms. I’m currently working on ways we might think of concepts and categories in this way, too, drawing from cognitive science and classical Indian Buddhist philosophy.
“we reconceptualized cognition as a form of embodied action” How does what you call “cognition” relate to what cognitivists call “performance”?
In the book, we didn’t relate the idea of cognition as embodied action to the performance/competence distinction. Embodied cognition theorists generally avoid this distinction. Nevertheless, in the book we used “embodied action” to refer mainly to the exercising of capacities for skillful, situated action (cognitive performance), and “sensorimotor networks” to refer to what (on the organism side of the organism-environment system) enables the production of situated action (competence).
Thanks for sharing, Evan.
“Whereas the dominant model of the brain in early cognitive science was that of a stimulus-driven, sequential processing computer, it’s now widely recognized that brain activity is largely self-organizing, nonlinear, rhythmic, parallel, and distributed.”
FWIW, this is a false contrast. The same system can be a computer, stimulus-driven, sequential (at a high level of organization), parallel (at lower levels of organization), self-organizing, nonlinear, rhythmic, and distributed.
I disagree that it’s a false contrast. A system that is self-organizing in the sense we talked about in The Embodied Mind (and that’s now much more widespread) isn’t stimulus-driven. Stimuli modulate but don’t drive. Also, historically and methodologically, treating the brain as a stimulus-driven machine (e.g., the labelled line theory) prevented appreciating and investigating the brain in terms of large-scale patterns of endogenously generated and self-organized activity, which strongly affects how stimuli are received and what they mean for the system.
Thanks for your response. Are you suggesting that computers are stimulus-driven but brains are stimulus-modulated, and that this is a substantive difference between them (as opposed to using two labels for the same effect)? Without further clarification on how you distinguish between driving and modulating in this context, this sounds like a merely verbal maneuver.
Also, I don’t see what this has to do with the labelled line theory, which is a plausible theory of many sensory systems (except probably taste and olfaction), or how the labelled line theory could have prevented appreciating and investigating large-scale patterns of brain activity.
Thanks for your follow up.
I don’t mean to say that computational systems are necessarily stimulus-driven; that depends on the system. I take your original point about the conceptual possibility of a system’s being differently describable at different levels. Even so, it may be much harder to work this out in practice for any real system (e.g., there are difficult questions about exactly what’s required for genuine multiple realizability). I worry that your “false contrast” characterization runs the risk of underestimating the difficulties here. For example, given that brain activity is largely endogenously generated, self-organizing, rhythmic, nonlinear, and nonstationary, in exactly what way is it also not merely describable but also explainable (if it is) as stimulus-driven and sequential?
Let me back up a bit and give some more context for what I said in the post that started this discussion. My point was that the way the brain was viewed as a computer in the early days of cognitive science treated it as if it were a stimulus-driven machine. The overall contrast I meant to draw attention to was between the way the classical computational model of mind (or GOFAI, if you like) viewed the brain and how we’re now inclined to view it. In earlier days, the brain was typically described as transducing physical signals into syntactic structures that were computed as discrete units in a feedforward and sequential way. The connection to labeled lines is that the “instantiation function” (Pylyshyn’s term) that was supposed to take us from syntactic processes to neurophysiological processes often appealed to labeled lines—the idea that activity in any individual neuron unambiguously represents a particular stimulus feature, and does so independently of the amount or type of activity in other neurons—as well as to the idea of (what we now call) classical receptive fields—the idea that sensory neurons code for specific attributes in specific peripheral areas. So when neurobiologists (notably Freeman in his 1975 book, Mass Action in the Nervous System, along with Maturana and Varela, Llinás, and others) and neural network theorists (e.g., Grossberg) challenged this way of understanding neuronal activity, and more generally challenged this way of trying to mesh the brain as a biological organ with the brain as a computational system, they met with a lot of resistance from mainstream cognitivism. These theories made individual neuronal activity much more a matter of the large-scale dynamical context of brain activity, in ways that cut against the standard labeled line and classical receptive field models.
Today this is all history for us—we’re familiar with population codes, spatiotemporal codes, nonclassical receptive fields, and with all sorts of neural network models in which there’s a complex back and forth between “top down” and “bottom up” activity. So this is one example of what I meant when I wrote that some of the things we talked about in The Embodied Mind are much more accessible today.
Evan, thanks for the clarification. A few additional comments: (1) I still don’t know what you mean by “stimulus-driven” versus “stimulus-modulated,” (2) changes in our understanding of how the brain works had little to do with GOFAI (GOFAI people rarely cared about the details of how the brain works), and (3) GOFAI is only one way to think about computation in the brain among others; mainstream cognitive neuroscience is more wedded to computationalism today than it was during the heyday of GOFAI.
PS: digital computers have all of the above characteristics at least in some respects (though that’s not enough to make them very similar to brains; the devil is in the details).
Gualtiero, sorry for taking so long to respond to your follow up.
By “stimulus-driven” versus “modulated” I mean the difference between evoked neural activity and spontaneous neural activity. Evoked neural activity is locked to the stimulus and occurs in well-controlled task situations. Spontaneous neural activity is not stimulus-locked and is always richly present (not just in so-called resting states). It creates the context in which stimuli are received and become meaningful. An example is the way ongoing neuronal oscillations at different frequency bands shape whether two stimuli with the same inter-stimulus interval are perceived as simultaneous or sequential. See this article for a recent review: https://www.cell.com/trends/cognitive-sciences/abstract/S1364-6613(16)30104-8
I agree with your points (2) and (3). Regarding (2), I wasn’t talking so much about GOFAI per se as about a certain way of looking at the brain that shared elements with GOFAI, both views being part of an overall scientific climate. Regarding (3), I don’t disagree, but I would want to raise the question of what exactly “computation” means and point to the need to be clear about the difference between using computational tools to analyze the brain and giving a theory of the brain as a computational system.
Thanks to Evan for sharing this fascinating story. *The Embodied Mind* has been dog-eared on my bookshelf (read and reread) for 20 years. Looking forward to the new edition.
Although Bateson’s *Mind and Nature* was information-based and did not include first person phenomenological views, I wonder to what extent you and Francisco saw his work as antecedent and influential to your book and also, what are Bateson’s lasting contributions to the embodied mind approach?
Bateson’s views were influential on Francisco in the early years of his work in the 1970s. At that time he talked about Maturana, Bateson, and Piaget (the Piaget of Biology and Knowledge) as especially important. He also had studied Heidegger carefully, so phenomenology was important from early on, but it didn’t start to come out in his writings until we wrote The Embodied Mind.
I see Bateson’s work as of lasting importance. There are very few thinkers who manage to cut across anthropology, biology, ecology, and systems theory in the way he did and with his kind of insight. For theorists who work on embedded and extended cognition, which I see as inseparable from the embodied approach, his insights are central. (I’m thinking, for example, of Edwin Hutchins’s work.)
I’m an architect, a longtime student of Merleau-Ponty, Varela, psychophysics, cognitive science and color. I’ve been wondering recently how, exactly, an autopoietic building might work. An autopoietic machine (as defined in 1980) is one that “continuously generates and specifies its own organization through its operation as a system of production of its own components, and does this in an endless turnover of components under conditions of continuous perturbations and compensation of perturbations” [p. 78]. We are nowhere near being able to realize this goal in autonomous (even experimental) structures, or so I believe. What do you think are the autopoietic opportunities for architecture?
A related question: what do you think of the possibility of defining the unity (organism) at the core of autopoiesis as a network of relations between operator and building and by this path defining an entity that might be evaluated in terms of non-linear dynamical systems?
The related philosophical issue, as I imagine it: is it a legitimate move to identify systems at different scales as autopoietic when components of the system under consideration are heterogeneous mixtures of organic unities and non-organic heteropoietic (is this the correct term?) objects?
Thanks, Jack. An autopoietic building is a very interesting idea. I would want to distinguish between autopoiesis in the strict sense—a bounded, self-producing system, where the boundary is like a membrane—and an autonomous system, that is, a precarious system with operational closure (see this article by me and Ezequiel Di Paolo for more on autonomy: https://evanthompsondotme.files.wordpress.com/2012/11/di-paolo-and-thompson-enactive-approach.pdf ). Autopoietic systems are autonomous but not every autonomous system achieves its autonomy through autopoiesis. So a more general way to put your question is whether there could be an autonomous building. I don’t see why not. Maybe there could even be an autopoietic one. In either case, as you say, we are indeed a long way off from being able to synthesize or construct autonomous or autopoietic systems (though some synthetic bounded autocatalytic systems are arguably at least proto-autopoietic).
I don’t think I quite get your second question.
In response to your last point, I think a system could consist of heteropoietic entities that nonetheless subserve autopoiesis at a higher level—Maturana and Varela sometimes talked about multicellular organisms this way—and I don’t see why such a system couldn’t consist of nonorganic materials. (Prostheses are made of nonorganic materials but we incorporate them into our functioning as autonomous systems.)
evan, thanks for your thoughts.
i ask about the autopoietic opportunities for architecture because my suspicion is that the only way to realize a long-term, resource positive building is though (as you write) “a bounded, self-producing system, where the boundary is like a membrane—and an autonomous system, that is, a precarious system with operational closure.” thanks for the reference. cheers