Thanks to John Schwenkler for the invitation to guest-blog this week about our book Extended Consciousness and Predictive Processing: A Third-Wave View (Routledge, 2019).
Where does your (conscious) mind stop, and the rest of the world begin? We defend what has come to be called a “third-wave” account of the extended mind. In this post we aim, first, to give you a sense of what the debate within the extended mind community is about, and second, to sketch our third-wave interpretation of the predictive processing theory of the mind. We don’t consider how the third-wave perspective on the extended mind might be criticised, an issue we take up in later posts.
The “wave” terminology is due to Sutton (2010), and is used to distinguish the following three lines of argument for the extended mind.
First-wave extended mind is good old-fashioned role functionalism, associated either with common-sense functionalism (Clark & Chalmers 1998) or with psychofunctionalism (Wheeler 2010). First-wave theorists (as we saw in our first post) argued for the extension of minds into the world on the basis of the functional equivalence of elements located internally and externally to the individual. If those elements make similar causal contributions to the guidance of a person’s behaviour, they should be treated equally. More specifically, we shouldn’t exclude the external element from being a part of a person’s mind simply on the basis of its location outside of the biological body.
Second-wave arguments turn on the different but complementary functional contributions of tools and technologies as compared with the biological brain. Notational systems for doing mathematics, for instance, complement the brain’s inner modes of processing, resulting in the transformation of the mathematical reasoning capacities of individuals, groups and lineages. Second-wave arguments may seem to suggest a picture in which internal neural processes have their own proprietary functional properties that get combined with public systems of representation, such as systems of mathematical notation, that likewise have their own fixed functional properties. Something genuinely new then emerges (e.g. capacities for mathematical reasoning) when these elements, each with its own self-standing functional properties, get combined or functionally integrated.
Third-wave arguments are in agreement with the second-wave in taking material culture to be transformatory of what humans can do as thinkers. However, the third-wave view takes this transformatory process to be reciprocal and ongoing. Human minds continually form over time through the meshing together of embodied actions, material tools and technologies, and cultural norms for the usage of these tools and technologies. Individual agents are “dissolved into peculiar loci of coordination and coalescence among multiple structured media” (Sutton 2010, p. 213). Control and coordination are distributed over and propagated through the media taken up in cultural patterns of activity. The constraints (the local rules) that govern the interactions between the components (internal and external) of extended cognitive systems need not all arise from within the biological organism. Some of the constraints may originate in social and cultural practices, in “the things people do in interaction with one another” (Hutchins 2011, p. 4). The boundaries separating the individual from its environment and from the collectives in which the individual participates are “hard won and fragile developmental and cultural achievements” (Sutton 2010, p. 213).
Unlike the previous waves of theorising about the extended mind, a commitment to extended consciousness falls naturally out of third-wave arguments for the extended mind. We follow Susan Hurley in thinking of the material realisers of consciousness as extended dynamic singularities (Hurley 1998, 2010). Hurley uses this term to refer to a singularity in “the field of causal flows characterised through time by a tangle of multiple feedback loops of varying orbits”. Such causal flows form out of the organism’s looping cycles of perception and action, where the complex tangle of feedback loops is closed by the world. The extended dynamic singularity, she says, “is centred on the organism and moves around with it, but it does not have sharp boundaries” (Hurley 1998, p. 2). We depart from Hurley however in allowing for a decentering of extended dynamic singularities. As the cognitive anthropologist Ed Hutchins notes, some systems “have a clear centre, while others have multiple centres, or no centre at all” (Hutchins 2011, p. 5). The propagation of activity across various media is coordinated by a ‘lightly equipped human’ working (sometimes) in groups, and always embedded in cultural practices.
Third-wave arguments thus highlight the need for rethinking the metaphysics within which arguments for extended minds are developed. Unlike the standard notions of constitution, realisation or composition, all of which are atemporal or synchronic determination relations, we propose to understand such metaphysical determination relations in temporal or diachronic terms. Cognitive processes are “creatures of time” (Noë 2006), i.e., they are dependent for their existence on temporal unfolding over various media: some neural or bodily, others involving other people and the resources provided by an environment shaped by our cultural activities and patterns of practice. Extended minds are constituted diachronically. We summarise four key tenets of the third-wave view in the following table:
Key Tenets of Third-Wave Extended Mind
1. Extended Dynamic Singularities: some cognitive processes are constituted by causal networks with internal and external orbits comprising a singular cognitive system.
2. Flexible and Open-Ended Boundaries: the boundaries of mind are not fixed and stable but fragile and hard-won, and always open to negotiation.
3. Distributed Cognitive Assembly: the task- and context-sensitive assembly of cognitive systems is driven not by the individual agent but by a nexus of constraints, some neural, some bodily, and some environmental (cultural, social, material).
4. Diachronic Constitution: cognition is intrinsically temporal and dynamical, unfolding over different but interacting temporal scales of behaviour.
In our book we show how to interpret the predictive processing theory through the lens of third-wave extended mind. Predictive processing casts agents as generative models of their environment. A generative model is a probabilistic structure that generates predictions about the causes of sensory stimuli (a minimal formal sketch of this idea follows the list of posts below). We argue that the ongoing tuning and maintenance of the generative model by active inference entails the dynamic entanglement of the agent and environment. In the following three posts we will put this third-wave perspective on predictive processing to work to argue for three theses defended in our book:
– Post 3: The Markov blanketed mind: There is no single, fixed and permanent boundary separating the inner conscious mind from the outside world. The boundary separating conscious beings from the outside world is fluid and actively constructed through embodied activity.
– Post 4: Seeing what you expect to see: Proponents of predictive processing sometimes claim that perceptual experience should be thought of as a controlled hallucination. On this view, the contribution of the world is to provide a check on the brain’s predictive processes. We argue by contrast that predictive processing that generates conscious experience cannot be unplugged from the world but is exploratory, active and world-involving.
– Post 5: The diachronic constitution of extended consciousness: Adopting our third-wave perspective on predictive processing entails a new metaphysics of the constitution of conscious experience as diachronic, not synchronic.
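For readers who want the bare formalism behind the terms “generative model” and “prediction error” used above, here is a minimal textbook-style sketch (our gloss on the predictive processing literature, not a quotation from the book), with o standing for sensory observations, s for their hidden causes, and q for the agent’s current best estimate of those causes:

```latex
% A generative model pairs a prior over hidden causes with a likelihood
% mapping those causes onto sensory observations:
\[
  p(o, s) \;=\; p(o \mid s)\, p(s).
\]
% Perception is then cast as minimising variational free energy, an upper
% bound on sensory surprise, by adjusting the estimate q(s); active
% inference additionally changes o itself, through action, so that the
% model's predictions come true:
\[
  F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
    \;=\; D_{\mathrm{KL}}\big[q(s) \,\|\, p(s \mid o)\big] \;-\; \ln p(o).
\]
```

Nothing in the third-wave reading depends on this particular notation; it is only meant to fix what is being “tuned and maintained” when we speak of the generative model below.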
References
Clark, A., and Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19.
Hurley, S.L. (1998). Consciousness in Action. Cambridge, MA: Harvard University Press.
Hutchins, E. (2011). Enculturating the supersized mind. Philosophical Studies, 152, 437–446.
Noë, A. (2006). Experience of the world in time. Analysis, 66(1), 26-32.
Sutton, J. (2010). Exograms and interdisciplinarity: History, the extended mind, and the civilizing process. In R. Menary (Ed.), The Extended Mind (pp. 189–225). Cambridge, MA: The MIT Press.
Wheeler, M. (2010). In defense of extended functionalism. In R. Menary (ed.), The Extended Mind (pp. 245-270). Cambridge, MA: The MIT Press.
“We argue by contrast that predictive processing that generates conscious experience cannot be unplugged from the world but is exploratory, active and world-involving.”
Can you give an example or two of how predictive processing is “exploratory” and “world-involving”? A great amount of AI uses predictive processing. None of it is AGI/strong AI. None of it is exploratory – able to take a single new step along a single new path. None of it can cope with the “novel stimuli” of the real world. That’s why present computers and robots can only function in artificial mental & physical environments specially prepared to be predictable for them, and not in real-world environments. There are no robots roving the fields of the unpredictable real world. The basic reason is simple – there is no way in the world to predict and prescribe for “unknown unknowns” (aka the new) – unless you’re Donald Rumsfeld or God Almighty. The real world is in the main excitingly ever-new and only in parts boringly predictable (though you can never predict which parts). It’s possible you may have been misled by AI hype, which makes all kinds of grand claims, like the ability to “create” and deal with “novel stimuli”. Looked at closer, it never can and always disappoints.
We agree with you that AI has had real trouble getting to grips with action in the real world. AI systems have struggled, for instance, with working out what is relevant and what is not in dynamically changing real-world environments, a problem known as the frame or relevance problem.
Prediction-driven learning may, however, prove to be a breakthrough in artificial intelligence research, because neural networks that employ this type of learning are able to learn without supervision from humans (e.g. Hinton 2007, TiCS; Linson et al. 2018, Frontiers). They don’t need to be supplied with human examples of how to solve a problem, or with the desired output in the form of labelled images. They can teach themselves how to solve a problem such as deciphering handwritten text, recognising images of cats on the internet, or translating speech.
A neural network that learns to recognise handwriting would seem to be able to select, from the possible letters the handwriting could represent, the letters it is likely to actually represent. The neural network seems to know how to work out, from the possible interpretations, which are relevant possibilities and which it should ignore. Machines that use prediction-driven learning may thus be able to evade the relevance problem, at least within the narrow domains in which they function. The key question so far as we are concerned is whether these modest successes can scale up to account for the common-sense understanding people bring to bear in their everyday engagement with the world. Neural networks that learn through prediction-driven learning are not only able to self-learn; they are also able to self-teach, exploring the space of possible solutions and finding the solutions that work best without assistance from humans. This may make all the difference when it comes to determining the relevance of their past learning to novel cases.
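To make “prediction-driven learning” a little more concrete, here is a deliberately minimal sketch (ours, not the authors’; it uses a toy numeric stream rather than real handwriting data) of a network whose only teaching signal is its own error in predicting what comes next. No human supplies labels or target outputs; the “supervision” comes from the next sample the world delivers.

```python
# Minimal sketch of prediction-driven (self-supervised) learning.
# Assumption: a toy "sensory stream" stands in for real perceptual input.
import numpy as np

rng = np.random.default_rng(0)

# Toy sensory stream: a noisy repeating pattern the network can learn to anticipate.
pattern = np.array([0.1, 0.9, 0.5, 0.2, 0.8])
stream = np.tile(pattern, 400) + 0.02 * rng.standard_normal(2000)

# A tiny two-layer network; its weights play the role of a simple predictive model.
W1, b1 = 0.5 * rng.standard_normal((16, 1)), np.zeros((16, 1))
W2, b2 = 0.5 * rng.standard_normal((1, 16)), np.zeros((1, 1))
lr = 0.05

def predict(x):
    h = np.tanh(W1 @ x + b1)        # hidden "expectations"
    return W2 @ h + b2, h           # predicted next input

errors = []
for t in range(len(stream) - 1):
    x = np.array([[stream[t]]])            # current input
    target = np.array([[stream[t + 1]]])   # what the world actually does next
    pred, h = predict(x)
    err = pred - target                    # prediction error: the only teaching signal
    errors.append((err ** 2).item())
    # A hand-written gradient step that reduces the squared prediction error.
    dh = (W2.T @ err) * (1.0 - h ** 2)
    W2 -= lr * err @ h.T
    b2 -= lr * err
    W1 -= lr * dh @ x.T
    b1 -= lr * dh

print("mean squared prediction error, first 100 steps:", round(float(np.mean(errors[:100])), 4))
print("mean squared prediction error, last 100 steps: ", round(float(np.mean(errors[-100:])), 4))
```

The point of the toy example is only the shape of the learning signal: the network improves because its predictions are repeatedly checked against what the stream actually does next, which is the sense in which prediction-driven learning is self-teaching rather than instructed.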
Self-teaching becomes possible through what in predictive processing is called active inference. We can distinguish two varieties of action that are inferred through active inference, called “pragmatic” and “epistemic” actions (Friston et al. 2015). Pragmatic actions are inferred by biological agents when they already know what to do to bring about some desired goal state. A sequence of actions can then be inferred that can be exploited to reduce prediction errors directly. Epistemic actions are actions the agent engages in to reduce uncertainty. They allow agents to find statistical structure that will enable pragmatic action in the long run (Friston et al. 2015: 2).
The distinction between epistemic and pragmatic actions also speaks to another interesting distinction in the literature on choice and decision-making, between model-based and model-free strategies. Active inference is all about the minimisation of expected uncertainty. Under active inference, learning action policies takes the form of working to optimise or improve the distribution over policies. Pragmatic actions can be thought of as, minimally, quasi-model-free, in the sense that they are much more habitual and specialised, and therefore tightly tethered to the task domain. Epistemic actions, on the other hand, are model-based in the sense that they involve search and exploration over a larger number of possible action policies. In this sense, model-based strategies are much more general than model-free strategies, which speaks to how novelty enters the picture in the active inference framework (Chen et al. 2019).
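For readers who like to see the formal counterpart of this distinction, one standard decomposition of expected free energy in the active inference literature (our notation, following the general form used in Friston et al. 2015, rather than anything specific to the book) scores a policy π at a future time τ as:

```latex
\[
  G(\pi, \tau) \;=\;
  -\underbrace{\mathbb{E}_{\tilde q}\big[\ln q(s_\tau \mid o_\tau, \pi) - \ln q(s_\tau \mid \pi)\big]}_{\text{epistemic value (expected information gain)}}
  \;-\;
  \underbrace{\mathbb{E}_{\tilde q}\big[\ln p(o_\tau)\big]}_{\text{pragmatic value (expected log preferences)}},
  \qquad \tilde q = q(o_\tau, s_\tau \mid \pi).
\]
```

Policies with lower summed G are more probable; epistemic actions earn their keep through the first term, pragmatic actions through the second.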
With respect, you are indeed listening to the AI hype – which, it can be predicted with near certainty on the basis of perfectly consistent experience in the past, will not be borne out. That’s why I asked for examples – and the only example you give of specific problem-solving/action is handwriting, where no real exploration is needed. (BTW my impression is v. few would-be AGI-ers hold out any AGI hope for neural networks – and there isn’t any).
Rational predictive processing of the old and routine is an interesting part of how unconscious human & animal minds work. The conscious mind however is there to deal with the new and unpredictable – creative *retrospective* processing. You are a moving camera taking a stream of moving images of moving (and therefore continuously form-changing) bodies. There is no way to predict the myriad changes of form bodies undergo – the myriad new dances in which they engage. The big deal is being able to work out what new behaviour just happened, what those changes of form mean.
The classic examples of this are all the arts – which all reflect human life – whose raison d’être is to present new, unpredictable, exciting, shocking, disorienting forms and behaviours – which viewers seek out for their newness, and reject if they are predictable. The creative idea – esp. in the form of jokes – is the mind’s sex – the mind’s ejaculation. Our minds are designed first and foremost for new and unpredictable ideas, just as our bodies are designed for new and exciting physical sex.
P.S. If you want a simple counter-example of “creative handwriting” to test those programs, try them on Google figurative logos, where any body in the world can be substituted for the letters of “Google”. Humans handle them with ease – retrospectively. No NN or any other program of the type you instance could begin to.
Hi guys, congratulations again on the book, and it is great that we can now interact with you on it here at the Brains blog!
I would like to understand better in what sense you are actually departing from Susan Hurley and more importantly the “decentering” you have in mind yourselves. You write:
“We depart from Hurley however in allowing for a decentering of extended dynamic singularities. As the cognitive anthropologist Ed Hutchins notes, some systems “have a clear centre, while others have multiple centres, or no centre at all” (Hutchins 2011, p. 5). The propagation of activity across various media is coordinated by a ‘lightly equipped human’ working (sometimes) in groups, and always embedded in cultural practices.”
Would you say that your “decentering” in the case of human cognition means that people “have multiple centres, or no centre at all”? But isn’t the person in a sense always a centre, given that what matters to him or her, what is relevant to the human cognitive system, is crucial for the cognitive activities that that person engages in? In the final sentence of my quote from your post above you actually seem to suggest something that also suggests a kind of centering, namely the coordination that happens by the “lightly equipped human”.
Hi Erik, nice of you to stop by given how busy things are just now. So, to answer your question, we would agree with you in thinking of individuals as loci of coordination. In talking of decentering we had in mind distributed cases of cognition in which coordination is not centralised in any of the individuals but is spread across the activity of multiple individuals. Think of Hutchins’ work describing the cognition of the crew of a navy ship. In terms of Markov blankets, the blanket would temporarily form around the group but also around each of the individuals. This is possible because Markov blankets are nested, something we argue at length in the book and elsewhere. The Markov blanket would form around the group because the individuals that make up the group temporarily form a system that is self-producing and self-maintaining. The work of producing and maintaining the boundary is done by the group as a whole. The group, considered as a single cognitive system, therefore has no centre. But it also has many centres insofar as it is made up of individuals, each of whom has their own centre.
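Since Markov blankets do a lot of work in this answer, a one-line formal gloss may help (again our notation, a standard statement from the literature rather than a quotation from the book): a blanket b is a set of states that renders internal states μ and external states η conditionally independent,

```latex
\[
  p(\mu, \eta \mid b) \;=\; p(\mu \mid b)\, p(\eta \mid b),
\]
```

with the blanket itself typically partitioned into sensory and active states. Nesting then just means that the internal states of a group-level blanket can themselves contain individuals, each of whom carries a blanket of their own.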