So let’s circle back to the combination problem for panpsychism, which got me onto this topic in the first place. Despite reading and writing a lot about it, I’m still never quite sure what counts as ‘solving’ the combination problem. Often I read papers that say ‘here’s the combination problem’ and give a nice neat argument with interesting premises, as though the challenge is just to work out which premise to reject. Unfortunately, another paper will say the combination problem is something completely different. (This is why I like to remark that the combination problem is a combination of problems.)
Other times the combination problem seems less like an argument, and more just a question: ‘how do particle-minds combine into human minds?’ This question is at once too hard and too easy to answer: it’s too hard to give the actually correct answer (just like it would be too hard to demand that Epicurus know the true theory of how atoms react and bond, before accepting his atomism), but too easy to just give one possible answer, to tell a speculative story in which minds combine in just the way we want them to.
In Combining Minds I offer ‘panpsychist combinationism’, trying to balance ‘work out which premise of this argument to reject’ with ‘sketch out a speculative story’, using the arguments to reveal constraints on what the story could look like. In this post I’ll focus on some of the ways that the mental combination required by panpsychism differs from the kinds looked at in the previous posts.
Here’s one: panpsychists think consciousness is a strictly fundamental property – not a superficial abstraction, not a high-level fuzzy-bordered pattern like ‘being a good listener’ or ‘macrofauna’, but one of the basic building blocks of reality, like ‘mass’ or ‘space’. And I argue that one implication of this is that panpsychists should accept unrestricted combination of consciousness. That is, we don’t divide composite things into those that do, and those that don’t, inherit consciousness from their parts. If I’m conscious because I’m made of conscious matter, so is the table, so is ‘all the air in this room’, considered as one whole, so is ‘me and all the air in this room’, considered as one whole, and so is the solar system, despite my using it in post 2 as an example of something clearly not conscious.
That’s because, while there are lots of interesting and important differences between one large-scale physical system (like me) and another (like this table), those differences are all ultimately matters of degree, with no fundamental break in between (see Goff 2013 for a longer version of this argument). It’s the same sort of stuff, just arranged differently, and in between any two arrangements, there’s an intermediate arrangement that’s more like each than the other is. On the hypothetical continuum from ‘matter organised like a brain’ to ‘matter organised like a table-leg’, there’s no non-arbitrary place to draw a sharp line dividing ‘conscious’ from ‘non-conscious’. There can be differences of degree, and there can be fuzzy lines, but (I argue) fuzzy lines only really make sense for non-fundamental properties, which can be analysed into degrees on an underlying scale.
(Metaphysically-inclined readers may recognise this form of argument from David Lewis’ case for ‘unrestricted composition’, the view that any set of things in the world composes a whole (Lewis 1986, p. 212). Even disconnected things, like the front half of some trout somewhere and the rear half of some turkey somewhere, can be treated as one and in that sense form a whole – Lewis calls this a ‘trout-turkey’.)
A lot of panpsychists, I think, see this profusion of indefinitely many overlapping conscious subjects – this ‘universalism’ – as something to avoid. They see the challenge as ‘how do we develop a panpsychist theory that doesn’t imply universalism?’ (see e.g. Mørch 2019, cf. Buchanan and Roelofs 2018). I think this is the wrong direction to go: I think universalism is the natural implication of treating consciousness as really fundamental, and really akin to the fundamental physical properties. After all, every composite of particles with mass has some mass, and every composite of particles that occupy space occupies some space. In terms of the really fundamental building blocks of the universe, the difference between me and a table is superficial.
So for me, the challenge is not ‘how to avoid universalism?’, but ‘how to make ourselves ok with universalism?’ Here’s one important part of that: recognising that much of this multiplicity is ‘non-summative’ (as Sutton, referenced in post 2, would say). You don’t need to be a panpsychist to think that both my brain and I are conscious, but that this doesn’t mean there’s ‘twice as much consciousness’ as we thought. My consciousness is nothing in addition to my brain’s, and its consciousness is nothing in addition to mine. Likewise, even if ‘the solar system’, or ‘me plus this table’, or ‘me plus my friend John’ are all, strictly speaking, conscious entities, it’s not like each one is adding more consciousness to the world: they’re just different ways to ‘slice it up’. Just as a trout-turkey existing doesn’t mean there are any extra gills or wings in the world, or any extra mass or charge, so accepting that a trout-turkey is conscious doesn’t mean there’s any extra consciousness.
(This is why I think that accepting ‘experience-sharing’, and the consequent qualification to the standard idea of mental ‘privacy’, is necessary for any version of combinationism – functionalist, panpsychist, or other.)
Here’s another part of making ourselves ok with universalism: to treat most of our talk about which things are ‘conscious’ as really about which things have a particularly interesting sort of consciousness. That’s because panpsychist combinationism is a theory about ‘consciousness’ in a quite attenuated sense. Philosophers often distinguish between ‘access-consciousness’ and ‘phenomenal consciousness’ (Block 1995), or more broadly between the various cognitive functions associated with our term ‘consciousness’ and the specific phenomenon of having an inner, subjective, experience (Chalmers 1995). The former – things like the capacity for introspection, verbal report, intelligent planning, etc. – are exquisitely complex, but in principle seem like the kind of things that natural science can explain.
We don’t yet know how evolution built a system which displays the kind of flexible goal-directed behaviour, and in particular the kind of self-monitoring, that we do, nor how to artificially construct such a system ourselves. But fundamentally it’s just a matter of getting the right causal pathways, the right input-output relations, the right information-processing. Yet no amount of those things seems to tell us why there is anything it feels like for the system to do this, anything it’s like ‘from the inside’. That ‘feeling like something’ is phenomenal consciousness, and because that’s what seems distinctively hard to explain, that’s what panpsychists posit as a general property of matter.
This means that what’s universally present isn’t consciousness in the form we’re familiar with, associated with introspection, reflection, attention, and cognitive things like that. This kind of incredibly simple, inert, unreflective, undifferentiated consciousness is, I think, very hard for us to imagine, just because any imagining of it will be integrated into our introspection, reflection, attention, etc.
If I can quote a (panpsychism-unfriendly) comment from SelfAwarePatterns (on this post here, which, like Philip and Sam, I think is attacking an absolute strawman):
If that’s the relevant question, here’s the relevant, unanimous, answer: no!
And the consequence – that this isn’t the sense of consciousness that ‘most of us find interesting’ – is completely right. It’s still a theoretically important sense, because it’s the most resistant to explanation, and it fundamentally conditions how we think about the richer sense of consciousness, but by itself it’s not what we’re usually interested in. When we ask, for instance, ‘are fish conscious?’, or ‘are plants conscious?’, the panpsychist’s easy ‘yes’ is not actually an answer to the question asked, which is better framed as ‘do fish/plants have the kind of consciousness we think is important?’ (Buchanan and Roelofs 2018 is a longer version of this point).
This is why I like to think of panpsychist combinationism and functionalist combinationism as complementary theories: the former is about bare phenomenal consciousness, and endorses an unrestricted, unconditional, sort of inheritance:
Experience Inheritance (EI): Whenever a part of aggregate x undergoes an experience [in the bare phenomenal sense], x undergoes that same experience. (Combining Minds, page 79)
…while the latter is about the sort of sophisticated cognitive functioning that we associate with the term ‘consciousness’, and endorses a conditional sort of inheritance:
Conditional Experience Inheritance: If a part of X has an experience which plays a certain functional role, then X shares that experience, with the same or similar functional role, to the extent that the part in question is connected to the rest of X such that its experience has sensitivity to, control over, and co-ordination with other events occurring in X. (Combining Minds, page 183)
Together, I hope, these give a coherent combined theory where bare phenomenal consciousness is everywhere (because there are no fundamental breaks in nature), but cognitively rich consciousness is present only in specific places (because there are lots of fuzzy boundaries in nature). Both sorts of consciousness are composite, but in different ways.
References:
Block, N. (1995). “On a Confusion about a Function of Consciousness.” Behavioral and Brain Sciences 18 (2): 227-287.
Buchanan, J., and Roelofs, L. (2018). “Panpsychism, Intuitions, and the Great Chain of Being.” Philosophical Studies, Online First: 1–27.
Chalmers, D. (1995). “Facing Up to the Problem of Consciousness.” Journal of Consciousness Studies 2 (3): 200-219.
Goff, P. (2013). “Orthodox property dualism + linguistic theory of vagueness = panpsychism.” In R. Brown (ed.), Consciousness Inside and Out: Phenomenology, Neuroscience, and the Nature of Experience, Springer: 75-91.
Lewis, D. (1986). On the Plurality of Worlds. Oxford: Blackwell.
Mørch, H. H. (2019). “Is Consciousness Intrinsic? A Problem for the Integrated Information Theory.” Journal of Consciousness Studies 26 (1-2): 133-162.
Hi Luke, nice, very lucidly argued series of posts! I’ve always thought that the strongest version of the combination problem runs via a notion of qualitative character and targets the rejection of emergence that panpsychism must be based on, because otherwise we would just say that consciousness emerges with some configuration of basic physical properties. That’s supposed to be unintelligible, and that’s why instead we must assume fundamental (proto-)phenomenal properties. However, whatever these may be like experientially, it seems clear they are qualitatively distinct from the experiences that we are familiar with, such as those of pain or listening to Beethoven. But then not even any combination of fundamental phenomenal properties will give us the familiar experiential properties. So we would have to again assume some qualitative jump, as in emergence. But then it seems we have merely moved the bump in the carpet. If emergence is inevitable here, why not assume it from the start? So I wonder how you would respond to that argument. From some things you say, I get the hunch that you would claim the difference is just a matter of functional organization and deny that there is a qualitative phenomenal difference?
That’s an excellent question Michael, and exactly right about the rejection of emergence. But can I first clarify what you mean by ‘qualitatively different’? If that means something like ‘absolutely different’ or ‘discontinuously different’, then yes I think I would deny that there is (or that we can know there is) such a difference (I’d say there’s merely a structural difference). But if it means specifically ‘difference in phenomenal quality’, then it sounds like this is something like what’s sometimes called the ‘palette problem’: how to get the (very many) different qualities of human experience out of the (presumably very few in number) basic qualities experienced by electrons or whatever. I think the natural answer is to think that different ways of combining the basic qualities can constitute different qualities, and that this sort of ‘phenomenal blending’ is plausibly something we can see in some of our own experiences. If someone gives me two unfamiliar smells at once, I’m quite likely to apprehend them as a single blended smell; the idea would be that even seemingly basic qualities in my experience, like ‘red’ or ‘mintiness’, are likewise blends out of qualities experienced by neural parts of me, which in turn are blended from qualities experienced by their parts, and so on down to whatever the smallest level is. I talk about this a bunch in “Phenomenal Blending and the Palette Problem”, and in chapter 4 of Combining Minds. Does that seem like an answer to the question you had in mind?
That does answer my question, and pretty much as I suspected you would. However, I think your appeal to a notion of blending may be misleading, and your smells example is ambiguous. Are smells here conceived as objective entities that still need to be experienced by a subject? In this case they may indeed cause a ‘blended experience’ which is identical to neither of the experiences they would have caused by themselves. But can this be the model for how experiences combine on the sort of experiential atomism that you seem to be committed to? I don’t see how it can. It seems to me you are committed to the view that the atomic elements of our experience are just the same as for tables (or what have you) and that the difference can only be in the different configurations of these experiences. But if that’s correct, what can ‘blending’ mean here?
That’s right, we have to be careful with examples, and avoid (as William James puts it) ‘mistaking the combination of objects for the combination of feelings’. What’s of interest is not whether a stimulus that creates an experience is composed of two other stimuli, but whether the experience is composed of two other experiences. But recognising that distinction isn’t an argument against thinking that sometimes, it really is the latter, not just the former.
So I’m happy to be committed to thinking “the atomic elements of our experience are just the same as for tables… and that the difference can only be in the different configurations”. You wonder what ‘blending’ can mean here. Here’s what I say in the book to introduce the notion of blending:
“Consider the experience of seeing a red patch next to a yellow patch. Not only are you experiencing red, and experiencing yellow, and experiencing them together — you are also experiencing them as two distinct things. This feature of your experience is what I am calling ‘phenomenal contrast,’ and it goes along with (but is not simply the same as) various cognitive abilities, to do things like focus on the red but not the yellow, judge how sharply they contrast with one another, remember each color distinctly so as to recognize each if you see it by itself elsewhere, and so on. Phenomenal contrast is a feature of the composite experience’s phenomenal character, and it comes in many forms. (Compare the experiences of red above yellow, red below yellow, red on a yellow background, etc.) These are different ways to experience red and yellow together.
My claim is that not all ways to experience red and yellow together—i.e., not all composite experiences which subsume an experience of red and an experience of yellow—involve phenomenal contrast. Some present red and yellow to the subject, without presenting them as distinct, and thus without the subject’s being able to attend to one or the other, to judge their contrast, to remember each individually, and so on. I will call these composite experiences ‘phenomenal blends’ of red and yellow; their subjects experience red and yellow together in a blended way, not as two distinct things. Moreover, I cautiously suggest that sighted human beings sometimes undergo blended experiences of red and yellow when they experience the color orange, though this specific example is not essential to my case.”
Ch 8 of Ruyer (1952) discusses some of this:
“A crowd of very intelligent men bears a striking resemblance to a crowd of stupid men or even animals or molecules… This example shows that the negation of the body or of matter as a distinct entity does not entail the affirmation that an autosubjectivity resides behind every ‘object’ or phenomenon. The molecules that make up a cloud, a machine, or the Earth can have a subjectivity as much as the humans who make up a crowd. But the crowd, the machine, the cloud, and the Earth do not have one. ‘Physical existence’ designates a mode of bonding between elements, not a category of beings. If the interactions among the components are superficial in nature and propagate step by step, we will be fully entitled to speak of physical existence, even if each of the components is mental or intelligent.”
He does think that the “primary consciousness” of an organism is distinct from the “I-consciousness” associated with a brain, but intimately interacting.
Thanks David, that’s really interesting. It looks like Ruyer is talking about the kind of ‘restricted combination’ version of panpsychism that I’m not sure can work. Some composites, like organisms and molecules, have an ‘autosubjectivity’, but others, like crowds or machines, don’t. I guess I think, as I said, that this sort of distinction is hard to maintain. It looks like he’s trying to draw it in terms of whether “the interactions among the components are superficial in nature and propagate step by step”; I’m skeptical that this really marks any fundamental difference between the interactions within a machine and within an organism.
Hi Luke. Given Ruyer’s book is Neofinalism and he read extensively in embryology and cybernetics, his answer would be something along the lines of:
Every explanation of organic teleology that relies on an analogy to machines amounts to explaining internal teleology by means of external teleology; but in both cases, a teleology is always at stake. The cruder the mechanism, as was Descartes’s, the cruder the corresponding teleology…
The brain or even the nervous system cannot be deemed to have monopoly over memory, habit, invention, signifying activity in general, and finalist behavior, nor even over consciousness, conceived as the proper subjectivity of the organism. It cannot have monopoly over memory, for the good reason that in ontogenesis, the brain is remade de novo from an egg… Last, the brain cannot have monopoly over consciousness. This point is more delicate, not because there is any reason to hesitate over it, but because an essential and difficult distinction becomes necessary. The brain certainly has monopoly over sensory consciousness, that is, a consciousness whose “information content” is supplied by sensory organs modulated by external stimuli. Indeed, it is paradoxical to say, with Bergson and several other contemporary authors, that the area striata, for example, this cortical retina, is nothing more than a center of movement. But the brain does not have monopoly over what could be called organic consciousness, whose “content” is constituted by the organism itself or by its living elements. The term “content” should be taken here in the particular sense of “information content.” What “informs” psychological consciousness (if we tentatively ignore kinesthesia) are the objects of the external world, their pattern, which is transmitted more or less faithfully by sensory organs. What “informs” primary, organic consciousness, in contrast, is the form of the organism, its formative instincts, and the instincts directed toward a specific Umwelt.
In all organisms proper, organic memory constitutes specific potentials that can be reincarnated in innumerable individuals. In physical beings, no enrichment of this kind is found. The semisubstantialization of activities into “mnemic beings” does not take place for physical beings… This crucial fact aside, a perfect isomorphism exists between the finalist activity of higher organisms and the activity of physical beings…
[In an organismic] composite system, a transfer of activity from elements to the system is possible, and vice versa, a transfer that corresponds to the increase or decrease in interaction and in binding forces… individuality of the system, individuality of elements. Because activity is synonymous with freedom, we can say that in a system that loses its unity, the elements reassume their own activity and their freedom, which had been partially mobilized when the system acted as an individual.
Anyway, I can’t summarize the book – he is often quoted by the enactivists as well as Deleuze etc.
Thanks David, but maybe you can help me out with how to understand these claims. I’m not sure whether to read them as asserting a distinct mentalistic mechanism or force, which opponents would deny the existence of, or as describing widely-accepted mechanisms or forces in mentalistic terms.
Like, when you write “[the brain] cannot have monopoly over memory, for the good reason that in ontogenesis, the brain is remade de novo from an egg”, is that saying
1. ‘To explain how ontogenesis creates brains from eggs, we need to posit a special, and mentalistic, power of memory (the ‘semisubstantialization of activities into “mnemic beings”’), as opposed to just appealing to chemical processes like enzymes and RNA reading DNA’?
or
2. ‘The widely-accepted chemical mechanisms by which ontogenesis creates brains from eggs (enzymes, RNA, etc.) should be recognised as instances of the mentalistic category ‘memory’ (the ‘semisubstantialization of activities into “mnemic beings”’), just like some brain processes are’?
I’d evaluate those two claims very differently – the first sounds like it’s undermined by the progress of molecular biology, while the second isn’t, and instead raises questions of conceptual analysis (what does ‘memory’ mean, etc.).
But it also looks like only the former supports a sharp boundary between machines and organisms – if the claim is just the latter, I’d want to question why nothing that a machine does, even a self-repairing, self-updating, machine, counts as ‘the semisubstantialization of activities into “mnemic beings”‘.
(For what it’s worth, I think contemporary panpsychists are almost universally aiming to make claims of the second sort, not the first: they want to maintain complete deference to physics, chemistry, etc. on all questions about causal mechanisms, but posit phenomenal consciousness as the intrinsic accompaniment to those mechanisms.)
Hi Luke. Undoubtedly the second. Ruyer has it that “Buying bicarbonate at a pharmacy is an act of the organism, just like secreting pancreatic fluid. Both of these acts have the same goal [even though mechanistically different]…It is even more difficult to see in X, our neighbor with whom we are chatting, the embryo that his actual body continues and, in the effort he makes to speak, the sequence of efforts that this embryo had to make to constitute a larynx and a tongue for itself.”
As to machines: “This machine has a unity, yet clearly not a proper unity, because it results from the play of amboceptors assembled by the engineer and because, for want of maintenance and inspection, the machine quickly reverts to a state of scrap metal. No doubt, in both this and the cloud’s case…[only] the metal molecules or atoms have to be deemed to have a ‘for-itself’ of their own, because they actively retain their form and their unity in the absence of any external maintenance.” So a self-repairing machine might go part of the way to a primary consciousness, depending on just how flexible it was in meeting its goals.
So I think Ruyer’s consciousness is inherent/supervenient in any unitary system that exhibits homeostasis, and at the higher level “a surveying unity”, and equipotentiality, i.e. material can be repurposed, e.g. cortex “[which] retains the equipotentiality of the embryo that derives from the equipotentiality of the egg”. He doesn’t know what kinds of bonds are needed to create a unified consciousness.
Anyway, this is drifting away from your ideas – Ruyer seems to think the unity of the “secondary” I-consciousness is disconnected from the primary consciousness of the organism, even though they share the same goals. He gives the amusing example of sex.
Yeah, I see – I should confess that I don’t engage much with questions about the metaphysical status of teleology and purposiveness in my work. I will say that I think (and I feel as though I’m sort of channelling Spinoza here, though perhaps through my own misinterpretations) that inanimate things can also be seen in terms of ‘homeostasis’: the physical forces holding together a rock, or a raindrop, do serve to maintain certain features of its arrangement (shape, cohesion, etc.) through a variety of perturbations. And I feel like ‘how flexible something is in meeting its goals’ is exactly the sort of continuum on which we can’t draw any genuinely sharp lines.
But I do like the idea of distinguishing a narrower ‘I-consciousness’ from something broader and deeper, which is more tied to general characteristics of organisms, more than to brains and neocortexes. I think it comes up particularly in trying to connect the very general claims of panpsychist combinationism to the narrower claims of functionalist combinationism. In chapters 5 and 6 I distinguish ‘inclusionary’ and ‘exclusionary’ approaches, which differ in how they think about the relation between these two.
Please do not waste labour without addressing the “question of identity” (ref. the Ship of Theseus).
The dichotomy of 1) Part and 2) Whole cannot be solved by black-and-white thinking when there is no border in the universe.
Galen Strawson is right that “Matter” is the mystery, not consciousness. We need an adequate model of matter before venturing to explain consciousness.
Consciousness is simple and easy.
The ability to respond to stimulation, as conditioned, is consciousness.