What follows are some excerpts from a manuscript on the PCC (psychological correlates of consciousness) that I’ve been working on for a while, which will perhaps never see the light of day. Philosophers, feel free to have a field day with my naive ideas.
Note that I’m not sure which of the ideas are mine and which I stole from others. Of the inspirations I remember, two stick out: Gregory suggested that qualia function as a ‘tag for the present’ (see his paper on this here), and Trehub has suggested similar ideas in the domain of space (in his book The Cognitive Brain).
The Perspectival World Model (PWM) hypothesis is a theory about the contents of consciousness. The main idea is what I will call Claim (1):
The contents of consciousness are a model of events that are currently evoking activity in a subject’s sensory receptors. The model at a given time is called the perspectival world model, or PWM.
Implicit in (1) is that the PWM is perspectival in both a spatial and temporal sense. First, the PWM is spatially perspectival. No matter how hard we try, we cannot see the penny which we know is behind the coffee cup, things behind our back, or the front cover of a book whose spine is facing us. What child who has read comic books hasn’t tried with all his might to see through clothing or engage in other perceptual feats that would violate this limitation on perception? Despite these limitations, bring any of these objects into the right spatial relation to our sensory receptors, so that they activate our retinae for instance, and they can become integrated into the PWM.
Second, the PWM is temporally perspectival because it is a model of what is presently evoking activity in our sensory receptors. There seems to be a small time window of integration within which sensory inputs can affect the PWM. The prick of a mosquito bite, the bright flash of a camera, the experience of my coffee cup on the table: these are all representations not of what affected my receptors last week or next year, but of what is happening now.
It seems that the constructor of the PWM is a quite radical empiricist, building into the model only features that are supported by current direct sensory evidence. While this imposes rather severe limitations on the PWM, it should not seem surprising that organisms would benefit from building a model of what is happening here and now: that is where most of the biological exigencies are in the struggle to reproduce, avoid being eaten, and to acquire food.
The PWM model of consciousness has lots of potential philosophical implications; I’ll mention one. It seems to render the ‘perspectival’ nature of conscious experience nonmysterious. The ‘first-person’ or ‘perspectival’ nature of conscious experience is often taken to be one of its profound properties, one not amenable to “objective, third-person” scientific methods. While consciousness is admittedly still a mysterious feature of the universe, it is not mysterious because of its perspectivalness. Perspectivalness is built into the PWM from the start, and is a perfectly physical spatial and temporal perspective that emerges from a system struggling to model what is causing its sensors to be activated at the present moment. To use philosophers’ jargon, the perspectivalness of consciousness is no more mysterious than the perspectivalness that is part of the semantics of indexical expressions such as ‘here’ and ‘now.’
So that is the skeleton of the model (I’ve left out a discussion of some features of the PWM, such as its limited capacity relative to the rest of the information processing going on in our brains, and whether the PWM is modal, amodal, or multimodal).
I’ll now mention some of the most obvious potential concerns and problems with the hypothesis.
1. PWM not sufficient for consciousness
There seem to be lots of unconscious PWMs in our brains. The human mind carries out a whole host of sensory and motor tasks using cognitive machinery to which we do not have conscious access. Examples include our ability to effortlessly dart our eyes to and fro when confronted with a crowded visual scene, to carve time-varying airborne pressure waves into sentences, and to solve complicated motor control problems such as hitting a baseball moving at over 100 mph.
So, what must be added to a brain-model of what is happening here and now to make it conscious? I have lots of ideas, none of which I’m confident about. I touch on them a little bit in numbers 5, 6, and 7 below.
Note that Richard Gregory thinks that qualia tag the present so we don’t confuse representations that are memories (e.g., the memory of being stung by a bee) with those that tell us what is happening now. However, there are all sorts of unconscious representations of what is happening now (e.g., activity in the retina), so he runs into the same problem.
2. What about experiencing the past and future?
If qualia tag the here and now, what of imagining things in the past and future? The experiences of such imaginings are likely due to activation of the sensory cortices during such episodes, which then produces a PWM. But why, then, do we not confuse imagined events with perceived ones? I’m not sure (but see 3 and 7).
3. Hallucinations, dreams
These are states in which there is nothing in the world corresponding to the model. Consciousness most likely evolved as a mechanism for perceiving events during ordinary waking states, not for use in dreaming or hallucinations. Dreams and hallucinations seem like an off-line use of the PWM machinery, caused either by altering the inputs to the PWM-generating mechanism, by altering the PWM-generating mechanism itself, or both.
4. What is the time window?
While the PWM tags the present (at least that is the temporal content of the model), what is the time window within which the PWM-building mechanisms integrate information to construct the model? Is it fixed or flexible, and does it depend on the sensory modality?
5. Modal, amodal, or multimodal?
Seeing a wooden cube and touching a wooden cube are quite different experiences, suggesting that the PWM carries information about which receptors are being activated, i.e., that it is not amodal. But if you touch and look at the cube at the same time, is the information integrated into a unified multimodal representation? Or perhaps, are there multiple conscious PWMs, one for each modality, that “feel” unified because they happen to overlap appropriately in space? I tend to look at the PWM as a spatially integrated multimodal model. The modalities are not merged into one amodal soup: the PWM represents felt and seen cubes differently, at least in the sense that you can differentiate these aspects of the PWM and know the modality to which these aspects belong. See 6.
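To make the contrast concrete, here is a toy sketch, purely illustrative rather than part of the proposal itself, of one way a spatially integrated multimodal PWM might be represented: every feature keeps a tag for the modality that produced it, but all features live in a single egocentric spatial frame. The class and field names are hypothetical.

```python
# Toy sketch only: one hypothetical way to represent a spatially integrated,
# multimodal PWM. Features carry a modality tag but share one egocentric frame.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PWMFeature:
    modality: str                          # e.g., "vision", "touch"
    description: str                       # e.g., "small wooden cube"
    location: Tuple[float, float, float]   # egocentric (x, y, z), in meters

# Seeing and touching the same cube yields two modality-tagged features that
# overlap in space: they stay distinguishable (no amodal soup), yet the shared
# location is what lets them "feel" like one object in the model.
pwm: List[PWMFeature] = [
    PWMFeature("vision", "small wooden cube", (0.4, 0.0, 0.1)),
    PWMFeature("touch", "hard, cubical surface", (0.4, 0.0, 0.1)),
]
```

The only point of the sketch is that modality information is preserved (felt and seen cubes remain distinguishable) while spatial overlap does the unifying work.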
6. Space
The PWM represents the spatial location of events in the world. How do the spatial maps for the different modalities relate to one another? Is the unifying PWM an egocentric model that is “ready” for behavioral use, i.e., the most abstract and unified representation of the spatial layout of things in the world that is still egocentric, and therefore most useful for guiding behavior with respect to the world?
For instance, most early sensory representations have a receptor-surface-based frame of reference. These might need to be converted to a more usable format, one readily convertible to behavior. If I see and hear that there is a tiger in front of me, my actions should be the same: get my body away from the tiger! I shouldn’t merely move my eyes and ears so that I no longer perceive the tiger; I should move my body to escape.
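Below is a minimal two-dimensional sketch, under assumptions that go beyond the post, of the sort of frame conversion being gestured at: a direction given in an eye-centered (receptor-surface) frame, plus the current eye and head angles, yields a body-centered direction, and the escape response can be computed from that body-centered angle alone, whichever modality supplied it. The function names and the specific angles are made up for illustration.

```python
import math

def body_centered_angle(angle_on_retina, eye_in_head, head_on_body):
    """Convert an eye-centered direction to a body-centered one (all radians)."""
    return angle_on_retina + eye_in_head + head_on_body

def escape_heading(stimulus_angle_on_body):
    """Point the body 180 degrees away from the stimulus, wrapped to (-pi, pi]."""
    return math.atan2(math.sin(stimulus_angle_on_body + math.pi),
                      math.cos(stimulus_angle_on_body + math.pi))

# A tiger 20 degrees left on the retina, with the eyes turned 10 degrees right
# and the head turned 30 degrees right, is actually 20 degrees right of the trunk.
tiger = body_centered_angle(math.radians(-20), math.radians(10), math.radians(30))
print(round(math.degrees(escape_heading(tiger))))  # -160: flee away from the tiger
```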
Perhaps Pete Mandik’s theory would help me out here.
Note that similar questions to those I asked about space could be asked about time. That is a very confusing topic, though.
7. What is the PWM used for?
Is it used for planning in the near future or the distant future, for helping to form autobiographical memories, or for fine motor control, as when sewing or riding a bicycle? My hunch is that we use it for longer-term planning: I see a tiger and plan my escape. But see also number 6: perhaps what is essential is that it is a component of a more general process of converting all these different modalities into an integrated spatial coordinate system that can be used to guide behavior. These uses are not mutually exclusive; it may be like asking, “What is memory for, helping us find food or find mates?”
8. Nonsensory experiences
The PWM is supposed to be a model of what is activating our sensory receptors. But what about experiences such as the tip-of-the-tongue phenomenon? Looking for the word ‘perseverate’ has a distinct qualitative feel, but it isn’t at all clear that this is a model of a sensory phenomenon. Perhaps I would have to revise the hypothesis so that the PWM includes models of what is happening in other brain regions, like the brain region looking for the word ‘perseverate.’ But then the PWM is a misnomer, unless we include other brain states in the ‘world.’
On the other hand, why not start a model of the PCC with a model of typical perceptual consciousness? My hunch is that with such a detailed model in place (for which the PWM seems to have fewer problems), it will become clearer how it could be expanded to more difficult cases like hallucination, thinking of the past, and the experience of trying to recall that word for getting caught in a cognitive rut.
I look at 1-7 as “friendly” problems that are all in the spirit of the PWM, and think addressing them would help clarify its structure. Number 8 is less friendly. I can either temporarily classify it as a friendly problem and deal with it the same way I hope to deal with the problems of dreams, hallucinations, and imagining the future. Or I can face it head on, as it may point to a severe weakness in the overall proposal, one whose solution will point to a very different model than the PWM.
How would you explain the “half-second delay” shown in Libet’s 1982 paper “Brain stimulation in the study of neuronal functions for conscious sensory experiences” (Human Neurobiology 1, 235-42)?
Here are the pertinent results:
· In order to “consciously experience” a sensation, it must apparently bounce around the somatosensory cortex, or some other “high-level” area of the cortex, for about half a second, probably isolated to the frontal areas.
· “A touch on the skin that the subjects would otherwise have reported feeling was retroactively masked up to half a second later by a stimulation to the cortex” [Blackmore].
Now, obviously, a half-second is a lot of time, and we are able to react a lot faster than that (stepping on a snake, etc.). So it seems like consciousness originally evolved for some reason other than making us directly aware of our perceptions. In other words, perception seems to work just fine without consciousness, and in evolution, “if it ain’t broke, why fix it?”
So it seems to me that Libet’s experiment is more in line with a Dennett-esque “neural echo” for narrative selves than with a model for dealing with the sensory “present”.
Everyone has a different interpretation of Libet’s work, though; it struck me as relevant to the PWM model.
I don’t really get this idea, as I don’t know what you mean by qualia; there are numerous possibilities: (1) qualia as quined by Dennett, i.e., intrinsic, private, ineffable properties of experience; (2) not necessarily ineffable but private & intrinsic properties of experience (most philosophers, such as Galen Strawson, take this view); (3) private & partially ineffable properties of contents of experience (van Gulick)… And so on; you can substitute processes for properties (this changes a lot), or you can stipulate that qualia are not functionally realizable. But note that in reality there seem to be no intrinsic properties whatsoever in the empirical world (intrinsic in Russell’s sense, i.e., determined only by the internal parts of the object), as most objects are determined by other physical objects, and so are our experience and its contents.
Now, tagging requires tags: timestamps or spacestamps, probably. So is there any neuroanatomical evidence for anything like that, and how does it relate to qualia? (Note that on some accounts, you cannot have any neuroanatomical evidence for anything that has anything to do with qualia.)
Sorry for being fussy; I just need to be clear. The idea of perspectives seems nice, but vague without further specification.
Marcin: You seem to have read the title of the post very closely.
1. On the word ‘qualia’: I don’t want to get into side issues about defining qualia. Too philosophical for me. I used the word in the title just to be cute. The PWM is just a psychological model of the contents of conscious experience. Seeing a dog, feeling an itch, whatever your canonical examples of awareness or conscious perception are, that’s the kind of thing I’m talking about. If you don’t think such things exist, then the model isn’t for you.
2. On the word ‘tag’: don’t get hung up on it, not a substantive part of the post.
3. Perspectivalness: I discuss the sense in which the PWM is perspectival (in a spatial and temporal sense) in two paragraphs in the post.
The time window of integration used by the mechanisms that construct the PWM would be a parameter of the model that would have to be determined empirically.
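As a hedged illustration of what treating the window as a parameter could look like, here is a toy sketch in which the integration window shows up as a single free parameter of a PWM-building function; the names and the 0.1-second default are placeholders, not empirical claims. If the window turned out to differ by modality, as asked in list item 4, the parameter would simply become a function of modality.

```python
def build_pwm(sensory_events, now, window_s=0.1):
    """Keep only events whose timestamps fall inside the integration window.

    sensory_events: iterable of (timestamp_seconds, modality, content) tuples.
    window_s: the integration window; a free parameter to be fixed empirically.
    """
    return [e for e in sensory_events if now - window_s <= e[0] <= now]

events = [
    (9.80, "touch", "mosquito prick"),      # too old: outside the window
    (9.97, "vision", "camera flash"),
    (10.00, "vision", "coffee cup on table"),
]
print(build_pwm(events, now=10.0))  # only the two most recent events survive
```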
As I allude to in list item 7, I tend to agree that the PWM isn’t *for* the here and now, even though it is *about*, or modelling, the here and now. This is a cool paradox. It is probably used for longer-term planning and behaving. When we pull our hand away from a hot stove, we usually don’t feel it until our hand is already well away from danger. How, then, is consciousness useful, since we behaved just fine without it? It is exactly these kinds of considerations that motivated my discussion in number 7.
“As I allude to in list item 7, I tend to agree that the PWM isn’t for the here and now, even though it is about, or modelling, the here and now. This is a cool paradox. It is probably used for longer-term planning and behaving.”
Okay, I see what you mean. I have a similar pet theory that agrees with you that [insert phenomena here] is utilized for purposes other than direct perception of the environment for short-term actions. I think the planning/memory/empathy systems all blossomed out of a single evolutionary brain-event that was originally for social purposes and eventually “accelerated” to the current overly top-down stage that we are in now. Does that make any sense?
Gary: I’d be surprised if all those things evolved in a single event, but it would be interesting to see evidence. I avoid evolutionary theorizing for the most part, and try to focus on how things work now that we have these evolved systems. This is partly because getting the evolutionary story right probably depends on getting the story about the present phenotype right, and we are a long way off from that at this point in time.
Hi Eric:
What if somebody objected that the PWM isn’t *sufficient* for consciousness? Would that be an “unfriendly” objection of type 8? Or maybe the following should go under your 2, or 7, or 3? Regardless, I mean the objection to be friendly! But it seems to me that many things that we are conscious of are not elements of our immediate sensory content (one of the great advantages of consciousness!) I can easily keep my mind on where the food and the girls were (if not on how to get back to them) *despite* dramatic fluctuations in what I’m currently sensing. Doesn’t that alone prove that the current world-model isn’t necessary for what I’m conscious of (i.e. the memory content)? If a current PWM is necessary for the experience of memory content I’d be curious about how you think the connection works…
Hi Ryan: I agree that a PWM isn’t sufficient for consciousness (problem number 1), and I think this shows that it must be supplemented with more details as mentioned there. I discuss your good question about conscious recollection of the past, or imagining future events under number 2 (and a little under number 3).
As I said there, I don’t yet have any good ideas on how to handle this problem. It may be a less friendly problem than I originally thought. Sure, when I imagine looking around my office at work, my visual cortex is activated (this has been shown in fMRI experiments), so this could be providing inputs to the PWM-generating mechanism. The big problem is that the phenomenology when I imagine my office is very different from that when I am in my office actually looking around. When I (visually) imagine my desk, I have no color qualia and don’t literally see anything (none of those ‘faint copies’ in my phenomenology). Also, recent studies show that though visual cortex is activated by imagination, it isn’t necessarily the same places in V1 that are activated when directly perceiving the objects in question (the study is here). This could actually help me: if imagination activated the same regions of V1 in the same way, that would be a direct deal killer, since I would predict the PWM to be exactly the same and so expect a vivid phenomenology.
How does the PWM differentiate imagined from real seeing to provide these different phenomenological flavors? I have a couple of ideas, neither of which is convincing to me yet. One, imagery seems to rely on working memory: activated items are “placed” in working memory and we have an experience of those objects. The PWM could be modelling the presently activated contents of this working memory system instead of things in the world (so this is related to unfriendly objection 8). A second possibility is that imagined events in working memory structures are treated as a different modality by the PWM, so imagining X and seeing X are as different as seeing X and touching X. I’m not sure how much research has been done on imagery and working memory. I assume a lot, but I am not familiar with it.
A third possibility is that the contents of the PWM are the contents of working memory, which means working memory includes not just the usual ‘visuospatial sketch pad’ and ‘auditory buffer’, but also the “perceptual experience” buffer. That is, visual experience is a kind of working memory you get “for free” without much effort, just by opening your eyes, and just as the elements in working memory are thought to be for longer-term problem solving and decision making, perhaps the same can be said of the contents of the PWM.
Tons of interesting issues here I’ll have to think about if I ever want to really get this stuff off the ground. Metzinger’s work resonates with it, but I find it very hard to understand what he is saying a lot of the time.
PS My likely move is to modify the PWM to be clear that it is solely a model of perceptual consciousness, that it isn’t clear whether it will work for imagination and the like. I’d be perfectly happy with a model of the contents of perceptual consciousness to give a foothold on the psychology of consciousness.
1. I follow a simple rule of thumb: don’t use a concept if you don’t know how to define it 🙂
Everyone agrees that we see dogs and feel itches (if we’re normal human beings), so there is nothing special in it. Qualia claims, on the other hand, belong to metaphysics, and have to be clear.
Anyway, if you think that conscious experience is about the external world as perceived by our senses, what about memory, hallucinations, and dreams? They are experiential as well, and perspectival as hell 😉
Marcino: I discuss those in the post and these comments. Again, you might read the actual post.
Marcin: Incidentally, since you won’t let it go, I define qualia as the occurrent contents of consciousness. I gave some examples of the contents of consciousness. Obviously it is hard to give a positive, explicit definition of qualia (even Block couldn’t do it).
But this doesn’t bother me, as I also follow a simple rule of thumb: prematurely precise definitions are the bane of philosophy. The precise definitions come after the science, not before. I’m happy with extension-fixing characterizations. That’s what we need at first: pick some canonical cases of what we are interested in (e.g., binocular rivalry, the cutaneous rabbit), and study the hell out of them until they are explained. Then the precise definitions will start to come. Patricia Churchland: ‘[S]cience essentially leads rather than follows definitions (Daniel Dennett wittily dubs the eagerness to forge definitions in advance of adequate data as the “heartbreak of premature definition”)’ (from The Computational Brain, p. 446).
Your pedantry is endearing, but come on, how about something substantive, Marcin?
Eric–
I like the PWM idea. I was going to mention Metzinger in this context, but I see you’re aware of that (and I agree it’s not always transparent what he means!). Also, you might consider the “higher-order” theories of Armstrong, Rosenthal, or Lycan.
Questions:
1. What is the representational format of the model? Is it like a complex map? A set of sentences? Answering this question would help get the qualia-obsessed philosophers off your back–either you’d make it clear you’re not using “qualia” in a spooky way, or expose that you are (I take it for granted that you are not making any dualist claim, but that word sometimes is taken to imply such commitments).
2. Can the model be in error about what happens at the sensory level? Can we misinterpret a red sensation for one of green? Pain for a tickle?
3. How might learning affect the model? Is it “theory laden”? How much might top-down influences alter the model?
Thanks!
Josh: Thanks for the great questions.
1. Representational format
My old undergrad advisor got on me about this. I am agnostic about the representational format. Since I think pretty much any type of content can be represented in any format, it shouldn’t be important. So the vehicles of the model are whatever neuroscience says they are in 50 years. In the meantime, this is a psychological theory, which we can probe using psychophysics (e.g., what is the time window of integration?) and less fine-grained neuroscience techniques (fMRI and other techniques to look at unimodal and multimodal issues and spatial representations). It could even be implemented in ectoplasm as long as it gets the functional details right. I talk about the psychological correlates of consciousness just to avoid metaphysical discussions (similar to neuroscientists and the discussion of the NCC).
2. Errors
I haven’t thought about that. It is an interesting question. To step back, we have this little ‘consciousness box’ modelling events in the world that are generating activity in its sensory receptors.
Let’s consider two main types of “errors” (there are others, but I’ll try to stay focused). First, modality errors: mistakenly assigning an input from the visual system as coming from the auditory system, so that when something hits the retina you hear a sound; this could look something like synesthesia. Second, intra-modal errors: mistaking a pain for an itch, for instance, or even getting the location of something wrong. I don’t see why these wouldn’t be possible, though I don’t have anything to say about them yet.
3. Top-down effects
Another excellent question. My hunch is that it doesn’t depend much on our explicit theories (e.g., you can’t make the Müller-Lyer illusion go away just by thinking about it), so it is modular in that sense. Also, I would guess that aboriginal tribesmen have a very similar PWM to ours, even though their interpretations of the events perceived would be quite different (e.g., airplane versus god passing above).
However, that doesn’t mean the percepts in the PWM are stable. I’m thinking of the duck-rabbit phenomenon here. As in most neural systems, there are probably some top-down influences. You can bias people to see either the duck or the rabbit, either by focusing them on certain parts of the picture or talking about rabbits or ducks before showing them the picture. So while we would have limited control over the PWM, there are probably ways to bias it.
Of course some of these things I’m talking about could be lower-level sensory things, or be part of attention, and so perhaps influence the inputs to the PWM generating mechanism rather than the mechanism itself.