This week, I’m blogging about my new book, The Epistemic Role of Consciousness (Oxford University Press, September 2019). Today, I’ll discuss the epistemic role of consciousness in perception.
Human perception is normally conscious: there is something it is like for us to perceive the world around us. And yet there are unconscious analogues of perception in which our perceptual systems represent information about the external world in a way that makes no impact on conscious experience. For example, patients with blindsight have large blind fields caused by neural damage in the primary visual cortex. When a stimulus is presented in the blind field, many of these subjects do not experience it at all. But when they are forced to guess what it is, e.g. an ‘X’ or an ‘O’, they can often give the right answer with a high degree of reliability. What explains this remarkable ability is the unconscious visual representation of the stimulus in a subcortical region of the brain.
Can patients with blindsight acquire knowledge about the blind field on the basis of unconscious visual information? Over time, they can learn of their own reliability and so come to know that their guesses are probably true. But we tend to think that our perceptual knowledge of the environment is rather more immediate than this. For instance, you can know just by looking whether the lights are on right now, and you don’t need to infer this conclusion from premises about your own reliability. Given this point, a better question to ask is whether patients with blindsight can acquire non-inferential knowledge about the blind field on the basis of unconscious visual information. In Chapter 3: Perception, I argue that the answer is no.
Consider an analogy. Suppose you’re in hospital under a general anesthetic. While you’re under, a neuroscientist implants a mechanism in your brain that registers information about objects outside your conscious field of vision. When you wake up, she holds her hand behind your head and asks you to guess how many fingers she is holding up. To your surprise, the implanted mechanism enables you to reliably guess the right answer. Can you know the right answer without inference? Presumably not, since you have no justification to believe that one answer is more likely to be true than any other. Under the circumstances, the only justified attitude is to withhold belief.
I’m inclined to say exactly the same in the case of blindsight. In case you’re not persuaded, however, here’s an argument designed to bring you around. The first premise is the empirical observation that patients with blindsight tend to refrain from forming beliefs about the blind field until they have learned about their own reliability. They can be prompted to guess, but they are surprised to learn that their guesses are reliable. The second premise is the normative observation that this reaction seems entirely rational. Patients with blindsight are no less rational than the rest of us: theirs is a perceptual deficit, rather than a cognitive one. The third premise is the epistemic principle that if unconscious perceptual information in blindsight provides evidence that justifies beliefs about the blind field, then withholding belief about the blind field is less than fully rational. After all, epistemic rationality requires conforming your beliefs to the evidence. From these three premises, I conclude that unconscious perceptual information in blindsight doesn’t provide evidence that justifies beliefs about the blind field after all.
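The logical skeleton of this argument is a simple modus tollens, and it can be checked mechanically. Here is a minimal propositional sketch (my own illustration, not from the book), where `J` abbreviates “unconscious perceptual information in blindsight provides evidence that justifies beliefs about the blind field” and `R` abbreviates “withholding belief about the blind field is fully rational”:

```lean
-- Illustrative sketch of the argument's logical form (modus tollens).
-- Premises 1 and 2 together support `hR` (withholding is fully rational);
-- premise 3 is `hJR` (if the information justifies, withholding is not fully rational).
theorem blindsight_argument (J R : Prop) (hR : R) (hJR : J → ¬R) : ¬J :=
  fun hJ => hJR hJ hR
```

The formalization makes vivid that the argument is valid; all the philosophical work lies in defending the two premises.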
If perceptual experience justifies belief about the external world, then what’s missing in blindsight? It’s not enough to point to causal differences between perception and blindsight, since these differences can be stipulated away. As we’ve seen, patients with blindsight don’t typically form beliefs about the blind field without relying on premises about their own reliability. But we can easily imagine a patient with super-blindsight who spontaneously forms beliefs about the blind field on the basis of unconscious perceptual information. Still, we can ask, are these beliefs justified? And the answer seems no different from before. Withholding belief is still the only rational reaction. The mere fact that a subject is disposed to form beliefs is not enough to make them rational. Indeed, our super-blindsighter seems irrational to the extent that he is disposed to form beliefs about the blind field without inference.
What’s missing in blindsight, I argue, is phenomenal consciousness. Perception justifies belief about the external world if and only if it is phenomenally conscious. Moreover, perception justifies belief in virtue of its phenomenal character alone. It is solely because of the phenomenal character of perceptual experience that it justifies belief without standing in need of any justification. One consequence of this claim is that we can avoid the new evil demon problem: your perceptual experience justifies beliefs about the external world even when you are deceived by a Cartesian evil demon (Cohen 1984).
Why must perception be phenomenally conscious in order to justify beliefs about the external world? I address this question in the second part of the book, but here is the short answer. Perceptual experience does a kind of double epistemic duty: it not only justifies beliefs about the external world, but it also justifies beliefs about perceptual experience. More specifically, perceptual experience justifies beliefs about the phenomenal features in virtue of which it justifies beliefs about the external world. When you have a perceptual experience in which it seems to you that p in the absence of defeaters, you thereby have justification to believe:
- It seems to me that p in the absence of defeaters [by introspection].
- If it seems to me that p in the absence of defeaters, then I have epistemic justification to believe that p [by a priori reasoning].
- Therefore, I have epistemic justification to believe that p [by deduction].
This means that your perceptual experience cannot justify believing that p without also justifying the belief that you have justification to believe that p.
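The three-step deduction above can likewise be given a bare logical sketch (again my own illustration), with `S` standing for “it seems to me that p in the absence of defeaters” and `Jp` for “I have epistemic justification to believe that p”:

```lean
-- Sketch of the double-duty deduction:
-- `hS`  : S, available by introspection;
-- `hSJ` : the a priori conditional S → Jp;
-- the conclusion Jp then follows by modus ponens.
theorem double_duty (S Jp : Prop) (hS : S) (hSJ : S → Jp) : Jp :=
  hSJ hS
```

This captures only the deductive step; the substantive claims are that introspection supplies the first premise and a priori reasoning the second.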
In contrast, unconscious perceptual information in blindsight cannot play the same dual role as perceptual experience, since it doesn’t justify beliefs about itself. It has no phenomenal features that seem upon reflection to justify beliefs about the external world. If we suppose that blindsight can justify the belief that p, without also justifying the belief that it justifies believing that p, then we must be prepared to countenance scenarios in which you have justification to believe abominable conjunctions of the following forms:
- p and I don’t have justification to believe that p; or
- p and it’s an open question whether I have justification to believe that p.
I argue, however, that you can never have justification to believe abominable conjunctions of these forms. This bolsters the arguments in Chapter 3 that perception must be phenomenally conscious in order to justify beliefs about the external world. What does this mean for the mental lives of zombies? Zombies, like the zombie systems within us, can represent information about the world around them. But zombies cannot acquire knowledge, or epistemically justified belief, on the basis of their unconscious mental representations. And yet perception is a source of knowledge under the right external conditions. Hence, zombies do not perceive the world. If you like, we can say they perzeive the world. But perzeption is no substitute for perception. Although it can help you to navigate the world, it cannot give you knowledge of the external world.
What about the opposite situation, e.g. phenomenal consciousness without knowledge in those acquiring sight after prolonged blindness, or those learning to use sensory substitution devices that present visual information via the tongue or skin? What those individuals acquire with practice are unconscious processing skills (the right “internal” conditions), whose veracity they test in the same way as the blindsighted individual does. And they merely recapitulate (more or less successfully) what we all did in infancy. Children developing in a lawless environment would come to conclude that phenomenal consciousness never offers any warrant.
David, I don’t know what kind of visual experience someone has when acquiring sight after a prolonged period of blindness. I suspect it might take a while before they start having visual experiences of coherent physical objects. Things might look pretty chaotic for a while. But then my view wouldn’t predict that they have justification to form the same beliefs as you or me. My view just says they have justification to believe whatever contents are represented in visual experience.
Hi Declan. I guess I am of the persuasion that a) the contents of sensory experience result from extensive nonconscious processing; b) there are inter-individual differences in reportable access to this information; and c) perceptual learning acts at the nonconscious level. So in your example of different types of blindsight, if the externally validated accuracies are the same, then I would regard them as completely equivalent in terms of warrant. It seems to me that automatic suppression of some percepts (I am listening to tinnitus right now) and metacognitive features like “pop-out” represent induction being carried out by nonconscious neural networks.
If blindsighted and super-blindsighted subjects often form reliable perceptual judgments or beliefs from subliminal inputs, why wouldn’t they have perceptual knowledge? Anything more than this seems to me too demanding! That is why I do not see why phenomenal consciousness has an epistemic function. Subliminal input epistemically authorizes the agent to believe what he perceptually believes. If such a belief is furthermore true, the blindsighted subject knows what he/she believes.
Moreover, the mention of zombies makes little sense in the course of the argument. Zombies may not even be metaphysically possible, because deep down they are not really conceivable (that’s my position). But even if you accept them as metaphysically possible, they cannot have blindsight, precisely because there is no damage in their visual cortex; after all, they are physical and functional replicas of normal humans.
Roberto, my arguments are designed to challenge the thesis that reliability is sufficient for knowledge. Suppose the super-blindsighter believes that p on the basis of a reliable process, but has no evidence that she has any such reliable process available to her. Is she justified in believing the Moorean conjunction, “p and I have no justification to believe p”? That seems really counterintuitive to me, and in the second half of the book, I try to support this intuition with various arguments.
Externalists often worry that internalist theories of knowledge and epistemic justification are too demanding, but I try to address this in the book, e.g. I argue that human infants and non-human animals can acquire knowledge on the basis of perceptual experience.
On zombies: I agree with you that there are functional differences between zombie perception and blindsight or super-blindsight, but I argue in Ch.2 that these functional differences make no epistemic difference. By the way, I’m not assuming that zombies are metaphysically possible. I assume zombies are conceivable, but I’m neutral on whether conceivability implies possibility. My arguments for the epistemic significance of consciousness are designed to be neutral about the metaphysics of consciousness, e.g. dualism versus physicalism. (More on this in Chapter 1.)
So I don’t know a lot about the phenomenology of blindsight, but what if their guesses are made on the basis of some sort of (phenomenally conscious) feel, just not a specifically visual feel? I think that’s what I had pictured: that someone with blindsight would just “have a sense” that the stimulus was one thing or another. Is that right? If it is right, do you count it as seeming to the blindsighter that p, in the relevant sense?
Ash, there are two kinds of blindsight. In type 1, subjects report having no experience of the stimulus at all. In type 2, subjects report having some experience of the stimulus, although it may be strange or degraded in various ways. In the book, I just focus on type 1, since that’s the kind of blindsight that’s most relevant to my argument.
Even so, it’s interesting to think about type 2. If it perceptually seems to the blindsighted subject that p, but in some non-visual way, then I’m happy to say the subject has some defeasible justification to believe that p, although (i) weak seemings provide only weak justification, and (ii) this is defeasible by background evidence that the seeming is unreliable.
If the subject just has a cognitive sense that p, i.e. that’s what he’s inclined to guess, then I don’t think it can play the same epistemic role. Some philosophers (e.g. Michael Huemer) think there are cognitive seemings as well as perceptual seemings that can play the same epistemic role, but I argue against this claim in the final chapter of the book, i.e. Chapter 12: “Seemings”.
Thanks for this interesting blog post, Declan. I wonder what you’d say about the following case:
Suppose you have a super-blindsighter who, in the course of exercising their abilities, comes to associate a feeling of some kind with their inclination to form the accurate visual beliefs they do on the basis of their blindsight. E.g. they might say “I just have this feeling that I should form the belief there’s a cat there… and when I have that sort of feeling, well, what do you know, I’m always right.” But that feeling need have none of the richness of ordinary perceptual experience; it could just be a very minimal kind of cognitive phenomenology that, after a while, comes to indicate to them that they are having a blindsight “experience” of the relevant kind. In that kind of case, would you think they can have non-inferential knowledge on the basis of their blindsight?
The bigger question that’s troubling me in the background is this: what does it really take for a perceptual experience to be phenomenally conscious? Pre-theoretically, I’d think it’s for it to have the rich, florid perceptual character that my experiences do in fact have. But the particularity of that phenomenal character seems incidental to the epistemic role you identify, namely justifying both beliefs about the external world AND beliefs about perceptual experience. That role could be played by something that was really very minimal; it just needs to serve as an indicator that you’re exercising the kind of mental faculty that in fact gets you reliable information about the world… So my worry is that once we unpack or break down the epistemic role of phenomenal consciousness, we find it could be filled by something that doesn’t resemble the ordinary phenomenology of our perceptual experiences… Do you have any thoughts on that?
Thanks for your question, Jessie. I’ve read and learned a great deal from your own work in the philosophy of perception.
I definitely think the super-blindsighter can acquire inferential justification of the following form: (i) I’m inclined to guess that p, and (ii) I’m generally reliable, so (iii) it’s probable that p. But I would deny that feeling inclined to guess that p provides non-inferential justification to believe that p.
We might imagine the super-blindsighter feeling confident or even certain that p, rather than merely feeling inclined to guess that p. But still I’m inclined to deny that these cognitive feelings can provide non-inferential justification to believe that p.
One argument is that these cognitive feelings stand in need of justification: you might be unjustified in feeling confident or certain that p. And your feeling of confidence cannot justify belief unless it is justified by your total evidence, including everything else you believe. So cognitive feelings cannot provide non-inferential justification, since the justification they provide depends on what you have justification to believe.
The upshot is that cognitive experience cannot play the same epistemic role as perceptual experience: it cannot provide non-inferential justification for belief without standing in need of justification. It cannot be what Chisholm calls “an unmoved epistemic mover”. In the book, I argue that perceptual experience plays this role only because it has phenomenal character of the right kind, i.e. it has what I (and you?) call ‘presentational force’. I also agree with you that the degree of justification that perceptual experience provides is proportional to its degree of presentational force. I neglected to mention this in the post, so thanks for drawing me out on this, but it fits with the overall story.
Why not say *we* know that they are justified in believing X (where X is based on the inputs from the device), even if they do not? This seems the most natural move. Just because someone doesn’t know they are justified in believing X doesn’t mean they aren’t justified. I take this to be one of the great insights of externalism in general, and I have trouble not recoiling at all the Cartesian moves going on here.
How do we know that they are justified? Because we know the informational/causal relations obtaining between X and the world, which means that such relations are epistemically relevant, and in some cases sufficient.
Similarly, there is a lot of evidence of unconscious representations that affect current beliefs (e.g., backwards masking and lots of other examples). Also, a great deal of our tacit knowledge, which has strong effects on our occurrent inferences and beliefs, is not accessible even in principle, e.g., because you just don’t have the episodic memories any more for whatever reason (e.g., they were formed at an early age), but the basic cognitive infrastructure is still in place.
I would not want to tell people they are not justified, or not to trust their beliefs, just because they do not have conscious access to every node in some explicit inference. Especially in a more realistic, noisy, probabilistic framework for the evidence relation, this would seem mistaken and anachronistic given the work of people like Dretske, Goldman, Putnam, and Burge, and also cognitive psychology. Conscious access can lead to wrong beliefs (e.g., visual illusions and hallucinations; see, for instance, Dale Purves’s work). All this makes determining whether X is justified much more complicated, messy, and probabilistic, but that seems how it should be, how it has to be.
While I like the project of trying to give an account of the epistemic impact of conscious experience (surely it is important, and neglected), it seems making it a necessary condition is a bit of an overcorrection. Why not just move to a more messy/pluralistic theory of justification?
Eric, you ask: why not say subjects with blindsight are justified in believing there’s an X in the blind field, although they don’t know they are justified? I have two main arguments.
First, subjects with blindsight are perfectly reasonable. And yet they don’t form beliefs about the blind field without relying on inference. But if your proposal is right, then they should. When they consider the proposition that there’s an X in the blind field, they should believe it, rather than withholding belief. And yet they don’t. Your proposal implies that these people are somehow being unreasonable or ignoring evidence that justifies belief. But that seems highly implausible. These are perfectly reasonable people; they just have a deficit in visual consciousness.
Second, your proposal implies that subjects with blindsight should believe abominable conjunctions, e.g. there’s an X in my blind field, but I’m not justified in believing there’s an X in my blind field. But this is surely not the sort of thing you can be justified in believing. You shouldn’t believe things while believing you shouldn’t believe them. That is a form of akrasia, just like having another drink while believing you shouldn’t have one. Akrasia is typically thought to be irrational.
The arguments sketched here and developed in the book are designed to undermine the kind of epistemic externalism you endorse. I look forward to seeing how proponents of epistemic externalism will respond to these arguments!
Declan: Thanks for your reply. As I mentioned I think this basic question and some of the moves you made are right on target and should probably be incorporated into our thinking about the relationship between conscious experience and knowledge: my concern is the scope.
You wrote that my externalist proposal “implies that these people are somehow being unreasonable”. This isn’t quite right: they are just lacking knowledge. I wouldn’t support abominable conjunctions, but humble conjunctions like “There’s an X in my visual field, but I don’t know if I am justified in this belief and frankly don’t understand where it’s coming from.” And this gets back to my claim that it would be a mistake to undermine people’s claims to justification just because they don’t have conscious access to every node in some explicit chain of justification.
Disentangling subject-centered justification from objective justification (the kind you could discover and describe as a scientist studying individual subjects, including non-humans, in the lab) is the bread and butter of people studying sensory processes. In real life, the bifurcation of personal/conscious versus objective justification seems to happen all the time, likely far more than we yet know. E.g., as an extreme example, we could have a subject who takes conscious hallucinations seriously and acts on them: everyone around them knows they are not justified, but they (sometimes) think they are. While conscious experience is sometimes (often?) important, and may even seem subjectively crucial in our folk epistemology, when you step back and look at it from the perspective of someone studying justification in others, it just seems it can’t hold together with our best evidence and theory (especially once you move from the world of propositions deductively justifying other propositions to a more probabilistic view of evidence).
Thanks, Eric. Let me just push back on your replies to my arguments.
On blindsight: here’s why I think you’re committed to the implausible claim that blindsighted subjects are unreasonable. You don’t just say they lack knowledge about the blind field. You say their unconscious perception gives them justification to form beliefs about the blind field, despite the fact that they resolutely withhold belief about the blind field. But in general it’s not reasonable to withhold belief that p when you have evidence that justifies believing that p. To illustrate the point, consider a jury member who refuses to believe the defendant is innocent in the face of evidence that justifies believing beyond a reasonable doubt that he is innocent.
On abominable conjunctions: you’re right that you can fall back on the weaker of the two forms of conjunction that I distinguish in my post, i.e. there’s an X in my blind field, but I don’t know whether I have justification to believe this. But I argue that this is rationally unstable too. It’s irrational to hold beliefs while being agnostic about whether they are justified. If you’re agnostic about whether you have justification to believe that p, then you cannot rationally maintain your belief that p. But your proposal has the implausible consequence that it’s fine for the blindsighted subject to do exactly that.
By the way, I’m not saying you need to know much about the causal structure of your own psychology in order to know that your beliefs are justified. When I recognize a familiar face, e.g. my mother, I know I’m justified in believing it’s my mother, but I don’t need to know how the fusiform face area solves the computational problem of recognizing faces. So I don’t think my view can be dismissed too quickly on the grounds that it imputes too much knowledge of psychological mechanisms.
On your hallucination case: one of the standard challenges to epistemic externalism is the new evil demon problem, i.e. an evil demon could make my beliefs systematically false without thereby making them unjustified by giving me a series of conscious hallucinations. As far as I know, none of the externalists you mention (e.g. Goldman, Dretske, Burge, Putnam) responds by denying that my beliefs could be justified on the basis of such hallucinations.
Declan: thanks for that. The evil demon problem is interesting, but I am not really moved here; I guess I have very strong externalist/reliabilist intuitions. I don’t see how a good reliabilist could say the deceived person is equally justified when it comes to beliefs about the external world/reference/expressions involving demonstrative and singular terms, etc.: the correct causal/informational relations don’t obtain. That problem seems to rely on strong internalist intuitions that I don’t share.
It’s like the converse of super-blindsight, but we are injecting errors instead of information (in the technical sense) into your brain. The informational/causal link has been broken, and therefore so has the evidentiary relation that underlies (some types of) justification. Your experiences have ceased being reliable: if I were to learn I had been in this state, I would say that tons of things I previously believed about what happened were not actually justified even though I thought they were. I was in the Matrix, and the claim that this has zero justificatory consequences seems bizarre to me. But that seems to be a consequence of your argument.
Maybe some philosophers haven’t absorbed their own externalism deeply enough? That said, I would probably push a distinction between narrow and wide justificatory practices, paralleling what we see in philosophy of mind/content. That is, narrowly justified beliefs are those which would survive the demon attack. E.g., beliefs about conceptual/logical/mathematical bits. There is a lot of inference and justification that could go on narrowly.
As I said, I’m a pluralist: I am not making a necessity claim about all types of beliefs (I am not saying all beliefs rely on an informational/causal link with something in the world); I am only arguing against a necessity claim involving conscious experience.
“You say their unconscious perception gives them justification to form beliefs about the blind field, despite the fact that they resolutely withhold belief about the blind field. But in general it’s not reasonable to withhold belief that p when you have evidence that justifies believing that p. ”
Again, the claim is that we (outsiders) know they are justified, but they do not. It seems reasonable that they would withhold belief in that case.
” It’s irrational to hold beliefs while being agnostic about whether they are justified.”
I do this all the time; it seems reasonable to me. 🙂
“By the way, I’m not saying you need to know much about the causal structure of your own psychology in order to know that your beliefs are justified. ”
Yes, I didn’t take you to be saying this, but rather to be arguing that you need conscious access/a perceptual basis for justification, which would be very different. It is the conscious-access necessity claim that I’m arguing against.
After some reflection, I do think the question to pin me to the wall with is, “Can there be false, justified beliefs?” because I’ve been writing as if this is not possible. But that seems too extreme, to say the least. The standard line is that after an informational baptism (where falsity is hard to come by), representational systems can then run offline, detached from their original causes (e.g., Dretske). My hunch is I would need to admit of grades here: someone whose visual system is functioning normally (in the teleological sense) who has a false belief evoked by sensory state S (whether it is conscious or not) is justified. But the evil demon case is not Normal, nor is the Matrix. So I do think I’ll have to revise things a bit. This would still not be to say that conscious experience is necessary, but that I need to relax some of my stronger claims above that made it seem as if false justified beliefs are impossible.