Belief, willpower, and implicit bias

Keith Frankish
Visiting Research Fellow, The Open University

Jo sincerely affirms that black people are no less trustworthy than white people. Yet despite this, she consistently behaves in ways that reflect the assumption that black people are less trustworthy — subtly adjusting her behaviour towards black people across a wide range of contexts. Jo is implicitly biased, though not explicitly so. Experimental studies suggest that we all exhibit implicit biases of various kinds, and everyday life throws up many cases of such apparent hypocrisy.

Implicit biases are often thought of as the result of learned associations, reflecting cultural stereotypes. Perhaps some are like this. But there is reason to think that many are the product of biased beliefs. There is evidence that biased responses on tests can be modified by argument, evidence, and logical considerations, indicating that they are the product of propositionally structured beliefs (Mandelbaum, 2015). Moreover, in everyday cases like Jo’s, biased behaviour often varies with context in ways that are sensitive to the agent’s goals and background beliefs — again indicating the operation of a belief. There is a strong case, then, for saying that Jo believes that black people are untrustworthy. Yet she sincerely asserts the opposite, and sincere assertions are usually taken to manifest belief. Indeed, Jo would avow that she believes that black people are not untrustworthy and would be affronted if we questioned this. In short, Jo seems to have flatly contradictory beliefs.

We might explain this by saying that the two beliefs are of different types and play different roles. The biased belief is unconscious and guides Jo’s unreflective behaviour, whereas the unbiased one is conscious and guides her consciously planned actions (Frankish, 2004). This view might be buttressed by appeal to a dual-system theory of reasoning, according to which behavioural control can switch between a fast, automatic, intuitive system (System 1), which generates default responses, and a slower, controlled, reflective system (System 2), which can produce more considered responses (e.g., Evans, 2011; Evans and Over, 1996; Kahneman, 2011; Stanovich, 2004).

This is a relatively comforting picture. We tend to identify with our conscious attitudes, not our unconscious ones. It is the unconscious autopilot that is biased, uncritically absorbing cultural stereotypes, and not us, the conscious pilots, who have reflected on the matter. Moreover, with effort, we (System 2) can override the autopilot and ensure that we behave fairly.

There is a catch, however. For it may be that we do not really have conscious beliefs. There is a strong case for thinking that the only mental states that are conscious are sensory ones and that what we call conscious thoughts are simply sensory images in working memory, typically images of utterances (inner speech) (Carruthers, 2011, 2014). These images may have effects on our behaviour (we ‘hear’ our imaged utterances and respond to what we hear), but they do so only indirectly, through promoting the formation of related unconscious beliefs and desires. If this is right, then all belief is unconscious. We have no conscious access to our own beliefs, and our knowledge of our own minds is derived from rapid but fallible self-interpretation.

This yields a rather bleak picture of implicit bias. Jo believes that black people are untrustworthy and has no countervailing conscious belief that could override it. Her assertions that black people are not untrustworthy are motivated by some other belief, perhaps that social norms require her to make them. (There need be no conscious deceit involved in this. Jo’s knowledge of her own beliefs is derived from self-interpretation, and if she focuses selectively on what she says, she may conclude that she believes that black people are trustworthy. When she ascribes that belief to herself she is not lying, though she is wrong, and if the ascription were challenged, her affront would be genuine.)

I think there is a good chance that this analysis is correct and that implicit bias often involves such unconscious hypocrisy. But even so, the picture may not be as bleak as it seems. For it does not follow that we cannot override our biases. Even if Jo does not have an unbiased conscious belief that can override her unconscious one, she may have other unconscious beliefs and desires that can do the job. Just as a desire to adhere to social norms may lead her to make assertions that belie the biased belief, so other desires may prompt her to suppress the belief’s effect on her nonverbal behaviour.

Here’s how it might work. Suppose Jo reflects on the evidence for thinking that there are no racial differences in trustworthiness. She finds it strong and tells herself (uttering it in inner speech) that black people are no less trustworthy than white people. This verbal act does not itself constitute the belief that black people are not untrustworthy, and it might not produce that belief either. (The opposite belief may be deeply rooted and hard to shift.) But it might lead Jo to form a belief that has very similar effects. For, in the context, Jo’s utterance expresses a commitment. In telling herself that black people are no less trustworthy than white people, she is committing herself to the truth of that view — to upholding it and acting in line with it. (She has, as we say, made up her mind about it.) And the utterance will naturally produce the belief that she has made that truth commitment. If so, then, assuming she wishes to adhere to her truth commitments, Jo will be motivated to act in an unbiased way. The belief about the truth commitment she has made will have much the same corrective effects on her behaviour as the unbiased belief about black people would.

Of course, if Jo retains the biased belief itself, she will also be motivated to act in a biased way; what she actually does will depend on the relative strength of the two competing motivations. Here’s an example. Jo is hiring a builder. Peter and Paul both come highly recommended, but Peter is white and Paul is black. Jo naturally wants to hire the most trustworthy candidate, so, since she believes that black people are less trustworthy than white people, she is motivated to choose Peter. But she also desires to stick to her truth commitments and believes that she is committed to the view that black people are no less trustworthy than white people, and these attitudes incline her to make the decision in an impartial way, such as by a coin toss. Other things being equal, what she actually does will depend on the relative strength of the desires involved. She will choose the coin toss only if her desire to adhere to her truth commitments (or at least to this one) is stronger than her desire to choose the most trustworthy candidate.

In general, where an unbiased commitment justifies one action and a biased belief another, the agent will perform the unbiased action only if their desire to adhere to their truth commitments is stronger than the desire that motivates the biased action. (Where the latter desire is itself a perfectly reasonable one, as in Jo’s case, the former may need to be particularly strong.) We can put this conclusion in more familiar terms. Truth commitments are what we call principles, and the desire to adhere to one’s truth commitments is what we call willpower or strength of character. We can override our implicit biases if we adopt egalitarian principles and have the strength of character to stick to them.

This view supports some distinctive predictions. Suppose we have a subject like Jo who is implicitly biased but expresses unbiased views. If the proposed view is correct, then we should be able to modify the degree of bias they exhibit by manipulating the relative strength of their desire to adhere to their truth commitments. We should be able to reduce their bias by offering reminders of the reasons for making the relevant truth commitment and by priming them with suggestions of the importance of commitment, integrity, consistency, self-discipline, and strength of will. And, conversely, we should be able to increase their bias by encouraging them to trust their gut feelings and by priming them to go with the flow, not take things too seriously, and suchlike.

We like to think that implicit bias does not reflect our true beliefs. We may be wrong about this, but it does not follow that we are prisoners of our implicit biases. For we can make commitments to better views, and, if sufficiently strong willed, stand by them and constrain ourselves to act fairly.


Carruthers, P. (2011). The Opacity of Mind: An Integrative Theory of Self-Knowledge. Oxford University Press.
Carruthers, P. (2014). On central cognition. Philosophical Studies, 170(1), 143–162.
Evans, J. St. B. T. (2011). Thinking Twice: Two Minds in One Brain. Oxford University Press.
Evans, J. St. B. T. and Over, D. E. (1996). Rationality and Reasoning. Psychology Press.
Frankish, K. (2004). Mind and Supermind. Cambridge University Press.
Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Mandelbaum, E. (2015). Attitude, inference, association: on the propositional structure of implicit bias. Noûs. doi: 10.1111/nous.12089
Stanovich, K. E. (2004). The Robot’s Rebellion: Finding Meaning in the Age of Darwin. University of Chicago Press.