The Limitations of Implicit Bias

This post about epistemic injustice and implicit bias by Susanna Siegel is the third post in this week’s series on An Introduction to Implicit Bias: Knowledge, Justice, and the Social Mind (Routledge, 2020). Find the other posts here.


The first waves of research in psychology surrounding implicit bias claimed to identify a new kind of mental phenomenon in terms that were not sensitive to its subject-matter. On this picture, you could have an implicit bias about the relative value of giraffes over pigs, or Harvard over Yale, and these biases would be psychologically the same kind of thing as an implicit bias favoring white women over black men.

The subsequent focus on implicit bias in philosophy and psychology was often limited, in my opinion, in two major ways.

1. Categorization

First, by focusing on bias as a type of mental state, it directed us away from important differences between instances of bias. You can get a room of people at Harvard to perform their preference for Harvard over Yale on an implicit attitude test, and the category of implicit bias might help us understand quite a bit about their preference. But this “bias” is unlikely to underlie preferential hiring, unlike a racialized and gendered bias against black men or women.

The flip side of this observation is that a “bias” favoring white women over black men is the tip of a cultural iceberg. We need more analytic tools than implicit bias to understand how racialized and gendered forms of domination manifest and operate in an individual’s mind. 

While the psychological category of implicit bias drew attention to the important fact that racialized and gendered forms of domination operate in individuals’ minds, anyone who relied on this category as the main lens for studying them would end up with a distorted picture.

An example of such distortion is in the research question: how can implicit bias help explain the pattern of disproportionately violent responses on the part of police officers to black and brown people? The “shooter studies” are most naturally seen as addressing this question, and several such papers begin by reminding us that Amadou Diallo, a 23-year-old man, was shot by police while reaching into his pocket for his wallet. The participants’ task is deciding whether to shoot, and their choice is binary. There is no “de-escalate” option; one is meant to press the button indicating “shoot” if the target in the experiment (shown on a video) is perceived as “threatening”, and “don’t shoot” otherwise. The experiments aimed to isolate “ethnicity” to see whether police in this setting were more likely to shoot a man they perceived as black or brown, as opposed to white, holding all else constant.

No research dictates the conclusions one draws from it. But by framing their discussion around actual shooting deaths at the hands of police, such as Amadou Diallo’s, and excluding discussion of other factors, these studies invite us to conclude that if bias were found, it would help explain the brutality imposed by the police on race-class subjugated communities, and that if no such bias were found, then police officers are not inclined to treat black men differently. The studies don’t state these conclusions, but they invite them.

Such conclusions would be misleading. The set-up sets aside other factors important to explaining the pattern of brutality to which Diallo’s death belongs, including differences in which communities are policed in the first place. It potentially conflates “shooting” with “responding to potential threat”, when in fact potential threat comes in degrees along many parameters, and different parameters call for vastly different responses. We know that the most brutal responses belong to a culture of policing in the U.S. that includes substantial explicit racism. Together with policy decisions that lead to over-policing of race-class subjugated communities, these factors shed more light on what it would take to reverse the persistent patterns of violence than implicit bias does.

2. Responsibility

A second limitation comes from asking moral questions about responsibility for the biased person’s mental states and the behaviors arising from them. In the context of racism, by asking “Who is responsible, by how much, and for what?”, we make exoneration or blame the main options. Moral responsibility is an important and difficult topic that deserves the space it gets in philosophy curricula. But if we focus instead on how it feels to be on the receiving end of domination, we shift away from scorekeeping about responsibility and toward a fuller understanding of social dynamics. This kind of understanding is more likely to make a person actually take responsibility for the effects they have, whether those effects are unwitting or not.

Despite the pitfalls surrounding the study of bias, the fact remains that there is such a thing and it can influence perception. To study how this influence happens, it is best to consider it from at least three perspectives that call for different methodologies: from the receiving end of biased perception, using cultural analysis; from within the biased perceiver’s mind, using cognitive science; and from the perspective of epistemology. That is what my entry does.


Learn more about the book, including its chapters with implications for criminal justice and policing, from the recent series of blog posts over at Imperfect Cognitions.