Three Studies That No Moral Philosopher Should Ignore

Robin Zheng
Newnham College
University of Cambridge

In my chapter, I argue that for cases of implicitly biased action, we should set aside questions of responsibility as attributability in favor of responsibility as accountability. As I interpret the distinction, the former constitute a problem in metaphysics and philosophy of action, because we are interested in when and under what conditions an action reflects the exercise of a person’s (moral) agency; the latter, by contrast, represent a moral and political problem, because we are interested in what sorts of burdens and duties it is appropriate to assign each of us in virtue of membership in a moral community. These two “routes to responsibility” eventually converge, because the practices traditionally associated with responsibility, e.g. blame and punishment, are relevant to both sets of questions, albeit in slightly different ways. Along the attributability route, we deserve blame and punishment directly on the basis of actions that reflect how well or badly we acted qua moral agents. Along the accountability route, however, the primary concern is sorting out the costs of repairing harm, irrespective of how the agents involved acted; whether they deserve blame or punishment on top of that is thus a further question that we might or might not ask (depending on various other moral and pragmatic considerations). So in the case of implicitly biased actions, I propose, we can stop short of asking whether those biases are attributable to a person in a way that would render her blameworthy, while still holding her accountable for her biased action by assigning her a share of the burdens of compensating victims, changing policies for the future, etc.

Rather than summarize my chapter, which contains arguments for the view above, what I’d like to do in this post is present a few findings on implicit bias that I find highly interesting, and which I think are somewhat less well known. (My slightly exaggerated title is a spin-off of a paper by Jost et al. 2009.)

***
#1 Mere exposure can decrease implicit bias

(Shook and Fazio 2008) “Interracial Roommate Relationships: An Experimental Field Test of the Contact Hypothesis”

In this naturalistic field experiment, Shook and Fazio examined White university first-years who were randomly assigned either a Black or a White roommate, measuring their levels of anti-Black implicit bias with an evaluative priming task during the first and last two weeks of their first quarter at the university. They found that at the end of the quarter, students with Black roommates explicitly reported being less satisfied, less involved, and less comfortable with their roommates than those with same-race roommates. However, they did report lower levels of intergroup anxiety, and their levels of implicit bias actually decreased.

These findings are corroborated by Tam et al. 2006, who found that mere quantity (but not quality) of contact with elderly people was correlated with lower levels of anti-elderly implicit bias in university students. Turner, Hewstone, & Voci 2007 likewise found that while decreases in explicit bias in elementary and high school students were correlated with high-quality White-South Asian interactions, mere exposure was sufficient to decrease IAT scores even in contexts of cross-racial tension.

#2 Implicit bias increases when you’re (or used to being) on top

(Richeson and Ambady 2003, 2001) “Effects of Situational Power on Automatic Racial Prejudice”/”Who’s in Charge? Effects of Situational Roles on Automatic Gender Bias”

In a pair of experiments, Richeson and Ambady tested the effect of situational roles on university students’ implicit racial/gender biases. Participants were told that they would be working with a partner on a computer task, and they were randomly assigned either a superior or a subordinate role (plus an additional “peer” condition in the gender study), along with the instructions: “You are the superior and your partner is your subordinate. Therefore, you will be evaluating your subordinate’s task performance” or “You are the subordinate and your partner is your superior. Included in the role of a subordinate is being evaluated. Therefore, your superior will be evaluating your task performance.” They were then asked to sit for a Polaroid photo and fill out a Profile Sheet with demographic and personal (e.g. activities, hobbies, interests) information. Next, they were given the Profile Sheet and Polaroid photo of their “partner,” which were actually generated by the experimenters and varied by race/gender. Finally, they took a race/gender IAT under the pretense that it was the computer task they were performing with their partner.

What Richeson and Ambady found was that implicit racial bias (amongst their sample: all White females) was higher amongst participants who anticipated working with a “subordinate” Black partner than amongst those who anticipated working with a “superior”; this effect did not appear when participants anticipated working with same-race “subordinates” or “superiors.” Similarly, they found negative implicit gender bias (amongst their sample: all White males) only when participants anticipated working with a “subordinate” female partner; their implicit attitudes toward women were positive for both “superior” and “peer” female partners.

Other studies have also indicated the sensitivity of implicit bias to social context. Lowery, Hardin, & Sinclair 2001 found that participants who interacted with a Black experimenter subsequently showed lower levels of implicit racial bias than those who interacted with a White experimenter, and Castelli and Tomelleri 2008 found that the mere presence of other people also reduced levels of implicit racial bias. Wittenbrink, Judd, & Park 2001 found higher levels of implicit bias against Black faces when they were pictured against the background of an alleyway rather than a church. Similarly, Barden et al. 2004 found higher implicit bias against Black faces than Asian faces in a classroom setting, but the opposite pattern on a basketball court; they also found that Black faces elicited higher levels of implicit bias than White faces when depicted as prisoners, but not when depicted as lawyers.

#3 “Making progress” can license increased implicit bias

(Mann and Kawakami 2012) “The Long, Steep Path to Equality: Progressing on Egalitarian Goals”

Mann and Kawakami studied the effects of feedback about progress on egalitarian goals, using two measures of implicit bias. University students were first given a task in which they were instructed to “try to have positive evaluations of Black people whenever they were presented with an image of Blacks,” and they were told that their performance would be measured according to physiological data (heart rate, blood pressure, etc.) gathered from electrodes. This physiological feedback was fabricated by the experimenters so that, depending on experimental condition, participants believed that their performance was improving or not improving (or, in other conditions, that it was worsening, or they were given no feedback at all). Afterwards, participants engaged in a task in which they were asked to become close to a Black partner (a confederate who was unaware of experimental condition) by answering a series of personal questions; they were asked to move a chair into the cubicle and sit down with their partner. The real behavioral measure was thus how close they chose to sit to their partner, along with an IAT. Mann and Kawakami found that participants led to believe that they had made progress in positively evaluating Black individuals subsequently sat further away from their partner and showed higher levels of implicit racial bias than those in the no-improvement condition. These effects did not appear when participants were asked to positively evaluate White individuals, or when participants believed that their performance was worsening (which resulted in no difference from those given no feedback at all).

These results are similar to those found by Effron, Cameron, and Monin 2009, in which university students who endorsed Obama for president were subsequently more likely to think that a White candidate should be hired for a police force currently undergoing racial tension. Monin and Miller 2001 likewise found that participants who had opportunities to behave in egalitarian ways (e.g. opposing a sexist claim) later felt licensed by their “moral credentials” to behave in more biased ways (e.g. choosing a male applicant for a stereotypically male job) than a control group did.
***

Let me emphasize first that these results underscore the moral importance of efforts to address underrepresentation within educational, residential, and professional contexts. Moreover, the startling sophistication and detail with which implicit biases reflect existing social patterns, hierarchies, and relationships (in other words, their sensitivity to social reality) suggests to me that they should not be understood as things people carry around, which would allow us to sort people into the more and less biased. (At the very least, it suggests that in real life it will be very difficult to determine whether and to what extent implicit bias influenced a specific individual’s behavior on a specific occasion, or what her “baseline level” of bias would be.) I think these results also give us reason to be wary of individual de-biasing techniques, and of morally “credentialing/discrediting” assessments of whether and how biased an individual is. For these reasons and more (see my chapter!), I think we would do well to eschew attempts at ascribing responsibility as attributability for implicit bias, and to focus instead on accountability for distributing the burdens of transforming the structural conditions that breed bias.


One comment

  1. Alex Madva

    Hi Robin,

    Great post! I was wondering if you could say a bit more about how these three sets of important research speak to the various conclusions you draw. I probably agree that measures like the IAT should not be used to sort people, in any conclusive credentialing way, into the biased and the non-biased, but I’m not sure how all of this research speaks to that point.

    For example, re: #3 on backlash effects, moral licensing, etc.: I think one point to make here is that awareness of backlash effects should be *incorporated into* bias-reduction programs. In order to graduate from Implicit Bias University (or, for that matter, Situationism University), you should have to show an awareness that you are not, and probably never will be, totally “cured,” that these backlash effects are likely, that you will be more and less likely to be biased in certain contexts, and so on.

    (Of course, empirical research should explore whether licensing effects can be mitigated, e.g., by warning people about them. Research should also explore how long these licensing effects last. They themselves may be temporary. First you vote for Obama, then you support the white police chief, and then what? The real question, it seems to me, is whether, after a structural or individual intervention, the general arc of people’s behavior across many contexts tends to be more or less biased, not how people act in any particular case.)

    However, the broader point to make is that licensing effects, backlash, etc., seem like a “universal solvent” that can be used to object to any proposal (for virtually any intervention into anything). What you seem to have in mind is someone who thinks “Ok, I reduced my own personal biases by this technique. My IAT score now shows no bias. My work here is done” and then goes on to act in more biased ways (and doesn’t work for structural reform, distributing benefits and burdens, and so on).

    But what about someone who works successfully for important structural reforms? Won’t they *also* be likely to think, “Look, we achieved our goal. Our work here is done,” and then go on to act in biased ways (and not fight for further structural reform, or try to improve as an individual, etc.)? (Short answer: yes, e.g., here: https://shapiro.psych.ucla.edu/Papers_files/Presumed%20Fair%202013.pdf .) The risk of complacency is inherent, it seems, in any (even partly) successful effort at anything, whether it’s individual debiasing or structural reform against discrimination… or healthy eating or structural reform toward accessible healthcare, etc. The risk of complacency is arguably written into the nature of goal achievement. So I don’t see how this point cuts any specific ice against using the IAT as a measure of individual biases or against individual debiasing techniques or “certifications.” *Any* proposal for improving anything should try to incorporate efforts to prevent licensing, backlash, and self-satisfaction.

