Stereotyping, Rationality, & the Cognitive Architecture of Virtue

Alex Madva
Cal Poly Pomona

Tamar Szabó Gendler (2008, 2011), and subsequently Andy Egan (2011), have argued that implicit biases pit our moral and epistemic aims against each other.  They cite research suggesting that the strength of implicit biases correlates with individuals’ knowledge of prevalent stereotypes, even when individuals reflectively disavow those stereotypes (here’s a relatively recent literature review).  In other words, simply knowing what the stereotypes are seems to make individuals more likely to unwittingly act as if they believed those stereotypes were true.

If mere knowledge of stereotypes leads to biased behavior, should we try to erase this knowledge from our minds?  Surely not: being ignorant of stereotypes would prevent us from being able to recognize when someone is wronged by virtue of being perceived in a stereotypical light.  If, then, we retain our knowledge of stereotypes, must we resign ourselves to acting in unwittingly biased ways?  On Gendler and Egan’s interpretation, we’re stuck in a tragic normative dilemma, with epistemic goods (e.g., the value of knowing about pervasive stereotypes) in opposition to ethical goods (e.g., treating people fairly).

My chapter (in the Implicit Bias and Philosophy volumes edited by Brownstein and Saul) argues that the dilemma here is not tragic.  It is possible to maintain and pursue knowledge of the social world, including knowledge of stereotypes, without that knowledge being internalized in such a way that it spills over into our unreflective behavior and leads us to act in biased ways.  A central, overlooked variable is accessibility, which refers to the likelihood and ease of knowledge being recalled in particular situations.  Mere knowledge of stereotypes does not, by itself, translate into biased behavior.  The trouble arises when that knowledge springs to mind (i.e., is “accessed” or “activated” or “retrieved”) even though it is irrelevant.  For example, on a shooter or weapon bias task, the aim is to quickly identify weapons vs. harmless objects.  Research suggests that many participants are, for instance, more likely to identify harmless objects as weapons when they are held by black, Latino, or Muslim individuals.  On this specific task, social identity is perfectly irrelevant, but the stereotypes are accessed and applied all the same.

Our goal should not be to erase these stereotypes from our minds, but to access them when and only when they are relevant (e.g., when it is important to respond to someone who has unjustly and inaccurately perceived a young black man wearing a hoodie as a threat).  As it turns out, an abundance of research suggests that accessibility can be regulated (the best summary of this research remains this 2010 paper by Gordon Moskowitz: everyone who cares about implicit bias should read it!  Another classic is this 2003 paper by Kunda and Spencer).  Accessibility is, for example, highly goal-dependent.  The other day in my car, I was waiting at a red traffic light.  The green left-turn arrow signal turned on and I momentarily started to take my foot off the brake, even though I wasn’t turning left.  Here, my goals, habits, and antsy mood made the “green light” signal highly accessible and led me to initiate a contextually inappropriate motor routine.

What comes to mind is not just a matter of our local, context-specific motivations.  Accessibility strongly depends on our long-term and deep-seated goals and commitments (whether these commitments are explicitly reported or implicitly manifest).  This is at least one connotation of the phrase “when you’re a hammer, everything looks like a nail.”  Of particular relevance, research suggests that chronic commitments to fairness, egalitarianism, and controlling one’s prejudices reduce expressions of implicit bias.  In this way, although our explicit attitudes, such as our sincere disavowals of prejudice, are not usually capable of extinguishing our implicit biases, they nevertheless exert a powerful influence over whether our implicit biases are activated and applied in particular contexts (Keith Frankish’s earlier post sketches one possible way this might take place).

Gendler, Egan, and, in fact, many social scientists seem to under-appreciate the malleability of stereotype accessibility.  They seem to treat the chronic accessibility of stereotypes as a given, an inevitable consequence of living in a society filled with biased mass media and structural inequality.  But the research clearly suggests that there is much that we (yes, even we puny individuals with finite minds!) can do to influence whether and when social knowledge springs to mind and influences behavior.  We can make headway toward increasing our social knowledge, and toward being less biased, simultaneously.  These cases thus do not involve a tragic moral-epistemic conflict.  (I would not argue that moral and epistemic aims can never conflict.  Plausible examples of conflict include many (actual and possible) experiments on human and non-human animals, which can potentially enhance our scientific knowledge while causing unjust suffering.  I don’t think the case of implicit biases is relevantly similar.)

It is a mistake, then, to treat the accessibility of our knowledge as a mere “brute fact” about our minds.  What we remember, forget, notice, or ignore is closely related to what we care about, and how strongly we care about it.  Appreciating this may have implications for: (1) moral responsibility, (2) the epistemology of ignorance, (3) the cognitive architecture of virtue, and (4) the predictive power of measures of implicit attitudes.

(1) The goal-dependence of accessibility perhaps lends some additional support to, e.g., Angela Smith’s claim that individuals can be morally responsible for the “spontaneous” and “passive” aspects of their mental lives, such as forgetting a close friend’s birthday.  Such omissions of cognition are more plausibly “attributed” to our “real selves” insofar as what does and doesn’t come to mind is a function of our caring.  (See Volume 2 for more on moral responsibility and implicit bias.  I’m not saying that any particular instance of forgetting someone’s birthday entails that you don’t care about that person.  We are dealing with probabilities and patterns of behavior.  There will, of course, also be complications regarding individual differences, contexts, extenuating circumstances, etc.  An absent-minded person, or a person with dementia, does not “care less” about her friends than a Jeopardy! champion who is extremely good at remembering birthdays.  What to say about this person with OCD who is plagued by constant violent fantasies?  May a thousand ceteris-paribus clauses bloom!)

One example of a philosopher who arguably comes close to treating accessibility as a mere “brute fact” of our minds is Neil Levy (2014), who argues that awareness of the morally significant facts of our actions is necessary for being responsible for those actions.  Simplifying a bit, Levy defines awareness partly in terms of “effortless and easy” cognitive access (33).  He writes: “Some dispositionally available representations require a great deal of effort to retrieve and some come to mind unbidden. These differences are prima facie relevant to agents’ moral responsibility; the degree of accessibility of information seems to correlate (roughly) with the degree of moral responsibility of the agent for failing to utilize it” (32).

Although I think Levy is onto something here, such claims seem to treat the easy and effortless accessibility of knowledge as a kind of uncaused cause, which grounds responsibility without itself depending on anything significantly related to an individual’s “real self,” or to factors within an individual’s control.  But the social-psychological research suggests that what we care about, how we live our everyday lives, and all the reason-responsive angels of our nature, are primary determinants of what springs to mind on particular occasions.

(2) Research on the goal-dependence of accessibility may also shed light on the “epistemology of ignorance” (Mills 1997, 18).  This research may help to flesh out the connections between, on the one hand, claims that people in positions of privilege tend to be indifferent toward the disadvantaged, and, on the other hand, claims that people in positions of privilege are ignorant of all the structural advantages and disadvantages at play.  Not caring feeds into not knowing which feeds into not caring, and so on.

(3) Under one guise or another, the importance of accessibility has long been known to philosophers of virtue.  Aristotle argues, for example, that akrasia results when an individual’s knowledge of what to do is temporarily inaccessible.  On John McDowell’s reading (1998, 17-18), the virtuous person’s knowledge of what he ought to do “silences” all competing considerations.  In other words, virtue entails that “the thing to do” in any given situation is accessible while competing considerations are inaccessible.  The virtuous person does not altogether forget these competing considerations; they just don’t come to mind when they are irrelevant.  And it is because they don’t come to mind that the virtuous person does not need to exert effortful self-control to avoid being influenced by those considerations.  (I take it that one doesn’t need to accept the Aristotelian-McDowellian theory of virtue in all its particulars to get the gist here.)  One might also think of Bernard Williams’ “one thought too many” case, regarding the person who has to reflect before deciding whether to save a drowning stranger or a drowning spouse.  Thinking about one’s duty or the moral calculus in such situations involves the wrong thought being accessed.

(4) The (allegedly) low predictive power of measures of implicit attitudes (alluded to in Edouard Machery’s earlier post) might be improved by factoring in accessibility.  We already know that individuals with reliably accessible egalitarian commitments are less likely to act on their implicit biases than others.  Another straightforward possibility is that individuals with chronically accessible implicit biases will be more likely to act in biased ways than individuals with less chronically accessible implicit biases.  However, accessibility is partly a context- and cue-dependent affair.  So it is also possible that person A’s implicit biases are highly accessible in context C but not in context D, whereas person B’s implicit biases are accessible in D but not C.  These possibilities might also help to illuminate what’s going on in the (allegedly) low correlations between different measures of implicit biases.  Different measures involve different cues, which might differentially activate different individuals’ implicit biases.