Section 3 of The Implicit Mind asks: how can we improve our implicit minds? What can we do to increase the chances that our spontaneous inclinations and dispositions get it right rather than act as conduits for bias and prejudice? It is tempting to think that one can simply reflect carefully upon which spontaneous inclinations to trust and which to avoid. But in real-time, moment-to-moment life, when decisions and actions unfold quickly, this is often not feasible. How, then, can we act on our immediate inclinations while minimizing the risk of doing something irrational or immoral? How can we enjoy the virtues of spontaneity without succumbing to its vices? A poignant formulation of this last question surfaced in an unlikely place in 2014. In the climactic scene of Warner Bros.’ The Lego Movie, with the bad guys closing in and the moment of truth arriving, the old wise man Vitruvius paradoxically exhorts his protégé, “Trust your instincts . . . unless your instincts are terrible!”
I begin the section with a negative answer: we won’t improve the ethics of our implicit minds by deliberation alone. That is, reflecting on the reasons we do things is insufficient for acting in more ethical ways when we act spontaneously. Others have made similar claims, notably Nomy Arpaly and Tim Schroeder, as well as Peter Railton. I make what I think is a stronger claim than theirs, though, by arguing that deliberation is neither necessary nor always recommended for ethical action. I do so by going beyond Arpaly’s “inverse akrasia” cases, in which a person acts rationally despite acting against her best judgment. While the agents in these cases (such as Huckleberry Finn) represent apparent counterexamples to the claim that deliberation is always necessary for rational action, they plausibly would have been better off had they simply been better deliberators. In the best-case scenario, that is, Huck would have both felt spontaneous revulsion at the idea of turning Jim in and believed that slavery is immoral. I introduce cases, however, in which even perfect deliberation itself can undermine ethical action. I draw here on previous work of mine with Alex Madva, as well as work by Jason D’Cruz, on the costs of deliberation, even perfect deliberation.
For example, D’Cruz imagines Moritz, who is riding the train from Berlin to Dresden but decides at the last second to skip his stop and ride on to Zittau, despite thinking that he probably should get off in order to stick to his plans in Dresden. “As the train starts, [Moritz] feels an overwhelming euphoria come over him,” D’Cruz writes, “a powerful sensation of freedom that, in retrospect, easily outweighs his other reasons for going to Dresden” (2013, 33). Moritz has good reasons for skipping his stop, assuming that feeling free and euphoric contributes to the good and happy life he desires to lead. But, D’Cruz asks, consider the outcome had Moritz deliberated about this choice. He might have thought, “If I continue to Zittau, I will feel an overwhelming sense of euphoria and freedom.” Had Moritz contemplated this conditional, he would have rendered it false, for the euphoria and sense of freedom depend on the choice being spontaneous rather than planned. The problem with deliberation, in a case like this, isn’t just that it can be distracting or that it isn’t feasible when you must make a quick decision. The problem is that sometimes spontaneity itself is what we value.
As in previous sections of the book, I then try to show how these points apply across the spectrum, to both the virtues and the vices of spontaneity. Here, I review the extensive and growing literature on the self-regulation of bias. I begin by considering the role of self-reflection and reasoning in bias-reduction interventions of various kinds. While deliberation is crucial in some contexts, such as understanding the wrongs of discrimination, it is often unclear what role it really plays in changing one’s biases and behavior. And, more worryingly, there is evidence that reflecting on our biases can sometimes lead them to become more entrenched.
So, the final part of the book develops an alternative: repurposing terms from Daniel Dennett (and admittedly using them in ways quite different from his), I consider characterizing the self-regulation of our implicit attitudes as a way of taking the “intentional stance,” the “physical stance,” or the “design stance” toward ourselves. Roughly, these mean treating our implicit attitudes as if they were things to be reasoned with, things to be manipulated as physical objects, or things to be re-engineered. Because changing our implicit attitudes in fact requires a hybrid of all three of these approaches, I coin a new term for the most effective strategy. In attempting to improve our implicit attitudes, we should adopt the habit stance. This means treating these mental states as if they were habits in need of (re)training. I provide an architecture of the most empirically well-supported techniques for (re)training our spontaneous inclinations and automatic responses, focusing on (a) pre-commitment to plans, (b) attending to context, and (c) practice. These recommendations are culled both from psychological research on self-regulation (including the self-regulation of bias, health behavior, addiction, and the like) and from research on coaching and skill acquisition. I consider objections to each of these recommendations, and then I discuss the broader worry that adopting the habit stance is like cheating our way toward ethics, that is, that adopting it somehow denigrates our status as rational agents.
The Implicit Mind surely leaves many questions open. I hope you’ll help me recognize and make progress on them!
D’Cruz, J. 2013. Volatile reasons. Australasian Journal of Philosophy, 91(1), 31–40.