4. An Ethics of Spontaneity

Section 3 of The Implicit Mind asks: how can we improve our implicit minds? What can we do to increase the chances that our spontaneous inclinations and dispositions get it right rather than act as conduits for bias and prejudice? It is tempting to think that one can simply reflect carefully upon which spontaneous inclinations to trust and which to avoid. But in real-time, moment-to-moment life, when decisions and actions unfold quickly, this is often not plausible. How then can we act on our immediate inclinations while minimizing the risk of doing something irrational or immoral? How can we enjoy the virtues of spontaneity without succumbing to its vices? A poignant formulation of this last question surfaced in an unlikely place in 2014. In the climactic scene of Warner Bros.’ The Lego Movie, with the bad guys closing in and the moment of truth arriving, the old wise man Vitruvius paradoxically exhorts his protégé, “Trust your instincts . . . unless your instincts are terrible!”

I begin the section with a negative answer: we won’t improve the ethics of our implicit minds by deliberation alone. That is, reflecting on the reasons we do things is insufficient for acting in more ethical ways when we act spontaneously. Others have made similar claims, notably Nomy Arpaly and Tim Schroeder, as well as Peter Railton. I make what I think is a stronger claim than theirs, though, by arguing that deliberation is neither necessary nor always recommended for ethical action. I do so by going beyond Arpaly’s “inverse akrasia” cases, in which a person acts rationally despite acting against her best judgment. In these cases, while the agents in question (such as Huckleberry Finn) represent apparent counterexamples to the claim that deliberation is always necessary for rational action, they plausibly would have been better off had they simply been better deliberators. In the best-case scenario, that is, Huck would have both felt spontaneous revulsion at the idea of turning Jim in and believed that slavery is immoral. I introduce cases, however, in which even perfect deliberation itself can undermine ethical action. I draw here on previous work of mine with Alex Madva, as well as work by Jason D’Cruz, on the costs of deliberation, even perfect deliberation.

For example, D’Cruz imagines Moritz, who is riding the train from Berlin to Dresden but decides at the last second to skip his stop and ride on to Zittau, despite thinking that he probably should get off in order to stick to his plans in Dresden. “As the train starts, [Moritz] feels an overwhelming euphoria come over him,” D’Cruz writes, “a powerful sensation of freedom that, in retrospect, easily outweighs his other reasons for going to Dresden” (2013, 33). Moritz has good reasons for skipping his stop, assuming that feeling free and euphoric contributes to Moritz’s desire to lead a good and happy life. But, D’Cruz asks, consider the outcome had Moritz deliberated about this choice. He might have thought, “If I continue to Zittau, I will feel an overwhelming sense of euphoria and freedom.” Had Moritz contemplated this conditional, he would have rendered it false. The problem with deliberation, in a case like this, isn’t just that it can be distracting or that it isn’t feasible when you must make a quick decision. The problem is that sometimes spontaneity itself is what we value.

As in previous sections of the book, I then try to show how these points apply across the spectrum to both the virtues and vices of spontaneity. Here, I review the extensive and growing literature on the self-regulation of bias. I begin by considering the role of self-reflection and reasoning in bias-reduction interventions of various kinds. While crucial in some contexts—such as understanding the wrongs of discrimination—it is often unclear what role deliberation is really playing in changing one’s biases and behavior. And, more worryingly, there is evidence that reflecting on our biases can sometimes lead them to become more entrenched.

So, the final part of the book develops an alternative: repurposing terms from Daniel Dennett (and admittedly using them in ways quite different from his), I consider characterizing the self-regulation of our implicit attitudes as a way of taking the “intentional stance,” the “physical stance,” or the “design stance” toward ourselves. Roughly, these mean treating our implicit attitudes as if they were things to be reasoned with, things to be manipulated as physical objects, or things to be re-engineered. Because changing our implicit attitudes in fact requires a hybrid of all three of these approaches, I coin a new term for the most effective strategy. In attempting to improve our implicit attitudes, we should adopt the habit stance. This means treating these mental states as if they were habits in need of (re)training. I provide an architecture of the most empirically well-supported techniques for (re)training our spontaneous inclinations and automatic responses, focusing on (a) pre-commitment to plans, (b) attending to context, and (c) practice. These recommendations are culled both from psychological research on self-regulation—including the self-regulation of bias, health behavior, addiction, and the like—and from research on coaching and skill acquisition. I consider objections to each of these recommendations, and then I discuss the broader worry that adopting the habit stance is like cheating our way toward ethics (i.e., that adopting it somehow denigrates our status as rational agents).

The Implicit Mind leaves many questions open, for sure. I hope you’ll help me recognize and address them!

References

D’Cruz, J. 2013. Volatile reasons. Australasian Journal of Philosophy, 91(1), 31–40.

2 Comments

  1. Luke Roelofs

Thanks Michael, can I ask you to say a bit more about how the ‘habit stance’ is a hybrid of those other three? And, maybe, here’s a worry I have: the design stance looks like it might swallow up the whole story, or else be circular here? It sounds a bit like ‘we can re-train our inclinations by… treating our inclinations as things to re-engineer.’ I’m not seeing quite the difference between engineering and training. If I know how to ‘re-engineer’ my inclinations, doesn’t that just amount to knowing how to re-train them?

    Also, a more concrete question: what do you think about simply exposing oneself to new things/people/situations, as per the ‘contact hypothesis’ that spending time with members of a group reduces prejudice against them? Does that fall under ‘attending to context’, or is that something else?

    • Michael Brownstein

      Sure, and thanks for all the great questions, Luke.

As I think of it, and describe it in the book, “physical stance-ish” interventions mainly involve rote repetition and practice. Whatever your goals are, they’re the part of skill training that requires “just doing the damn work” (as the swimmer Katie Ledecky’s coach said). They operate on us as if we’re physical objects. For combating implicit bias, an example would be getting more exposure to counterexamples to stereotypes (like posters of women leaders), in order to shift your “statistical map” of stereotypes.

      “Design stance-ish” interventions mainly involve attending to context, social settings, and situations. They’re the part of skill training that derives from an understanding of the kinds of creatures evolution has designed us to be. For example, we’ve evolved to be hyper-attuned to status. Implicit bias affects us differently if we’re in a subordinate or superordinate position in some situation. Effective interventions take this into account.

      “Intentional stance-ish” interventions mainly involve goal-setting and planning. They’re the part that treats us as if our behavior is guided by ordinary folk psychology (e.g., explicit goals, values, beliefs, etc.). I use “implementation intentions” as an example in the book, since they involve pre-commitment to plans for how to achieve our goals.

      I’m not sure if this helps, but I don’t quite see the worry about the design stance swallowing up the others. Each is essential, as I see it, since changing our habits and behavior requires forming the right plans, which are attuned to the kinds of creatures we are, and practicing, practicing, practicing.

      The virtues of intergroup contact hopefully exemplify the way all three elements of the habit stance are essential. One has to choose to interact with people unlike oneself (intentional stance); one has to do it in the right contexts, or (research shows) it can backfire (e.g., interacting on equal-status terms; thus, design stance); and one has to simply do it a lot (once ain’t enough!; physical stance).

      As I emphasize in the book: I say all this with apologies to Dan Dennett, since I know I’m using his terms in ways pretty unlike the way he means them!
