Spontaneity can give rise to actions that seem “unowned.” Tremble and shake on a glass platform suspended over the Grand Canyon, and you might ask, “I know I’m perfectly safe, so why is my body acting like I’m not?” Have a friend point out to you that when asking questions at a conference you address the men as “Professor So-and-So” and the women as “Alice” and “Margarita,” and you might think, “that’s terrible, since I really do believe that men and women speakers are due equal respect.” In both cases, maybe because your behavior seems uncontrollable, or because you were unaware of it, or because it conflicts with your considered judgments, what you do conflicts with who you (think you) are.
The same dynamic plays out in cases of virtuous spontaneity. People who have acted heroically and spontaneously—like Wesley Autrey, the “Subway Hero” who jumped on the tracks to protect a man who had fallen while having a seizure—often say that their action came “out of nowhere.” They never thought of themselves as heroic. They “just reacted.” In sports, too, the most exemplary stars often explain their imaginative and innovative moves in the most banal and impersonal ways. “I don’t know how I did it,” Kimberly Kim said, after winning the U.S. Women’s Amateur Golf Tournament, “I just hit the ball and it went good.”
Spontaneity can give rise to actions that seem unowned because many of these actions lack the usual trappings of responsible agency. And yet they are actions nonetheless, not utterly divorced from the persons who do them. The central aim of Part 2 of The Implicit Mind is to make the case that paradigmatic spontaneous actions are “attributable” to agents, and then to consider the ramifications of this claim for questions about moral responsibility and the nature of the self. In saying that paradigmatic spontaneous actions are attributable to agents, I mean that they reflect upon the character of those agents. They license “aretaic” evaluations, like “honorable,” “selfish,” “imaginative,” or “cowardly.”
There is, of course, an extensive literature about the meaning of attributability (and associated terms, in particular “accountability” and “answerability”), about the appropriate conditions of its application, and so on. In wading through this literature, I develop my own not-entirely-novel approach, based on the notion of “caring.” The basic idea is that an action can reflect on your character when it reflects upon something you care about. To care about something is for that thing to matter to you, in some sense. Caring can be quite basic, though; on my view, we can care about things without knowing it, or despite judging that we don’t. Caring is a matter of having motivational and affective ties that persist over time in one’s thoughts, feelings, and actions. The upshot is that spontaneous actions are not just instrumentally valuable or regrettable, or good or bad as such. Evaluations of these actions need not be shallow or merely “grading.” In virtue of them, we may be admirable or loathsome, compassionate or cruel, worthy of respect or disrespect.
This conclusion raises a number of difficult questions, though. I take my stab at each of the following. What exactly is the relationship between “cares” and action, such that an action can “reflect upon” what one cares about? When an action reflects upon what one cares about, is one thereby responsible for that action? If spontaneous actions are attributable to us, does that mean they reveal who we really, truly are, “deep down”? And finally, what role should agents’ considered judgments play when we think about attributability and spontaneity, if actions that conflict with our considered judgments can be attributable to us?
I like this notion of ‘caring’ – but, if you’ll forgive me, what sort of semantics does caring have? I’m thinking of worrying that another person ‘doesn’t really care about me, just about their idea of me’, or whether someone can care about Mark Twain but not care about Samuel Clemens.
(The lurking worry goes something like this: I’d have expected high-level, self-conscious ‘caring about X’ to be the same kind of thing as the caring that implicit attitudes reflect. Plausibly, high-level, self-conscious ‘caring about X’ is intensional, tied to a particular description or mode of presentation of the thing cared about. But it might seem that implicit attitudes don’t have the subtlety to do that – if I ‘care about X’ just in the sense of ‘feeling good whenever I’m around them, even though I don’t realise it’s happening’, that sounds like a mere response to a stimulus. But then it’s not clear that high-level and low-level caring can be directed at ‘the same thing’, since they have such different types of object. Not sure what I think of any of this, just wondering if you have a ready-made answer.)
Super interesting! And sorry, I most definitely do not have a ready-made answer. One thing I could say is that I don’t think implicit attitudes are mere responses to stimuli. For one thing, there’s mutual influence between “high-” and “low-”level processes here. Your explicit goals, beliefs, etc. affect the structure and associations of your implicit attitudes. So perhaps there’s not as much isolation between the target objects of your implicit and explicit mental states as your lurking worry suggests.
Hi Michael. Thanks for writing about your book. I read much of it this summer but didn’t quite finish Chapter 4, where you discuss cares. (And I’ve been on maternity leave for 6 months, so my memory is hazy.) So this is a nice opportunity to ask you a lingering question I’ve had about cares, deep-self theories, and implicit bias. You cite Chandra Sripada’s work on a deep-self theory of moral responsibility. I actually teach this in conjunction with articles on implicit bias in my phil mind class. You can draw out an interesting account of our responsibility for (some) implicit biases from his account. On the view of cares Sripada and you (I think) endorse, cares are relatively stable, unlike desires, which are heavily situationally dependent. We are morally responsible for actions that result from motives that express our cares. Thus, we are morally responsible for behavioral manifestations of implicit bias (and maybe implicit bias itself) if they are an expression of what we care about. But of course we have lots of cares, and many of them conflict in particular situations. How we resolve these conflicts is evidence of which cares are higher-ranking or more central. So here’s the worry. Our actions and motives are *highly* situationally dependent. How we resolve a conflict of cares will vary situation to situation. I’m not claiming there is no consistency for any care, but the situational context is highly influential in how we resolve conflicting cares. And this makes it hard to see which cares are more central, which makes it difficult to morally evaluate behavioral manifestations of implicit bias. So how do we reconcile the context sensitivity involved with acting on cares (and thus determining which cares are more central) with tying moral responsibility to cares?
Sorry this is such a long buildup to a question. (If this were a colloquium, I’d be *that* guy.) But hopefully you can shed some light on how to think about these issues!
Hey Shannon,
Thanks for working through the book and for your question. And hope maternity leave has treated you well – congrats!
I love Chandra’s work, and I think his account of caring and responsibility is terrific. As you say, I draw a lot of inspiration from it. I’m not sure my view and his are completely consistent, but I think you’ve got everything right in the set-up to your question.
I agree that our cares can be internally conflicting, and I also agree that, at least in part, this is due to the fact that our cares are highly context-sensitive. I’m not sure that there is a way to reconcile this with the idea of tying moral responsibility to cares, if moral responsibility follows straightforwardly from what we care about. But maybe it doesn’t follow so straightforwardly.
As I think about it — and honestly, I’m not sure how satisfactory or coherent this ultimately is — caring establishes that moral evaluation is in principle appropriate. It puts you in the zone, so to speak, in which one could, appropriately, blame or praise you. But whether to blame or praise a person, and how to do so, is, I think, a separate question. And that separate question depends on more than just what one cares about. It depends on a whole bunch of questions, e.g., to what standards do I appropriately hold the person who has done the blameworthy or praiseworthy thing? What is my relationship to that person? Etc.
So in some implicit bias cases, I think it’s plausible to think that I (as the person who is biased) am open to others’ moral responses (or “aretaic” responses) if I act in some problematic ways, even if my biases conflict with other genuine aspects of my character. Sorting out the hierarchy of these cares might not be the route to determining how to respond to me, though. You don’t need to know which is the “true” me, or even the “truer” me, I don’t think, to figure out how to treat me. More important, I think, is whether I know (or ought to know) about my biases, whether you are my friend or colleague or whatever, what the effects of various responses to me might be, etc. In the way that I (maybe idiosyncratically) use the lingo, those are all questions about how to hold me accountable, not questions about whether the bias is attributable to me.
Hope that makes sense!
Ah, this is great. Thanks. I haven’t focused as much on responsibility issues, so this is very useful to think about.
Great!
“And yet they are actions nonetheless, not utterly divorced from the persons who do them.”
Who or what is the person who did them, in your view? Is the “person” somehow other than the individual at hand?
No, nothing that fancy. All I mean is that these are not things that happen to people, in a completely agency-undermining way. They are not like brainwashing cases, for example.
I see. Then they’re not really divorced at all, just conflicted (conflicting?) from the point of view of the person experiencing them. Are there really brainwashing cases?
Well, something that I do, but that doesn’t in any way reflect who I am, or doesn’t stem from some responsibility-bearing feature of my identity, might truly be “divorced” from me. Whether I experience a conflict between (for example) what I do, and what I want to do, or think I ought to do, might be a separate question.
There may really be brainwashing cases, depending on how you want to define brainwashing. The fanciful examples philosophers sometimes use might not be real, but arguably a child raised in a highly blinkered environment — say, a cult that deprives the child of basic knowledge of the world, or raises the child with racist ideology — could be said to be brainwashed.