Implicit bias and moral responsibility: assessing responsibility

This post continues on from the last. I’m going to assume the claims I made in that post – in particular, that implicit attitudes are patchy endorsements – though in fact what I will say would go through on most rival views (the only extant view on which it would be false is the Mandelbaum/De Houwer view, according to which implicit attitudes are structured beliefs). In the last post, I argued that we must avoid intuition mongering when it comes to assessing the moral responsibility of agents for actions that have a moral character due to the agents’ implicit attitudes. Instead, I suggest, we should construct our account of moral responsibility, in whatever way we judge appropriate, and then pretty mechanically apply it to the cases at hand. We should just churn the cases through the machinery. That’s a departure from the standard methodology, because normally philosophers engage in some sort of process of seeking a reflective equilibrium between their theories and their case-by-case intuitions. If instead we approach the cases mechanically, as I’m suggesting, then we accept the results, no matter how counterintuitive.

Here’s an illustration of what I mean. Suppose you accept a control-based account of moral responsibility. In that case, you ask whether agents possess sufficient control over actions partially caused by implicit attitudes to be morally responsible for them. Fairly obviously, implicit attitudes block what I will call personal-level control, which is deliberate and deliberative. An agent can’t exercise this kind of control over the fact that their action has a moral character due to an implicit attitude when they lack awareness either of the fact that they have such an attitude, or of the fact that their attitude influences their action (in a certain way).
Agents often confabulate what look to be sound reasons for their actions when those actions are caused by their implicit attitudes: falsely taking their action to be caused by certain states of affairs prevents them from satisfying the epistemic conditions on personal-level control.

But there are good reasons for thinking that agents don’t need to exercise intentional control over states of affairs to count as morally responsible for them. Agents lack personal-level control over many essential aspects of their skilled actions (where an aspect is essential if without it the action would not count as exemplifying the skill). Hence the fact that improvising musicians can be surprised by what they play, and a great wit can be surprised by their own jokes. But there’s no temptation to say that they are not responsible for these actions. Control is better understood – in the kind of manner famously urged by Fischer and Ravizza (among others) – as a kind of sensitivity of behavior to reason-giving states of affairs. Agents need not be aware that, or how, they’re exercising control for it to be the case that that’s just what they’re doing.

As Fischer and Ravizza emphasise, control must be appropriately patterned, or systematic. A witty remark is controlled because it is systematically responsive to conversational demands – the mood of the conversation, the details of the content, and so on. Similarly, a great tennis player modulates her behavior in response to fine-grained features of the surface, her opponent, the weather, and so on – responsiveness to reason-giving facts of which she may not be (at the personal level) aware. This kind of patterned sensitivity can be understood as a matter of appropriate content-driven transitions. Belief-like states are, of course, the paradigm of states that figure in such transitions. And we have seen that patchy endorsements fall short of such transitions. But they might nevertheless feature in enough such transitions to count as realizing control. To discover whether they do, we have to look at the data: is the patchiness of patchy endorsements too extensive for the agent to count as exercising control?

I think what we see in the data is a systematic failure of mechanisms featuring implicit attitudes to be sensitive to broad classes of reason: a patchiness of response which suggests that sensitivity to reasons falls well below any plausible threshold for patterned reasons-responsiveness. Moreover, I think the failures of sensitivity actually explain the moral character of the resulting actions.

Consider, for instance, an agent who chooses a less qualified male candidate over a better qualified female candidate because his implicit attitudes bias his selection processes. If he is morally responsible for the fact that the choice is sexist (and moral responsibility requires control), he had better be sensitive to the considerations that make it the case that his choice has that character. If in fact insensitivity to these considerations explains the moral character of his action, he seems to lack the degree of control required for moral responsibility. So if moral responsibility requires control over the character of the acts for which we are morally responsible, implicit attitudes routinely block moral responsibility for actions with a moral character due to these attitudes.

There’s a lot more to say, of course; I’ve tried to say it elsewhere. Moreover, the remarks here are addressed only to control-based accounts (that’s why this is an illustration). These accounts remain the dominant ones, but accounts that belong to the broad family of those Angela Smith calls “updated versions… of ‘real self'” accounts are increasingly influential. In the paper a draft of which I link to in this paragraph (a revised version is forthcoming in PPR), I work through these accounts too. For reasons of space, I won’t summarise the argument here. Notice, though, that if I’m right, an important motivation for favouring accounts in this family, rather than control-based accounts, fails. That motivation is that control-based accounts don’t yield the intuitively correct result in these cases: they don’t hold agents morally responsible for actions caused by these states. I have suggested that these intuitions are likely to be off-track, in this context, and therefore don’t have any epistemic value when it comes to adjudicating theory choice.
