Implicit attitudes and moral responsibility

Let me begin by thanking John Schwenkler for the invitation to blog here at Brains. I’m not really a specialist in philosophy of mind (or in, well, anything) but a lot of my recent work is concerned with mind-y (and brain-y) issues. I find it hard to get excited about a topic unless it has some sort of payoff for thinking about how we ought to live, so a lot of my work is at the intersection of normative issues in ethics and philosophy of mind. In that vein, and because John mentioned some recent work of mine on implicit attitudes, I will devote this post and the next to the question whether agents are (directly) morally responsible for actions caused by their implicit attitudes.*

Appeals to intuitions play a big role in debates over moral responsibility – think of the enormous literature on Frankfurt-style cases (to which I’ve contributed, in a small way). While I think this methodology has its – limited – uses, I am sceptical that it should be used at all when it comes to assessing the responsibility of agents whose actions are partially caused by their implicit attitudes (the set of cases I’m concerned with here are those in which the action/omission would have had a different moral character were it not for the agent’s implicit attitudes). This methodology is common – I think it’s fair to see this recent post by Eric Schwitzgebel as an example. Here’s why I doubt its value.

If our intuitions are designed to track states and processes that are different from the states and processes that caused the action, then we ought to expect our intuitions to be off target in these cases. Now, our intuitions are designed to track those processes that make a systematic and predictable difference to behavior. Folk psychology is right to this extent: it’s beliefs and desires that make that kind of difference. But implicit attitudes are representational states that are not beliefs (and which don’t make a systematic and predictable difference – predictable by folk psychological lights – to behavior).

I have argued that implicit attitudes are not beliefs, by working through some of the data. As Eric Mandelbaum has argued, some of the data can’t be explained by the (common) view that implicit attitudes are mere associations. Rather, they underwrite inference-like informational transitions and transformations. Consider the phenomenon of celebrity contagion. People are willing to pay more for an item of clothing if it has been worn by a celebrity to whom they have positive attitudes. But if the item has been laundered after the celebrity wore it, the value is reduced. ‘Celebrity’ can be washed off. It is hard to explain this result associatively.  People have positive associations with laundry, not negative. To explain the result, we have to suppose that the attitudes involved have been transformed in a way that looks like some kind of nonconscious inference. Mandelbaum has a number of further examples where, again, the results seem hard to explain associatively and instead seem to cry out for an inferential explanation.

Mandelbaum’s case that implicit attitudes are not mere associations is convincing. But to qualify as beliefs, they need to exhibit inferential promiscuity (as Stich puts it). They must underwrite inferences of the systematic and general kind we associate with beliefs. Of course, we shouldn’t idealise the beliefs of ordinary people: some departure from genuine inferential promiscuity is no doubt the normal state of affairs. But too great a departure from inferential promiscuity (or, in Schwitzgebel’s own terms, too great a departure from the dispositional stereotype) and it’ll be clear that we are dealing with some other kind of beast.

That’s just what we see, I think. Just as Mandelbaum maintains, implicit attitudes are not mere associations, because they are pervasively involved in content-driven transitions. But too many of these transitions fall too far short of inferential promiscuity for it to be plausible that implicit attitudes are beliefs. For instance, processes that feature implicit attitudes are pervasively blind to negation. One of Mandelbaum’s own central examples illustrates this: in Rozin’s ‘poison’ experiments, participants were bothered by the label ‘not poison’ precisely because the relevant processes fail to process the ‘not’. That’s a lot of the dispositional stereotype to go missing all at once. And there’s more: evidence of content-driven transitions that cause representational update in ways that are inferential but perverse, failures of update, and updates that don’t look inferential at all but merely associative.

All of this evidence, I think, is good reason to conclude that implicit attitudes are not beliefs. Nor are they any other state that figures in our folk psychology. Rather they are what I call (following a suggestion of Susanna Siegel’s) patchy endorsements: representational states, but with a patchy structure, insofar as they feature in only some kinds of inferences (a subset of those in which beliefs feature plus a set all of their own) and update only in the light of certain kinds of evidence (some of which should not drive update at all).

If implicit attitudes are patchy endorsements, and patchy endorsements neither figure in folk psychology nor are the kinds of states to which the mechanisms that generate intuitions are designed to respond, then we ought to expect our intuitions to be off track when we consider cases in which agents perform morally significant actions which owe their moral character to their implicit attitudes. I suspect that we tend to substitute bona fide – though nonconscious – beliefs for implicit attitudes when we consider such cases. So our intuitions will be systematically unreliable. That, in turn, entails that we had better approach the question of the moral responsibility of such agents in some other way, avoiding, to the extent possible, either consulting our intuitions or having our theories ‘contaminated’ by them. In the next post, I will turn to how I think we might best proceed.

* I apologize for the inevitable typos and thinkos in this post. I have just flown from the UK to Australia. Jetlag and lack of sleep are not highly conducive to philosophy. But I didn’t want to delay this first post any longer.

15 Comments

  1. Thanks for the post, Neil. Thought provoking!

    I think I may have missed something at the end so maybe you can fill in the gaps for me? Your argument against the use of intuitions as a reliable source of evidence in philosophical discourse goes something like this:

    1. Implicit attitudes are patchy endorsements
    2. Patchy endorsements neither figure in folk psychology nor are the kinds of states to which the mechanisms that generate intuitions are designed to respond.
    3. We ought to expect our intuitions to be off track when we consider cases in which agents perform morally significant actions which owe their moral character to their implicit attitudes.
    4. I suspect that we tend to substitute bona fide – though nonconscious – beliefs for implicit attitudes when we consider such cases.
    5. So our intuitions will be systematically unreliable.

    If this is right (I tried to cut and paste as much as possible), then I am curious to hear how you would further support these premises. Because for each of them a bunch of questions come up for me, some of which you may have spoken to and I just missed (likely).

    To 1: What’s a patchy endorsement? It sounds reflective. If reflective, that is, if it’s something that comes to us and we sustain it in some way after thinking about its occurrence, then it seems we *can* be morally responsible given that THAT endorsement could be a result of an exercise of libertarian free will.

    To 2: what are the mechanisms that generate intuitions? Past experience? What differentiates them from mere memories?

    To 3: what’s the necessary connection you are drawing between intuitions in moral cases to implicit attitudes? Is the assumption that such intuitions are the result of implicit biases? Why think that intuitions owe their character to implicit biases in such situations? If anything, I would have thought that non moral situations (or situations *perceived* of as non moral by the acting agent) are much more ripe for implicit bias to creep in to colour our intuitions. For instance, even a racist who doesn’t like brown-skinned folks would still seem to have the intuition to save an innocent brown-skinned baby even if their implicit attitude caused them to be repulsed or whatever it is that racists feel when they are in a position to help another that they dislike for no good reason. Ya, so this is getting a bit long, I’ll wrap up in a sec.

    To 4: Why suspect this? Why not think that a bona fide belief that a person is innocent is what does the work in moral cases where harm to innocents seems apparent?

    To 5: It seems that all of our conclusions are the result of some bias or other, including this one. I am curious to see where you go with this, but I suppose I’ll have to wait for the next post.

    In any event, thanks again Neil. Sorry for the lengthy comment. You just kick ass and compel me to write incoherent questions, so rather than apologize I’ll just blame you! 🙂

  2. Hi Neil. I find that it always helps me to make things concrete, so let me examine what looks like the key passage here:

    “. . . we ought to expect our intuitions to be off track when we consider cases in which agents perform morally significant actions which owe their moral character to their implicit attitudes. I suspect that we tend to substitute bona fide – though nonconscious – beliefs for implicit attitudes when we consider such cases.”

    I don’t understand what sort of beast a nonconscious belief might be, but let’s go on. For a case in which an agent performs a morally significant action that owes its moral character to implicit attitudes, let’s consider trampling on the flag of your country.

    I’m willing to accept that most people, when asked to explain their aversion to flag-trampling, will invent a spurious explanation (though it would be nice to see some evidence). But could you clarify the sort of explanation that might be generated by substituting a “bona-fide nonconscious belief” for an implicit attitude?

    Best regards, Bill

  3. Neil Levy

    Thanks for the comments. I will respond to Bill first, because he might be right that it helps to make things concrete. I don’t find the example an easy one to offer an explanation for, in part because I would be very surprised if people’s implicit attitudes toward their own country’s flag weren’t usually very positive. A second reason I don’t find the case compelling is that I find it hard to see much moral significance in the action – just as someone like Jon Haidt would predict. Mentioning Haidt provides an opportunity to cite his data on cleaning the toilet with a flag. That arouses moral disapproval in a subset of participants (low SES participants, for instance). Though I can’t recall off the top of my head whether people offered confabulated rationales for their opposition in that case, they often do: they cite harms as explaining their opposition, even when the cases are designed so that the harms cited are avoided.

    Implicit attitudes don’t cause behaviour by themselves. Maybe there are cases in which someone is so exhausted or under so much load that they can be the primary cause, but usually their effect is to modulate behaviour which is caused by bona fide beliefs. So imagine the person is faced with a situation in which they must walk over the flag or over an expensive rug. He’s wearing heavy and soiled work boots and has to cross one or the other because he’s engaged in some urgent task. He decides he will walk across the flag rather than the rug, because he thinks that given he will have to soil one or the other, he should choose the shortest route. But that’s a confabulation: had the positions of the rug and the flag been reversed, he would have walked over the flag anyway, reasoning (say) that the mud will wash out more easily from the flag than the rug. This kind of confabulation of reasons has been demonstrated multiple times in hiring choice studies, where people choose (say) male candidates over female and offer the qualifications of the male as the reason. We know it’s a confabulation because when genders are reversed, with qualifications held constant, participants still choose the male and offer his qualifications as the reason for the choice (i.e., the qualifications which were not sufficient for a female candidate uniquely qualify a male).

    I don’t need to imagine that people will explain this kind of behaviour by postulating a nonconscious belief. The philosophical literature is full of people explaining these kinds of cases by citing such a belief. “Whether he knows it or not, he must believe that his country is contempt-worthy”. But the evidence suggests that when this kind of behaviour is caused by an implicit attitude, it is not caused by a state with the dispositional profile of a belief; i.e., a state with anything approaching inferential promiscuity. It might just be an association. So in the job examples, the role might be one that is stereotypically male, and that is sufficient to bias conscious assessment of the qualifications of the candidates. Since a male just seems more fitting than a female in the role, the person tends to raise evidential standards when it comes to assessing the qualifications of the female. In the flag case, we have to postulate an unusual kind of learning history (which is why I find the case a difficult one). As I said, there are possible cases. I was raised in Apartheid era South Africa, and I might have had negative implicit attitudes toward the old flag (I fear I didn’t; I hope I did). I might consciously reason as follows: I don’t want to disrespect the feelings of those who hold the flag in high regard. But given that I have to walk over the flag or the expensive rug, I should….

    This has been a long comment; I’ll reply to Justin in a separate one.

  4. Neil Levy

    Thanks for the comment, Justin. Your reconstruction of my argument is excellent. I will reply to your queries in order.

    1. I wonder whether the phrase ‘patchy endorsement’ has misled you. Implicit attitudes are ‘endorsements’ because they have assertoric content, rather than because the person has endorsed them. That said, I was careful to limit my discussion to direct moral responsibility. People can and do discover the content of their implicit attitudes (everyone should do an IAT). When they do, they might acquire an obligation to do something about their IAs; failure to do anything might ground moral responsibility. As a matter of fact, I am sceptical that there’s a lot we can do to eliminate unwanted IAs, but we can make progress in that direction.

    You can do an IAT at project implicit – https://implicit.harvard.edu/implicit/takeatest.html

    2. Intuitions have a variety of sources. I suspect that they’re not a natural kind at all. In the paper I linked to, I argued that central intuitions, when it comes to blaming others, are innate (in some sense of that word: we have them because they played an adaptive function). But I think that intuitions can be acculturated too. I understand an intuition as an intellectual seeming. Presented with a case, it seems to us that the person who features in it did wrong. We can’t always be retrieving the seeming from memory (though we certainly do that too) because the case may be novel. Rather, it simply strikes us in that way. I am attracted to a Mikhail/Hauser story of unconscious action parsing as generating the intuition, though it works slightly differently when it comes to assessing blameworthiness rather than wrongness. Assessing wrongness is the preliminary stage; we then look for evidence that the agent believed and/or desired to produce that wrong outcome.

    3. While there are interesting questions about the relationship between IAs and intuitions (perhaps – some – intuitions are also patchy endorsements; perhaps they bias conscious judgment in the same way as IAs) I am not assuming that here. My contention is rather that our intuitions about blameworthiness are generated by systems designed to track beliefs and desires as they are expressed in action (and that they do this quite well), but that they misfire when the action owes its character to an implicit attitude. Belief/desire psychology works pretty well, as a predictive and explanatory framework. But when there is room for confabulation and IAs are relevant (when is there room for confabulation? Roughly, when the reasons at stake are sufficiently close in weight that a biasing in one direction or another may be decisive), IAs may offer the contrastive explanation, but the mechanisms will respond as though beliefs and desires were responsible for the contrastive fact.

    I’m not sure what you’re getting at with 4. As I said in response to Bill, there are plenty of cases in which philosophers have offered a belief (or a desire) as an explanation of behavior in these cases. I guess I would row back a bit from the claim that people tend to offer a belief-based explanation, though – it would be better to say that they will offer a folk psychological explanation, citing beliefs and desires.

    5: bias is pervasive, of course. But we’re not condemned to the limitations of our biases. The literature on motivated reasoning is more dispiriting here than the literature on IAs, because motivated reasoning seems more powerful. It can make a difference of quite extraordinary strength, whereas IAs merely tip the balance one way or the other. I think we can get beyond our biases by reasoning together. I don’t mean that in a trippy-hippy way: I have adversarial rather than cooperative reasoning in mind. I’m sure you see the limitations of my claims much better than I do.

  5. Hi Neil,

    Great post! Very quick question: have you looked at Greenwald and colleagues’ (2002) “Unified Theory” of implicit attitudes? (https://faculty.washington.edu/agg/pdf/UnifiedTheory.2002.pdf) I only learned about it recently. The paper argues that implicit attitudes demonstrate all kinds of consistency patterns (a la the celebrity contagion example). But Greenwald and colleagues offer an associative explanation for those patterns. Their view provides an interesting counterpoint to the claim that consistency/balance effects are evidence against associative structure.

    -Michael.

    • Neil Levy

      Hi Michael,
      Great post yourself, over at FP. I agree entirely. It’s a testable hypothesis, notice – if participants were to see their participation as giving them an opportunity to display how unbiased they are, we should expect licensing effects on subsequent behaviour.

      I have read the Greenwald et al. paper, though not for a long time. I am sceptical that the account they sketch can explain the full array of the data cited by Mandelbaum and by De Houwer. For instance, how does it explain the flips in valence seen in cognitive balance experiments, where being told that members of groups to which the participant has negative implicit attitudes dislike members of a new group induces positive implicit attitudes in participants toward the new group?

        Thanks Neil. RE: the W&C study, yes, I totally agree that my (and others’) speculation about social desirability demands is testable, and that subsequent licensing effects might be a good way to do it.

        RE: “Unified theory” – I’m not sure either if it can account for all the data, in particular the kinds of flips in valence you and Eric discuss. As you point out in your patchy endorsements paper, though, it’s an open question when those sorts of changes represent processes over IAs themselves.

        In any case, I’m not concerned to defend Unified Theory per se. Also, I think the title is a little misleading, as it’s not a theory of IAs so much as a demonstration of how balance effects might work in implicit cognition. A more complete account, I think, would need to say more about how affective and semantic associations are co-activated in these contexts.

  6. Alan White

    Hi Neil–

    Of course I’m completely out of what few comfort zones I have, but I must admit that mulling over your post and the comments has actually interrupted some sleep! Thanks for this, and pre-apologies for any stupidity on my part.

    To get complex things like this, I usually have to simplify. Please let me know if the following is way off track.

    I see IAs as analogous to hidden reflexes. Say that you go to a doctor, and when the doc taps your knee, you unexpectedly kick very hard, so hard you clock the unwary doctor in the head, knocking the doctor unconscious! You didn’t know you had such powerful reflexes! You at first deny that you have this unusual feature, but your own experiments at home with a friend only result in profuse apologies and an emergency room trip that you have to pony up the co-pay for. You have empirical evidence that there is a disposition inside you that, under certain circumstances, can injure people.

    So what do you do? You can’t remake or rewire your neurology. What you can do is use the new knowledge about your inner trait to avoid reflex-testing circumstances: “Doc, please don’t stand in front of my leg when you use your cute little hammer!” With your empirically founded verification of your reflex, I’d think you would be responsible if you failed to warn those endangered by your reflex under conditions in which it might manifest its considerable wounding power.

    This scenario doesn’t involve underlying states that have, as you say, assertoric content, though under the described circumstances it may still have axiological relevance. (“You won’t like me when my knee-reflex is ‘angry’”, or stimulated – maybe the reaction to stimulated but axiologically relevant non-conscious powers should be termed “Hulkish dispositions”!)

    At any rate, your posts here reflect your usual deep care in thought, careful expression, and the attendant epistemic humility.

    • Neil Levy

      Hi Alan,

      I am very sympathetic to the idea that that’s (at least) one of the best ways to respond to implicit attitudes. There is the famous work on auditioning applicants for symphony orchestras behind screens, so that the committee is blind to gender. Once this was adopted, many more women were hired. That’s an example of screening off the kinds of information that can be expected to trigger implicit bias in people. We should do that, to the extent possible. We should not attempt to deal with the problem through self-awareness or by attempting to compensate for the biases; there is evidence that this just makes things worse.

      We can ‘rewire’ (to use your term) our responses to some extent. It’s controversial how much we can do. In the short term, given forewarning and uncluttered time, we can do quite a bit. For instance, you can engage in counter-stereotypical priming (think of women you admire, for instance). In the longer term, changing your environment can produce longer-lasting changes. For instance, attending classes with predominantly female professors has a long-lasting effect on implicit attitudes (interestingly, attending classes with ‘star’ professors doesn’t seem to help – the Martha Nussbaums or Judith Butlers of this world get recategorised as not representative of the category ‘woman’, and therefore aren’t processed in the kind of way that reduces the implicit bias).

      I haven’t emphasised these approaches for several reasons. One is that I’ve deliberately limited myself to direct responsibility. Another is that I think that, given how our societies are set up, the chances that we can eliminate implicit biases in ourselves (rather than mitigate them) are small, and this is never something we can do on our own. We can, and should, do what we can to avoid triggering the biases, but living in sexist, racist and homophobic societies ensures that even if we are very conscientious, most of us can expect to perform actions that have a moral character partially due to our implicit attitudes, at least on occasion. I have no doubt that’s true of me.

  7. Phil H

    I find the argument a bit hard to follow, in part because I’m stumbling over the empirical examples.

    “‘Celebrity’ can be washed off. It is hard to explain this result associatively.”
    But it’s very easy to explain this result physically – when you wash an item of clothing, the celebrity is literally washed off. Explanations which go beyond that physical level seem a bit otiose to me.

    “in Rozin’s ‘poison’ experiments, participants were bothered by the label ‘not poison’ precisely because the relevant processes fail to process the ‘not’.”
    I think Grice offers a much easier explanation for this. People simply follow the maxim of relevance and assume that saying “not poison” means there is some reason (unknown to them) why they ought to think that the food is poisonous.

    Then I wonder about the explanatory power of “patchy endorsements”. If we’re unable to say anything in advance about which patches are endorsed and which are not, then does this concept tell us anything at all?

    If I’m understanding it correctly, the question you’re addressing in this post is whether implicit attitudes are like associations in that they do not support inferences, or are like beliefs in that they do support inferences. That seems fairly straightforward and testable to me: measure attitudes toward two things (to take a completely non-political example, let’s say snorting coke and smoking crack); then give stats implying that one is more prevalent among a certain ethnic population than another; then measure attitudes again. It seems to me that this experiment has been run on a grand scale, and the answer is in.

    • Neil Levy

      I’m not really following you, Phil. How does the availability of a physical explanation for something else (actually washing clothing washes out dirt) explain how an imaginary example is processed? We are attempting to explain how mental processes work; presumably they don’t work by tokening a representation of a dirty piece of clothing and spraying water over the representation.

      In the Rozin experiment, the participants knew that the jar labelled “not poison” contained sugar. They had watched both jars being filled from the same sugar bag, prior to the labels being attached. So they knew that there was nothing wrong with the sugar. Nevertheless they were bothered by the labels. There is plenty more such evidence (lots from Rozin’s lab). For instance, people are much less likely to eat fudge if it is shaped like a dog turd, though they know – explicitly – that it is fudge (think of it yourself).

      The example you give is, it seems to me, neutral between the associationist and the propositional accounts. Do I think that crack smokers are somehow worse because I associate negative properties with them or because I infer that they have negative properties? Note that you can’t refute my view by citing a single relation between two properties, since my view is a mixed view. You have to show that all the relations are propositional, or that they are all associationistic, to refute my view.

  8. Alan White

    Thanks Neil–so maybe we can rewire higher-order systems to compensate for inopportune non-rewirable lower order systems on an individual level, and rewire the lower-order ones on a transtemporal social level that motivates moral progress. I can easily buy into all of that.

  9. Phil H

    “How does the availability of a physical explanation…explain how an imaginary example is processed?”
    Because our imaginations are strongly conditioned by our physical experiences. When I imagine kicking a ball, the imagined process is very much like the physical process of kicking a ball. When someone imagines the nature of a piece of clothing their imagination will correspond very closely to the physical nature of clothing in the real world. And it is a fact about clothing in the real world that before washing it has a bit of the wearer on it; after washing it doesn’t.

    In the Rozin example, you made a specific language-related claim: “the relevant processes fail to process the ‘not’”. I’m just saying that there’s no basis for that. There are existing, known language rules which bring about that effect without any “failure”.

    I see your point on my example, though. It can be read as neutral.

    • Neil Levy

      Phil, my target is the nature of the processes distinctive of implicit attitudes. I think it’s very plausible that the contents of mental states are influenced by exemplars of physical processes, but there’s no evidence that the processes themselves are sensitive to these features. Keep in mind, too, that the aim is to produce an account that explains not this or that experimental datum, but a wide array of data.

      That last point is important when it comes to interpreting the Rozin experiment. My claim that implicit processes are blind to negation is not a speculative hypothesis introduced to explain a particular result. Rather, it is supported by a wide array of experimental evidence. See, for instance, Wegner 1984; Deutsch, Gawronski & Strack 2006; Hasson & Glucksberg 2006. Just to mention one piece of supporting evidence, you can’t prime with negations (“not good” primes “good”).
