Rationalization, Belief, and Inference

When you were a kid, did your room ever get really messy? Of course it did. Did its messiness bother you? Quite possibly not. Lots of kids are not at all bothered by their messy rooms. (Question: Does it even look messy to them, or does it look orderly? We may revisit this question in a subsequent post). But parents often take offense at their kids’ messy rooms. They issue stern directives (e.g., “Clean up your room!”). Do the kids say, “My goodness, you’re right. What a mess! I’ll clean it up right away”? No. The typical response is some kind of resistance.

The kid’s situation is ripe for rationalization. They may well offer a reason as to why it really doesn’t make sense for them to clean up their room. “I can’t clean up my room, because… [my <knee, foot, arm, shoulder, nose, etc.> hurts, I have to practice violin, I’ve got to finish my homework, etc.]” But these reasons do not reveal the real reasons they have for refusing to clean up their room, and the kid knows it.

The messy room example illustrates rationalizations offered for an action. In the rest of this post I’d like to raise a few questions about rationalizations of belief, and their relationship to inference.

So let’s switch to an example about belief where there’s a disconnect between the factors that actually explain why a person has a belief, and the reasons they give for the belief. First I’ll give you the example. Then I’ll raise the questions.

To make things spicy, I’ve drawn my example from an experiment about discrimination in hiring. We didn’t need experiments to tell us that people sometimes don’t reveal the real reasons for their beliefs when you ask them. But it’s often important to figure out what the factors leading to specific beliefs are.

Example

My example is adapted from a 2005 study by the psychologists Uhlmann and Cohen, who were investigating the role of gender stereotypes in hiring decisions. (It’s called “Constructing Criteria: Redefining Merit to Justify Discrimination,” Psychological Science 16(6), 2005: https://www.socialjudgments.com/docs/Uhlmann%20and%20Cohen%202005.pdf.) One of their studies is about hiring a police chief. (Another is about hiring a Women’s Studies professor, but I’ll focus on the police chief.)

I’m going to oversimplify things a bit in what follows, and I won’t explain the entire experimental set-up. The gist is that you’ve got to hire a police chief. The candidates have two types of qualifications: streetwise qualifications (such as experience policing crime-ridden neighborhoods, and administrative experience) and formal education. Participants in the study have to rate the strength of each candidate on each set of qualifications (experience vs. education), rate the importance of each kind of qualification, and evaluate whether each candidate would make a good police chief.

Uhlmann and Cohen were looking for the psychological mechanisms of discrimination that would explain why Michael (a male candidate) was consistently rated more qualified than Michelle (a female candidate with exactly the same qualifications). The experiment includes some candidates who have strong streetwise qualifications but little formal education, and other candidates with strong formal education but little policing experience. When evaluating these candidates, participants had to decide how important the relative strength of each characteristic was.

The experimenters were testing the hypothesis that ratings of which qualification is more important are unstable, varying with whether the candidate is male or female. And that is what they found. When Michelle has more education than experience, experience is rated as mattering more for succeeding as a police chief. When Michael has more education than experience, education is rated as mattering more. You can guess the rest. When Michelle has more experience than education, education is rated as mattering more for succeeding as police chief; when Michael has more experience than education, education is rated as mattering less.

Where’s the rationalization in this picture? Apparently, people reach the verdict that Michael would make a good police chief and should be hired. They don’t reach either verdict about Michelle. <Oversimplification alert – but that’s the gist.> What do they think makes Michael a strong candidate? Going by the ratings in the experiment, the participants’ official reasons are these: he’s got more education than experience, and education counts more than experience for succeeding as police chief.

But these don’t seem to be their real reasons. Q: Why isn’t their real reason this: Michael has more education than experience, and education is more important than experience for being a good police chief? – A: In the analogous “Michelle” condition (where Michelle is just like this Michael, and has more education than experience), participants seem to say that experience counts more than education. What’s pulling the strings in their verdicts about who would be a good police chief does not seem to be their ratings of which qualifications matter more, because those ratings are not stable. It seems to be the stereotype. Their ratings of the relative importance of education and experience are pushed around by the stereotype. And since their verdicts on who should be hired go with those ratings, their verdicts seem to be pushed around by the stereotype as well.

(Side note: you may have heard about the CV studies, in which the same CV is sent to (real-life) prospective employers, sometimes with male names and other times with female names (Dovidio 2000), and male candidates got called back more often. A natural explanation of this result (and the usual explanation, so far as I can tell) is that the same qualifications are perceived as counting for more in male candidates than female ones. So you might wonder whether uneven weighting of the same qualifications happens in this study too. But it doesn’t. Participants rate Michael and Michelle as equally qualified when it comes to education, when their CVs are exactly the same. The effect that Uhlmann and Cohen find seems to stem from the shift in the relative importance of education vs. experience.)
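To make that contrast concrete, here is a minimal toy sketch in Python (my own illustration, not Uhlmann and Cohen’s analysis; the qualification scores and the weighting rule are invented for illustration). It shows how a stereotype that merely shifts the relative importance of the two qualifications can favor Michael over an identically credentialed Michelle, even though each individual qualification is rated the same for both candidates.

# Toy model only: the stereotype shifts the importance *weights*,
# not the qualification ratings themselves. Numbers are invented.
candidates = {
    "Michael":  {"gender": "male",   "education": 9, "experience": 4},
    "Michelle": {"gender": "female", "education": 9, "experience": 4},
}

def importance_weights(candidate):
    # "Constructed criteria": whichever qualification the candidate is strong in
    # gets the higher weight if the candidate is male, the lower weight if female.
    strong = "education" if candidate["education"] >= candidate["experience"] else "experience"
    weak = "experience" if strong == "education" else "education"
    return {strong: 0.7, weak: 0.3} if candidate["gender"] == "male" else {strong: 0.3, weak: 0.7}

for name, c in candidates.items():
    w = importance_weights(c)
    score = w["education"] * c["education"] + w["experience"] * c["experience"]
    print(name, round(score, 1))   # Michael: 7.5, Michelle: 5.5

Identical credentials and identical per-qualification ratings, yet the weighted verdict favors Michael. That, roughly, is the pattern described above: the verdict tracks the shifted weights rather than the qualification ratings.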

Questions

Here are three questions about belief and inference raised by this sort of example.

Q1. What do the participants believe about whether education matters more than experience for being a good police chief?

Q2. Assuming that they reach the conclusion that Michael would be a good police chief by inferring it from the information they get from his file, what are the components of their inference?

Q3. If they don’t reach the conclusion that Michael would be a good police chief by inference, how do they reach it?

On Q1: What do the participants believe about whether education matters more than experience for being a good police chief?

This is the kind of thing that people have beliefs about. And those beliefs seem on the face of it like beliefs that we can find out about by asking people. But when U&C ask people in the experiment, they do not get a straight answer. Some options:

A1: We have no idea from the experiment, because a privileged setting of reflection determines belief. There is a privileged setting of reflection. If we got the participants to reflect on what they think in that setting, then they could say honestly what they believe about whether education is more important than experience for being a good police chief. Because the experimental set-up is so unlike the privileged setting, participants can’t access their own antecedent beliefs and can’t adjust them or form new ones about this topic. And for the same reason, experimenters can’t elicit accurate reports of what participants believe, except by accident (if the rating happens to coincide with what the participants would generate in a privileged setting of reflection that determines what they believe). So we have no idea from the experiment what participants believe about the relative importance of education vs. experience for being a police chief.

A2: They believe what they say, because the shifty, stereotype-influenced setting determines belief. Whatever the participants say they believe about the relative importance of education vs. experience in the experiment is what they believe. What they believe about this matter is sensitive to other things they are thinking and other information they have – such as whether they’re evaluating candidate Michael or candidate Michelle for police chief, and whether that candidate has more education than experience, or vice-versa. When the stereotype ‘pushes around’ their beliefs, it does so through reasoning: the participants unconsciously reason that male candidates tend to be more qualified, and infer that whichever qualification Michael has more of (experience or education) is the qualification that matters more.

A3: They have multiple sets of beliefs about the same topic, sometimes contradictory and other times duplicative. The participants have two beliefs: one tied to the local circumstance of their reasoning, and the other tied to other circumstances (either a ‘privileged setting’ or something else). For example, suppose S believes in the privileged setting (or would believe on reflection) that experience is more important than education. In some local circumstances (e.g., where Michael has more experience than education, or where Michelle has more education than experience), she has this belief again. She has the same belief twice over! In other cases, her beliefs in other local circumstances are at odds with her belief in the privileged setting (what she would believe on reflection), and then she has contradictory beliefs. E.g., when S believes in the privileged setting that experience is more important than education, but Michael has more education than experience, she has contradictory beliefs. (The case of contradictory beliefs has some resonance with what Eric Schwitzgebel calls ‘in-between believing’.)

I think each of these proposals is problematic, but each is also somewhat plausible in its own way. I’ll save further discussion for the comment thread or a subsequent post. Please comment on any of the options (or on anything else) if you’re interested.

On Q2: Assuming that they reach the conclusion that Michael would be a good police chief by inferring it from the information they get from his file, what are the components of their inference?

If you’re a participant in this experiment, you go through what seems like a paradigmatic case of reasoning. You consider how well qualified each applicant is, how important the two kinds of qualifications are, and you make a judgment about who should be hired. It would be natural to expect this process to include some explicit reasoning leading to the ultimate verdict about who should be hired.

It’s a bit different in the messy room case. The kid who declares her refusal to clean her room may not be under any illusion about her reasons for not cleaning up. (If the negotiation or discussion goes on too long, she might talk herself into them, though!) Another disanalogy: the kid is (not very successfully) hiding the real reasons from the parents. If the participants are hiding their real reasons from anyone, they’re hiding them from themselves.

Assuming that the participants are drawing an inference to the conclusion that Michael should be hired, what are the inputs to this inference?

A flat-footed answer to this question is that their inference is something like this:

P1. Michael has a good background in education.

P2. Education is more important than experience in being a good police chief.

[Ancillary premises – Michael has no disqualifying features, and his other background features are strong.]

Conclusion: Michael would make a good police chief.

Inference is often characterized as “forming acceptances for reasons of other acceptances” (that formulation is from Crispin Wright in a recent paper on inference in Philosophical Studies, responding to a paper by Paul Boghossian in which he says something similar about what inferences are). Let’s focus initially on belief, rather than ‘acceptances’ in general (‘acceptance’ is a somewhat elusive category). On this standard picture, if you form a conclusion-belief by inference, then at a minimum, your inference from some inputs (such as the beliefs in the premises) explains why you form the conclusion.

Does a belief in P2 explain why the participants draw the conclusion that Michael would make a good police chief? If participants don’t believe P2, then the answer is No.

Perhaps there is some other attitude toward P2 besides belief that is an input to this inference. For instance, maybe they are accepting P2 for the sake of getting through the experiment, but on reflection they wouldn’t endorse it. Let’s call any old attitude toward P2 (belief or otherwise) that they respond to when they reach the conclusion a “P2-attitude”.

But even if the participant has a P2-attitude, that attitude seems incidental to their reaching the conclusion. Like the kid in the messy room, the participants in the experiment do not seem to be voicing the real reasons for their verdict. If Michael had a lot of streetwise experience, then their ratings of importance wouldn’t match P2. They’d rate experience as more important than education. Because it is incidental to the conclusion, it does not seem to explain why they reach the conclusion. It’s just along for the ride.

If the participants are not reaching their conclusion via inference from P1 and P2, what other inferences might they be drawing? Here are some ideas.

A1: Inference from the stereotype. Participants infer directly from “Michael is male” and “Men make good police chiefs” to “Michael would make a good police chief”. If this is their inference, then despite all the trappings of consulting the application, no inferring from P1 and P2 is going on at all.

A2: Two inferences (re-basing). Participants initially make the inference in A1. Then they ‘re-base’ their belief that Michael would make a good police chief on P1 and P2.

A3: No inference: They reach the conclusion without making any inferences at all.

This is long enough already so I’ll save discussion of the merits and drawbacks of these options for later. Option A3 raises our third question: if participants do not reach their conclusion by inference, then how do they reach it?


31 Comments

  1. Hi Susanna, thanks so much for this great post! I have about a billion questions to ask you, but let me start with this one: Why do you conclude so quickly that the participants’ ratings of the relative importance of different qualifications aren’t at least *among* the “real reasons” for their judgments? I agree with you that these ratings are unstable and influenced by biases, but couldn’t it be that *given* their biases, the participants arrive (in some way or another; this is the concern of your questions below) at these ratings, and once they’ve arrived at these ratings they are efficacious in producing or sustaining their verdicts about whom to hire? (Of course it’s true that their stereotypes are *also* efficacious, but on this analysis that efficacy consists partly in the way they affect the ratings of which qualifications are important.) Maybe you don’t mean to rule this out, but you seem to be doing so in the passage I’ve just tagged. I would love to hear more about it.

    • PS. Okay, so replying to myself in order to clarify my proposal, which was ambiguous in a crucial respect.

      I suppose it should be obvious that an attitude concerning the relative importance of different qualifications can’t have been what *originally produced* the hiring decisions, but rather was something the participants came up with post hoc. Still, it could be among the *sustaining* causes of the hiring preference: once the participants are pressed for a justification, they come up with this, and it’s partly because they believe (or accept, or tell themselves that they believe, or whatever) this claim about relative importance that they retain their belief about whom to hire.

  2. Hi Susanna,

    Maybe I’m jumping the gun, but I sort of thought we had developed a better understanding of how people make decisions, as a result of the work by Kahneman and Tversky and others. People hardly ever make decisions by inferring. They make decisions by listing the factors in favor of each alternative, assigning a weight to each factor, and balancing the weights. We don’t have cognitive insight into that mechanism, so when we are called upon to explain our decisions, the explanations we construct are often spurious.

    I don’t mean to dismiss the question of why people falsely attribute beliefs to themselves. That’s a very important question and I’m happy to see you working on it. But it doesn’t seem likely that an approach that relies on an invalid model of the decision-making process is going to get very far.

    Best regards, Bill Skaggs

    • Bill: it sounds like you are suggesting A3, then, no? A3 seems consistent with a variety of Tversky and Kahneman’s (1974) heuristics and biases.

      Aside: are heuristics and biases necessarily non-inferential? It is uncontroversial that they are rarely consciously accessible, but even associative thinking can be modeled in a way that involves inferences, no?

    • Susanna Siegel

      hi all, thanks for the comments.

      john: it sounds like your option is answer A2 to Question 1. they believe the comparative rating, even antecedently to their verdict, by inferring it from the stereotype. e.g., men make better police chiefs, michael is male, michael has more education than experience, so education is more important than experience.

      on sustaining vs. explaining the formation of the ultimate hiring verdict that Michael would make a good police chief: on answer A2 to question 1, the comparative ratings can explain this verdict.

      on merely sustaining the ultimate hiring verdict, even if it doesn’t explain it: right, it seems very likely that any beliefs the evaluators might have about comparative importance of qualifications play a sustaining role. perhaps there is even a second inference, as per answer A2 to question 2.

      i am interested in this case because on the surface, it has all the features of inference as epistemologists construe it. from the subject’s point of view, they are reaching a conclusion through reasoning, and they appreciate the support that the premises give to the conclusion. once we know more of the psychological backstory, where if anywhere do we locate the inferences they are making? -the post is exploring answers to that question.

      bill skaggs suggests that i’m focused on why people make false self-ascriptions of beliefs. i wasn’t trying to address that question. if they self-ascribe beliefs that correspond to the comparative ratings, then it’s an open question whether those self-ascriptions are false. they might be true, as per A2 to Question 1.

      my main focus wasn’t on self-ascriptions. it is on the relationship between the comparative ratings, the participants’ beliefs (Q1), and the route by which they get to their verdict.

      bill also doubts that the participants are making inferences at all, because t&k have shown that people don’t reason using inferences at all. that seems a bit quick! when people conclude that linda is more likely to be a feminist bank teller than a bank teller, they are drawing an inference from the information they’re given in the story. –i’d be interested to hear if this is what nick byrd meant in his aside when he asked whether associative thinking could be modeled in a way that involves inferences.

      • Hi Susanna,

        Thanks for this really helpful reply. I am inclined to be agnostic among A1, A2, and A3 as answers to Question 1 — that is, I think we would need to know more of the details in the case to determine whether or not the subjects really believe what they say about the relative importance of different qualifications, and whether this belief (if they have it) is stable and precludes conflicting beliefs, etc. There is, I think, lots of room for insincerity and self-deception here, and for attitudes that are extremely unstable.

        You are right, though, that I think something like A2 is plausibly the correct answer to Question 2 — following your colleague Matt Boyle, I think it’s a mistake to see inference just as something that happens at one time and then produces a belief which counts as held for the reasons from which it was inferred; instead, what it is to believe P for the reason that Q concerns the rational state of the subject *now*, not something he or she did in the past.

        However, to the extent that A2 presupposes the story in A1, i.e. that the original belief is formed by inference from the beliefs that Michael is a man and men make better police chiefs, I am more skeptical. This is because I think the attitude about men being better police chiefs is probably an alief rather than a belief, and its role in the cognitive economy that of an eason rather than a reason (oh, how I love this terminology). The same might be true of the attitudes that lead to the re-basing belief (or acceptance, or pretend belief, or whatever). This is a tricky matter, though, and I’m not confident enough about what inferences are to say whether this commits me to saying that these processes are non-inferential. But maybe you can clear that up for me!

      • RE: associative thinking & inferences

        Susanna. Yes, that is what I had in mind. It is not clear that heuristics and biases could not be, at bottom, rule-based. And if this is possible, then it seems that it is also possible that heuristics and biases could be inferential.

        • Hey, now. (I had better tread carefully here, since Nick is my student.) There are lots of rule-based processes that aren’t inferential, at least in the sense that interests epistemologists — e.g., the way the brain forms complex perceptual representations on the basis of sensory stimuli.

          • Ok. I see a mistake in what I said. I shouldn’t commit to there being some connection between the possibility that heuristics are rule-based and the possibility that they are inferential. So I am not *inferring* one from the other — sorry; couldn’t help it. Rather, they might be taken as independent claims. Perhaps, for the purposes of this discussion, we can ignore my claim that heuristics can be rule-based. What matters, as I see it, is that heuristics can be inferential.

  3. @ Nick: You shouldn’t apologize. (I knew I should tread carefully.) Certainly it’s plausible that being rule-based is necessary for being inferential. And maybe that’s all you were saying. Certainly it’s an important question.

    • Susanna Siegel

      Thanks, Nick and John. Many people think inferential processes have to be rule-based. In many transitions that seem paradigmatically inferential, I think it’s obscure what the rule would be. Induction is one example. Judgments involving evaluative concepts such as ‘kindness’ are another.

      Take the case of kindness. You overhear the conversation between the post office clerk and the next person in line, and come away thinking that the post office clerk is kind. You couldn’t pinpoint all the cues you are responding to – the expression on her face, her tone of voice, her friendly and forthright manner, etc. And you couldn’t articulate the generalization that links these cues (which you can’t articulate) to kindness. But it might still be an inference. Now, there should be some modal robustness – in relevantly similar circumstances, you’d draw a similar conclusion. Without modal robustness, we won’t have much of an explanation. But a transition can be modally robust without it being a case of a person following a rule.

      Here’s another complication. Your criteria for kindness might shift across cases, depending on all sorts of factors. Maybe a cheerful mood lowers the bar, so it takes fewer cues in the family to trigger the application of the concept and make you come away believing that the clerk is kind. You can picture other sorts of factors that could raise or lower the bar for kindness.

      It seems natural to me to call the transition you make in this case an inference from various perceptual cues, despite the fact that it’s shifty across contexts, and even though you couldn’t identify the rule you’re using. (To be fair to the rule-following accounts of inference, Paul Boghossian thinks of inference as a form of rule-following, but he agrees that it can happen without the subject being able to identify the rule. However, he seems to think they have to identify the premise-states in the inference.)

      “Inference” is one of those terms that can label a range of different phenomena. The phenomena I’m aiming to capture are epistemically evaluable transitions that bear on the subject’s rationality. I don’t think inferences are always usefully modeled in premise-conclusion form. So Nick might have a narrower phenomenon in mind when he says that they have to be rule-based.

      • These comments are very helpful. Thanks Susanna!

        I think your suggestion is right: I was using ‘inference’ in a narrower sense than you are using it.

        For what it’s worth, however, I do not think that for an inference to be rule-based, an inferrer must be consciously aware (in any capacity, before, during, or after the inference) of any rule-based or rule-like processes. In the end, I am comfortable with all cognition being, at bottom, rule-based even if it turns out that none of our thoughts seem rule-based from the first-person perspective. My intuition about this might explain fundamental differences between my views and others’ views.

        Anyway, I wonder what people think about positing a sort of paradigm inference to explain the Police Chief case. That is, something like the following:

        P1. S’s imaginary paradigm of a Police Chief is male-ish.
        P2. S takes Michael to be more male-ish than Michelle.
        P3. S takes Michael to be more like S’s imaginary paradigm of a Police Chief than Michelle.
        Pn. (…)
        (…)
        C1. Michael, not Michelle, should be the Police Chief.

        I agree with Susanna that inferences are not always best modeled in premise format. I would add that I find it unlikely that much, if any, human cognition follows the exact logical form that philosophers use. Still, I think the premise form above, though incomplete, gives the gist of what I have in mind. And, more importantly, I take it that the paradigm inference could be modeled in other ways (e.g., via the factor analysis that Bill alludes to).

        Also, it seems that this kind of inference captures some of what Kahneman and Tversky (1974) were trying to capture with their heuristics. As I have cast it, the paradigm inference about the Fire Chief case might turn out to be an Availability heuristic. It might also involve anchoring.

        So, because (i) the paradigm inference is like examples from the history of cognitive science, (ii) the paradigm inference could be modeled in ways that are familiar/acceptable to cognitive scientists, and (iii) the details of the paradigm inference need not be consciously accessible, the paradigm inference might satisfy Bill’s worries. Bill? Others?

        (apologies for typographical errors, here and elsewhere)

          • One more thing: the paradigm inference would (iv) also make sense of why participants’ judgments do not, ultimately, turn on consciously accessible evidence like Michael’s or Michelle’s application materials — note this is consistent with the possibility that the application materials factor into participants’ judgment about who to hire, albeit in some minor way.

            • Aaand last question for a while, promise. To help keep track of my comments, I’ll refer to this one as (v):

            It is not clear to me that participants are actually answering the question, “Which candidate would make the best Police Chief?” It might be that, because participants are facing conditions of uncertainty, they answer a different question, the answer to which isn’t uncertain (e.g., “Which candidate is most like your imaginary paradigm of a Police Chief?”). Then, after they answer this new question, they try to resolve any dissonance between their answer and the application materials (i.e., they confabulate reasons involving claims about education and experience). I doubt that I am the first to propose something like this, but I wonder what people think about this hypothesis.

            (apologies for using “Fire Chief” in place of “Police Chief” in other comments)

  4. Zoe Jenkin

    Hi Susanna,

    I’m wondering about how you think the epistemology falls out on the different options you suggest for the psychological story about how participants arrive at the belief that Michael should be hired for the job of police chief.

    On the A1 option for question 2 (stereotype inference), there’s an easy answer if the premise that men make good police chiefs (or that men make better police chiefs than women) is not justified. If the premises of the inference are not justified, the conclusion won’t be either.

    Things get more complicated with the A2 option, though. If we take it that the initial inference does not lead to a justified belief, the 2nd inference could still lead to one. If we imagine that the subject in the experiment has some independent support for the belief that education is more important than experience (or whichever qualification Michael has more of), then her re-based inference could be a good one (I know in the actual case subjects needn’t have this belief to arrive at the conclusion). So we have a case in which a subject has two different routes to the same belief, one of which is an epistemically good one, and one of which is an epistemically poor one.

    Another factor is that the epistemically poor route to the belief somehow prompted the subject to generate the epistemically good one (perhaps because she doesn’t consciously endorse the stereotype belief that is operative in the first one), so there is a tight and asymmetric connection between the two basings of the belief.

    What do you think we should make of the epistemic status of beliefs in cases like this? Does one of the basings take precedence? I think this issue probably generalizes to many different sorts of cases in which we have multiple sources of support for a belief, so it seems like a pretty important one. Looking forward to hearing what you think!

    • I can’t speak for Susanna, but I’d love to take a stab at this.

      It seems to me that while it’s true that there are many different cases of this sort — i.e., ones where an individual first forms a belief on the basis of evidence E1, then holds it later on based on some other evidence E2 — our intuitions about this one are driven by a lot of factors that are specific to the scenario, including the very quick time course of the (possible) re-basing, the fact that the new evidence doesn’t seem to have any independent support, the fact that the subjects seem motivated to appeal to this evidence out of something other than a sincere concern for truth, and the hunch that there may be some kind of self-deception at work (e.g. subjects may not really believe the claim they appeal to as their justification, or may not really hold their preference on this basis, etc.). I suspect that if you told a similar story where some or all of these factors were missing, we’d be more inclined to see the re-basing as potentially rational, and the re-based belief as justified.

      How does that sound to you, Zoe?

  5. Susanna Siegel

    Thanks Zoe and John,

    I think we can distinguish two kinds of re-basing cases. In one kind of case, the two basings are independent, and neither one controls the belief more than the other. (I’ll explain what I mean by ‘control’ below.) Since well-foundedness of belief can come in increments, in principle it could be made worse by the first basing, and better by the second, and the ultimate status would depend on where the two directions end up when they’re aggregated.

    The example from U&C is not like this. If the subject lost the belief that education is more important than experience (or whatever comparison is congruent with their other information about Michael), would they give up their ultimate verdict about Michael? If so, then that belief (I’m assuming for now that it *is* a belief) is controlling the verdict.

    In effect, the experiment tells us what the ultimate verdict about Michael is controlled by and what it isn’t controlled by. The ultimate verdict does not seem to be controlled by: education + (education > experience). Instead it is controlled by the factors in the first inference.

    How does control relate to well-foundedness of belief? If we stick with the one-dimensional picture of well-foundedness, then to get the result that the first basing is the one that determines how well-founded or ill-founded the verdict is, we’d need a bridge principle linking control to well-foundedness in a way that trumps any other factor. And then if other factors pulled in a different direction (e.g., if the non-controlling inference had well-founded premises – e.g. if they had independent grounds for education > experience), we’d have to wheel in the bridge principle to say why the verdict is nonetheless ill-founded.

    I don’t think such a strong bridge principle is needed, because well-foundedness has many dimensions to it, and control is one of them. Even if a subject has independent grounds for (education > experience), in the example their ultimate verdict about Michael isn’t controlled by this comparative belief. It’s controlled by the stereotype. Part of what the basing relation is supposed to be is the relation that explains why the subject has a belief. A subject might offer reasons – even good reasons – for a belief, without those reasons fully controlling the belief.

    • Zoe Jenkin

      Thanks for the replies, John and Susanna. The explanation of the role of control in basing helped make the issues really clear to me. I think what Susanna says is compatible with John’s suggestion, because we could have other re-basing cases in which premises in the new basis are both well-founded and come to control the belief. Perhaps this sort of case would have an asymmetry between the two basings in that one triggered the other, but once the subject formulated those premise beliefs in the second basis, they would continue to be held independently of whether we removed the premise beliefs that were operative in the first inference, and so come to be the ones that control the conclusion. This doesn’t seem to be at all what’s going on in the original hiring case, but it’s helpful (at least for me) to use as a point of comparison for what we’d need a re-basing case to look like in order for it to take over the epistemic burden and potentially give the conclusion belief a good new basis.

  6. Neil Levy

    I want to return to John’s question: why not think that subjects’ judgments are driven by beliefs about which qualifications fit someone to be police chief? Subjects come into the set-up with few pre-existing beliefs about these questions: they have never given much thought to the relative weighting of various relevant qualifications. And these are genuinely relevant qualifications: it is plausible to think that education matters more than street wisdom (it’s police chief, not a beat officer; you need to liaise with city hall, develop policy, etc.). And it is also plausible to think that being streetwise matters more than education (it’s police chief; you need to have policing in your bones). The fact that each is plausible sets the stage for the following process, only the last step of which is inferential: due to the stereotype associating being an effective cop with being male – or even due to the mere association between being a cop and being male, driven by nothing more than a history of exposure to exemplars – the subject is disposed to prefer a male applicant to a female one. But that disposition can’t drive the judgment on its own, because the subject is engaged in effortful reasoning. What happens is that this bias affects the weight the subject gives to the qualifications. Then the subject engages in some honest-to-dog reasoning, inferring from the qualifications to the answer to the question. Subjects were also asked to rate how confident they were in the objectivity of their judgment. Given that they had engaged in faultless inference from plausible premises, they should be reasonably confident (and they were). You can imagine a sincere feminist among them, thinking “gosh I’d really like to say that the woman is the better candidate. Maybe I would if things were close. But what can I do: clearly the man has the more relevant qualification”.

    • Susanna Siegel

      hi neil, thanks a lot. i take the point that if the inference was only from the stereotype (assuming that there’s an inference at all), then there’d be no role for any beliefs about the qualifications. and it seems that beliefs about those are partly controlling the verdict. that makes them explanatory rather than defensive reasons (in the terminology of my second post). if michael didn’t have good education or experience, the participants wouldn’t reach the same verdict – he’d be ranked lower than qualified michael. (i think it’s safe to assume that!) so beliefs that Michael is qualified seem to be explanatory, not defensive.

      that observation raises the question whether we can reconstruct an inferential route to the verdict out of exclusively explanatory, non-defensive factors, where those factors include beliefs about qualifications, but exclude beliefs about their comparative importance (since the comparative beliefs are the defensive ones).

      i’m not sure it is possible to do that. we could list some premises that the participants believe. but the transition from them would be regulated by the bias. whether a subject makes the transition from ‘X has a qualification for police chief’ to ‘X would make a good police chief’ shifts with gender in the same way that the comparative judgments do. no matter how we slice it, the transition has a shifty element. in my initial flat-footed reconstruction, the shifty element was a belief in comparative ratings. in your reconstruction, the shifty element is a transition from non-comparative beliefs to the verdict.

      if we think of the inference in your way, then we need a more fine-grained framework for describing the inference than simply explanatory reasons + control by those reasons of the verdict. contrast two inferences from the same beliefs to the same verdict (X would make a good chief) that differ in what regulates the transition from them. in one case, the transition is regulated by bias. in the other case, it’s regulated by something that yields more consistency across candidates. intuitively, the biased inference is epistemically bad, whereas the consistent one can be epistemically okay.

      the fact that there could be such a pair of inferences shows that the epistemic status of an inference depends on more than just responding to reasons that control a verdict. we need a more fine-grained framework for describing the kind of control of verdict by explanatory reasons, so that we can distinguish the way in which ‘M has a relevant qualification” controls the verdict about Michelle, from the way in which ‘M has a relevant qualification’ controls the verdict about Michael.

      thanks again neil
      -susanna

  7. Susanna Siegel

    hi nick, thanks. your proposal about paradigms strikes me as a way of unpacking answer A1 to Question 2. It’s a way of unpacking the generic ‘men make good police chiefs’ or ‘police chiefs are male-ish’.

    i think Boghossian would agree with you that inferrers don’t need to be aware of the rule they are using. what i think he has to exclude from his account of rule-based inference is the Kindness case, because the premise-states aren’t easily identifiable as such (i.e., as having that role) to the subject.

    i’m interested in your reasons for thinking that (human) inference is rarely if ever deductive. it seems straightforward to fill out the steps of the inference that you identified in a way that would make it valid, without being much of a stretch. all the action is in the content of the premises. what form do you think the route to belief would take, if it wasn’t a deductive inference? if you feel like it, i’d be interested to hear how you think the Availability heuristic works, if it doesn’t work by regulating the inferences we make.

    i agree with you that human inference is rarely explicit. but it could be deductive without being explicit.

    on your suggestion that the participants aren’t even answering the question ‘Who would make a good police chief?’: interesting. they are filling out a questionnaire that asks them this very question (or near enough – the study is here: https://www.socialjudgments.com/docs/Uhlmann%20and%20Cohen%202005.pdf ). Superficially, they are answering this question. were you thinking that superficial appearances come apart from deep psychological facts in this case, in that it is still an open question for the participants who would make a good police chief? Is the idea that people only count as having answers to questions that they’re certain about?

    • Susanna: Thank you for clarifying Boghossian and for being so generous to reply to my comments. Also, thank you for identifying how the paradigm inference fits into the helpful structure of possibilities you have provided. It has been a pleasure to read your post and your comments.

      RE: Inference as Deductive
      I should have been clearer. When I wrote, “I find it unlikely that much, if any, human cognition follows the exact logical form that philosophers use,” I was thinking that what usually happens cognitively during reasoning is not the representation of premises and of inference relations between premises such that further representations — inferences — emerge; the rare exception being, perhaps, when philosophers try to impose logical structure on their thoughts. I am not sure why I think this. I certainly do not have a particular view of representation or cognitive architecture that rules this out. So perhaps this is just an unreflective intuition.

      And I agree that inference could, in principle, be deductive without being explicitly deductive, but — if I am honest — I think I had a stronger claim in mind. My intuition was that even implicit inferences are rarely structured the way ideal arguments are. Maybe I have just been biased by the popularity of cognitive models that do not seem, on their face, to be structured the way analytic philosophers structure arguments (e.g., factor analysis or [insert fancy Bayesian model here]). I am not sure I could give a satisfying defense of this intuition. If I were to try to formalize it, it might look something like this.

      P1. If (a) humans’ implicit inferences were not uncommonly deductive, then (b) humans could successfully deploy deductive inference, knowingly or unknowingly, as needed (even when they are not asked to appeal to deductive reasoning on a particular task).

      While humans are proficient at deduction in some cases (e.g., simple hypothetical syllogisms), they aren’t in others — e.g., we systematically stink at the Wason Card Selection Task (Wason 1977, Evans 1993, Griggs Newstead and Evans 1995, Green Over and Pine 1997) and, anecdotally, undergraduates find the truth table for the conditional operator uniquely counterintuitive. So…

      P2. not (b)
      C1. not (a).

      Now that I’ve had to formalize this, I am beginning to see some obstacles in defending P1. For example, the following counterexample: an inferrer might fail to employ deductive inference in a certain context simply because the inferrer didn’t realize — implicitly or explicitly — that deductive inference would be a viable strategy in their present context. In a recent talk, Larry Jacoby mentioned a poster suggesting that this failure to realize the applicability of a known strategy is a compelling explanation of certain reasoning/learning failures (alas, I have been unable to find the citation). Since some studies control for participants’ deductive competence when analyzing Wason Card Selection Task performance, one might wonder if something like this failure-to-realize-the-right-strategy explanation could significantly predict some of the remaining variance in performance.

      And perhaps P2 is also too quick.

      Currently, I think it would be prudent for me to just admit that I lack a satisfying defense of my intuition.

      RE: Availability
      To be honest, I have not discerned a single sense of ‘availability heuristic’ from my limited exposure to the literature. I am unsure whether this is the result of my poor discernment capacities or some ambiguity in the literature. I wouldn’t be surprised if it were the former.

      Anyway, I think that availability can play a sort of regulatory role, as you seem to suggest. So, when a participant needs to consider some set of Xs, availability might explain why a subject considers only a subset of Xs — say, because only a subset of Xs came to mind at the time of consideration. Similarly — and this might be, at bottom, the same thing — when someone is considering what the essential features of an X are, availability might explain why non-essential features are taken to be essential (e.g., all the instances of Police Chief that quickly came to mind happened to be [human + adult + male-ish], so male-ish was erroneously taken to be an essential feature of Police Chiefs).

      If we suppose judgments are regulated in this way by Availability, then it seems natural to suppose that further judgments could be constrained by an anchoring effect. For instance, once the essence of Police Chief is taken to include male-ishness, candidates that are, more or less, male-ish will be more likely to fit the essence of Police Chiefs than candidates that are, more or less, female-ish, so male-ish candidates will be given some kind of privilege in subsequent deliberation.

      I am not sure if this is the kind of detail you were wanting me to provide about how I think Availability works.

      RE: The Question Being Answered
      To answer your first question: yes.

      However, I lack a defense other than the speculative and meager one I offered in my previous comment.

      To answer your second question: yes.

      But maybe ‘certain’ is too narrow. Perhaps I should use ‘settled’ instead. In this case, someone settles on an answer when a certain level of confidence in one’s conceived answer — ranging from, say, ‘highly plausible’ to ‘certain’ — is obtained. And perhaps this level of confidence is determined by a network involving the ease with which an answer emerged, the awareness of conflicts between multiple answers, the awareness of cues implying that the task is difficult, etc. I take Thompson and colleagues’ work to provide some reason to think that this appeal to confidence in determining how we settle on answers is viable (Alter, Oppenheimer, and Epley 2013; Prowse and Thompson 2009; Shynkaruk and Thompson 2006; Thompson, Ackerman, Sidi, Ball, Pennycook, and Prowse 2013; Thompson and Johnson 2014; Thompson and Morsanyi 2012; Thompson, Prowse, and Pennycook 2011; Thompson, Turner, Pennycook, Ball, Brack, Ophir, Ackerman 2013). If I were to run with this, then I would say that maybe the answers we settle on are (often) a function of various factors that result in obtaining a certain level of confidence about a conceived answer. So — and this is where I jump into the deep end of the speculation pool — perhaps our inability to settle on an answer to one question prompts consideration of similar questions until an answer emerges that we are properly confident about (or until we lower our confidence threshold).

      All that being said, my present speculation is probably better aimed at forced choice closed questions (i.e., yes/no questions) rather than the Likert scaled questions used in the study. So perhaps my speculation isn’t as useful in explaining results like those from Uhlmann and Cohen 2005.

      • Susanna Siegel

        hi nick, thanks for the detailed reply.

        i agree with you that P1 seems dubious. we could make deductive inferences all the time but still perform badly in some contexts. you mentioned jacoby’s idea that we might not recognize that deductive inference could be used. for instance this could happen in Frege cases, where we are unknowingly thinking of the same fact or object under different modes of presentation.

        Availability and Anchoring could regulate which inferences we make, both by highlighting some information as suitable for being the basis of an inference, and also by triggering us to stop collecting information and draw a conclusion. Similarly, a preference (even if it’s shifty) for simple answers or for reaching a conclusion could play the same role of regulating when inferences are made. E.g., Mayseless and Kruglanski. None of these factors are alternatives to inference in general or deductive inference in particular. To get alternatives to the idea that we use those processes, bridge principles are needed.

        Thesis: Anchoring, availability, and the work on the factors that manipulate our feelings of confidence can have all the effects we know that they have, by regulating inferences.

        Stronger thesis: They can have all those effects, by regulating deductive inference.

        I’m not sure whether to believe the stronger thesis. It seems unlikely to be true. But the Thesis about inference in general seems like a useful working hypothesis.

        On whether people answer a question only when they feel confidence in the answer: if there’s a generalization in the vicinity, I doubt it extends to experimental contexts. Those are a bit like the classic cases of knowing without knowing you know (K without KK, in the lingo!). Think of (Gareth) Evans’s example of the kid who gives the answer on the history test and feels as if she’s guessing but she really knows the answer.

        I don’t think the difference between binary and non-binary questions matters. Likert scales are used to measure increments of confidence (e.g., a scale from “strongly agree” to “strongly disagree”), so I agree that those are poor candidates for illustrating answers without confidence in the answers. But non-binary questions (like “When was the Battle of Hastings?”) could have answers without confidence. And you might have to reach an answer for pragmatic reasons (e.g., you ran out of time) in a search context.

        thanks again!

        -Susanna

        • Thanks Susanna. Once again, your responses and your citations are very helpful. Specifically, I appreciated that you highlighted additional regulatory roles of Availability and Anchoring and introduced me to Mayseless and Kruglanski.

          Aside: I am interested in looking further into the viability of the Stronger Thesis.

          Henceforth, I’ll try to comment only on the subsequent posts.

          Thanks again!

  8. Susanna Siegel

    hi john, regarding your comment that ‘what it is to believe P for the reason Q concerns the rational state of the subject now, not something she did in the past’: if the participants ever believe the stereotype that men make better police chiefs, then they believe it, even at the time that they reach the verdict that Michael would make a good police chief. so why do considerations about what the subject did in the past vs. their current rational state speak to what the components of the inference are?

    Did you mean either of these things: (i) if someone believes P for the reason that Q, then Q must be something they would recognize as a reason? (ii) if someone believes P, there must be some Q such that they would recognize Q as a reason?

    Initially I interpreted your comment about current vs earlier rational states to be recommending either (i) or (ii), but on reflection i thought i might have been reading more into the comment than you meant. If it doesn’t mean anything in the vicinity of (i) or (ii), i’m not seeing why it recommends incorporating the comparative judgment (education > experience) into the inference that leads the participants to the belief that Michael would make a good police chief.

    • Hi Susanna, I would endorse something along the lines of (i), but not (ii), as I think there can be beliefs that are held “for no particular reason” (as Anscombe says of some intentional actions), even if belief in general is the kind of thing that has a reason as its ground.

      What I had in mind with the remark you quote is that even if the process (inferential or otherwise) that led the subjects to the original preference didn’t involve an attitude about the relative importance of different qualifications (since this attitude arose only post hoc), still such an attitude, once it has been formed, could be among the sustaining grounds of the preference. The “inference” here might not be anything occurrent, and certainly wouldn’t be a mental process that has the preference as its outcome (here is where I am thinking of Boyle’s paper), but I don’t think there is anything especially unusual in that.

  9. Hi Susanna, apologies for the delayed response. I didn’t mean to say that people don’t make inferences or that they don’t influence our actions. If I come home at an unusual time and find my wife acting nervous and my best friend’s hat on the nightstand, my actions are likely to be influenced by inferences. What I meant to say is that people don’t make decisions by inferring which action is correct. People frequently explain their behavior in those terms, but the actual mechanism of decision-making is clearly quite different.

    Best regards, Bill

    • Susanna Siegel

      hi bill,
      my post focused on routes to belief. you are focusing on how people make decisions about what to do. the inferences you are talking about are inferences about which action to perform, and about how people explain their behavior afterward. that’s a very interesting topic, but it is not the topic i am discussing. my topic is the formation of the verdict in the U&C case and others like it.

      tversky and kahneman’s account of heuristics helps us predict which decisions will be made – and therein lies its brilliance. but as i’m sure you know, they say next to nothing about the actual mechanism by which decisions are made. it is left open whether the mechanism by which they operate is inferential or not. my post is about the mechanisms leading to the verdict, in situations like the U&C case, and whether those mechanisms are inferential.

      -susanna

  10. I just wanted to say that I’m *really* intrigued by the suggestion that bias could affect inference by influencing the modes of presentation under which relevant individuals, etc. are apprehended! Susanna, has this idea been explored in the literature?

    • Susanna Siegel

      hi john, on frege cases: i was thinking that bias might make you fail to see someone as a potentially good police chief, even though she has a lot of the characteristics that you believe would make a good police chief. so there’d be an inference someone could make that goes roughly like this:

      Good police chiefs have characteristics F
      Michelle has F
      Conclusion: Michelle would make a good police chief

      …but bias prevents you from drawing the conclusion, because it prevents you from thinking of Michelle under the mode of presentation “good police chief”.

      -susanna

  11. I’m late here so much of what I would have said has been said already. But I would like to add a couple of points.

    First, as a possible answer to Q3, I would expect it to require some emotional reaction to the names themselves (rather than an inference of gender from the name) – and to test that, it would have been interesting to try a variety of names like Dirk and Lauren vs. Betty and Olga. But even then I don’t really see how we can be free from inference. “Sounds tough” is only a qualification for police chief by inference from an assumption that police chiefs need to be tough. So I guess I’m still struggling with that one.

    With regard to Q2, I think a common chain of inference might be: A police chief is a kind of protector. I feel safer with a guy as protector (and/or I expect the community as a whole to feel safer with a guy in that role). So it should be the guy.
    And then:
    From the sample (admittedly trivially small, but the only one available) it appears that guys are stronger in area A, so I infer that area A is a better predictor of guyness and so of qualification for police chief.

    This kind of answers Q1 since the answer is whichever is most associated with being a guy.

    (In practice, of course, any hiring committee should be required to specify the assessment process – including assignment of weights to the factors – before seeing the resumes.)
