Most philosophical discussions of mindreading stay squarely within the realm of philosophy of psychology. Theorizing about mindreading plays a role in debates about the modularity of the mind, the representational theory of mind, language development, the semantics of ordinary language use, etc. Using mindreading as a case study for understanding the mind and our capacity for language makes sense. There is a wealth of philosophical and empirical work on the topic; it is an excellent example of fruitful interdisciplinary interaction and collaboration; mindreading is ubiquitous in our everyday lives, and the capacity for mature mindreading seems to be a distinctively human trait. Thus, we can see why the study of mindreading would have natural applications in other areas of philosophy of psychology.
One would think that understanding how we interpret and interact with others is important for other philosophical areas, as well. Various topics in epistemology and ethics, for example, concern how we know what others believe, our judgments that others are
knowledgeable and competent with respect to some issue, whether we regard someone as an epistemic inferior, peer, or superior, and all the moral judgments entangled with these inferences. These inferences clearly involve mindreading, so it is initially puzzling that the mindreading literature has had so little to say about these epistemic and ethical topics.
Judgments about others’ knowledge and competence play a key role in debates about peer disagreement and epistemic injustice. In this short space, I will focus just on peer disagreement and refer readers interested in epistemic injustice to chapter 6 of my book. The epistemology of peer disagreement concerns what we ought to do when someone we take to be an epistemic peer disagrees with us. Two people are epistemic peers with respect to X when they possess the same evidence about X and are equally intelligent, free from bias, competent at perceiving, reasoning, etc. (Kelly 2011). The steadfast view (Kelly 2011) and conciliatory view (Christensen 2007), as they are typically debated, hinge on whether it is legitimate to use one’s own reasoning about a topic in evaluating a peer’s reasoning about that topic (Christensen 2011). That is, when an epistemic peer disagrees with you, can you use the fact that she disagrees with you as evidence that she is less likely to be correct? The steadfast view says yes, you can, and that tips the scales in favor of your own judgment. The conciliatory view says that you should not, and without that resource we must conclude that your epistemic peer has just as much reason to think she is correct. In that case, it is rational to reduce confidence in your judgment. That is an interesting debate worth having, but it seems to me that an essential prior question is how we decide who counts as an epistemic peer in the first place. Implicit in this literature is the notion that typically we are astute judges of who is an epistemic inferior, epistemic peer, and epistemic superior.
We tend to form both self-enhancing and other-downgrading biases, especially toward out-group members who disagree with us. With respect to self-enhancing biases, we tend to take credit for our successes and deny responsibility for our failures (Miller and Ross 1975). Those who lack knowledge or competence in some domain egregiously overestimate their own knowledge and competence, and fail to recognize others’ equal or superior knowledge and competence (Kruger and Dunning 1999). The deficiency of the comparatively ignorant and incompetent is invisible to them, presumably because recognizing their deficiency requires the very competency they lack. Furthermore, we tend to regard others as more susceptible to bias and misperception than ourselves (Pronin, Lin, and Ross 2002, Pronin 2007). We think we simply see things as they are, but others suffer from bias. We regard ourselves and others in our relevant in-groups as perceiving the world as it truly is, but we regard those who disagree with us as misguided, misinterpreting, or biased by their personal motivations. This bias is particularly relevant to the epistemology of peer disagreement. The empirical data suggest that simply in virtue of the fact that someone disagrees with us, we will downgrade that person’s epistemic status in relation to our own (Pronin, Lin, and Ross 2002, pp. 378-379, Kennedy and Pronin 2008).
I described many of our other-downgrading biases in a previous post. Perceiving a social interaction involves categorizing individuals into a salient social group, which is associated with various features, stereotypes, and social biases. These associations influence how we decide who is an epistemic peer, inferior, or superior before we even evaluate their evidence base or reasoning abilities. Simply in virtue of being part of a particular social category, a person’s knowledge or competence in a certain domain may be upgraded or downgraded in our eyes. Furthermore, we usually have more favorable attitudes toward, and empathize more with, in-group members, especially people who share our gender, race, age, religion, or nationality, than toward people who do not share these features. The phenomenon of in-group favoritism suggests that we are generally less likely to regard out-group members as epistemic peers or superiors.
Given our tendency toward self-enhancing and other-downgrading biases, it seems that the rational thing to do is to conciliate and reduce confidence in our own judgments rather than remain steadfast. After all, the psychological data suggest that if we have judged someone to be epistemically comparable to us about X (especially if they disagree with us about X), they are likely to be at least as, and very possibly more, knowledgeable and competent with respect to X. Indeed, if we disagree with someone we take to be an epistemic peer who is a member of a subordinate out-group, she is likely to be at least moderately epistemically superior. In both of these cases, it seems like the safest, most reasonable response to what we take to be peer disagreement is reducing confidence in our own judgments. We should pause even when we disagree with someone we take to be a moderately inferior epistemic agent from a subordinate out-group, because it is likely that we have underestimated her knowledge and skills. Thus, heeding the advice above for more intellectual humility pushes us toward conciliation in many cases of disagreement (Hazlett 2012).
Taking a step back from debates about peer disagreement, the more general point I aim to establish in chapter 6 of my book is that mindreading – particularly the broader conception of mindreading I articulate in the book – is relevant to many debates in epistemology and ethics broadly construed. Though in some cases epistemologists and ethicists acknowledge the importance of mindreading to these debates (e.g., in moral psychology, the epistemology of testimony, and experimental epistemology (Gerken, forthcoming)), mindreading theorists almost never do. And there is a lot of interesting, important work that could be done at the intersection of these debates if only mindreading theorists, epistemologists, and ethicists participated in the discussions. What I hope this post, this chapter, and the book more broadly show is that mindreading is much more psychologically and epistemically complex and ethically complicated than much of the previous literature reveals.
Christensen, D. 2007. “Epistemology of disagreement: The good news.” The Philosophical Review 116 (2):187-217.
Christensen, D. 2011. “Disagreement, question-begging, and epistemic self-criticism.” Philosophers’ Imprint 11 (6):1-22.
Gerken, M. forthcoming. “Review of Shannon Spaulding’s How We Understand Others: Philosophy and Social Cognition.” Mind.
Hazlett, A. 2012. “Higher-order epistemic attitudes and intellectual humility.” Episteme 9 (3):205-223.
Kelly, T. 2011. “Peer disagreement and higher order evidence.” In Social epistemology: Essential readings, edited by Alvin Goldman and Dennis Whitcomb, 183-217. Oxford: Oxford University Press.
Kennedy, K. A., and E. Pronin. 2008. “When disagreement gets ugly: Perceptions of bias and the escalation of conflict.” Personality and Social Psychology Bulletin 34 (6):833-848.
Kruger, J., and D. Dunning. 1999. “Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments.” Journal of Personality and Social Psychology 77 (6):1121.
Miller, D. T., and M. Ross. 1975. “Self-serving biases in the attribution of causality: Fact or fiction?” Psychological Bulletin 82 (2):213.
Pronin, E., D. Y. Lin, and L. Ross. 2002. “The bias blind spot: Perceptions of bias in self versus others.” Personality and Social Psychology Bulletin 28 (3):369-381.
Pronin, E. 2007. “Perception and misperception of bias in human judgment.” Trends in Cognitive Sciences 11 (1):37-43.