In our daily interactions with people—driving down the street, coordinating childcare, figuring out how to hide from an old girlfriend, buying a nice gift—we rely on folk psychology, our unschooled understanding of other people. These abilities are often attributed to a single mechanism, one thought to be unique to our species, known as mindreading, belief reasoning, or theory of mind. But that's just too much work for a single mechanism to do!
Pluralism in folk psychology is a rather vague position, because it amounts to saying it's all very complicated. But I think it's a necessary start. In my book Do Apes Read Minds?, I make the case for pluralism, but I also offer a starting point for mapping the kinds of folk psychological practices we engage in and the different kinds of processes we use to carry them out. The book grew out of frustration with the common claim that belief reasoning is essential to human social interaction. We see that idea in Davidson, whose view entails that to think one must have the concept of belief. It's also attributed to Grice, though Richard Moore argues that Gricean cognitive criteria for communication are less demanding than often thought—and that apes can meet these more minimal criteria; see his excellent paper "Enacting and Understanding Communicative Intent", part of a larger project that I hope we'll get to see more of soon.
Whatever the reason for holding the view that thinking about others' beliefs is central to our social interaction, there are reasons to reject it. Little kids first think about what people should do based on who they are and what they have done (in my book I highlight a number of research programs in social and developmental psychology that defend this claim). What looks like belief reasoning may not be, but to see that we have to turn away from language and linguistic analogies such as propositions and examine other cognitive strategies.
Take deception. All kinds of species mislead conspecifics and predators, and humans may also deceive without thinking about belief. The child who lies about eating the cupcake (despite the frosting all over her face) may have learned that every time she does something she was told not to do she gets punished; she doesn't want to get punished, so she acts as though she did not eat the treat. A sneaky baboon female might hide from the dominant male when mating with a subordinate, and she might suppress her typical sex call. She doesn't need to think about what the dominant believes; she just needs to have formed an association between forbidden sex and a beating, conjoined with a desire to avoid the beating.
And take false belief situations. Children may be able to say that Sally will look for the ball in the box, even though Anne moved it to the cupboard and it isn’t in the box any more, because they learned a generalization that people look for objects where they left them. The cultural differences in the age at which kids pass this false belief task may reflect differences in how well this generalization is learned. In industrialized societies—east and west—our homes and daycares are full of artifacts that kids get to manipulate, and important objects are frequently misplaced. Mom looks for her keys where she left them every time she leaves the house, and even if the toddler got her hands on them, Mom still reaches for her keys on the empty hook. If passing the false belief task is related to kids’ experience and interest in others’ manipulating artifacts, this might explain why we see kids in small-scale societies only passing the false belief task years later, if at all.
Similarly, the finding that children with older siblings tend to pass false belief tasks at a younger age may be explained by increased motivation and opportunity to see objects being placed, retrieved, hidden, moved, and so forth, rather than by a greater ability to theorize about invisible mental states.
So we might not need to engage in belief reasoning in these cases, but do we? We could test the hypothesis that we are not using belief reasoning in false belief situations by examining the cultures in which it develops late to see if there is a correlation with the frequency of artifact manipulation. Developing and testing alternative hypotheses is needed to defend the belief reasoning explanation for success on the classic false belief task—in the same way that Steve Butterfill and Ian Apperly offer an alternative explanation for infants’ success on the nonverbal versions of the false belief task, and a means for testing the hypothesis (see the symposium on their paper “How to construct a minimal theory of mind” held on Brains last year).
The prima facie interpretation of passing the false belief task in terms of belief reasoning has always struck me as rather odd. Kids who pass the false belief task still struggle with the opaque nature of belief, and with how belief is responsive to reasoning. And kids don't tend to talk about beliefs in these conditions when prompted to explain their choices; in a study I did back in grad school, we found that no 3-year-olds and only 15% of 4- to 5-year-olds said anything that could be interpreted as belief attribution when prompted, while more than 80% of 3-year-olds and more than 60% of 4- to 5-year-olds simply referred to the details of the situation (Andrews and Verbeek 1999).
We could also test the hypothesis that kids are using belief reasoning to pass the false belief task by looking at processing speeds in false belief situations, the way Ian Apperly and colleagues have done. They found that when adults watch a false belief (or true belief) scenario, they are slightly slower at responding to a probe question about the actor’s belief than they are at responding to a probe question about the location of the object (Apperly et al. 2006; Back and Apperly 2010). So maybe mindreading isn’t omnipresent in our social interactions after all.
But we do mindread…so what drives the ability?
Rather than thinking that we mindread to predict behavior, I suggest that we develop mindreading to explain anomalous behavior. When someone does something truly unexpected and out of character, it is hard to know what is going to happen next. We can’t predict what the person is going to do next based on our knowledge of their personality traits or stereotypes or social norms, because the behavior doesn’t cohere with our model of the person.
When we see anomalous behavior, and want to engage with that person further, we have to try to understand it. Understanding this kind of situation requires telling a story about what is going on and then acting according to that story. This puts one in a state of explanation-seeking. And since the behavior doesn't cohere with what we would expect, we would find value in an explanation in terms of invisible motivations or representational states.
When we see anomalous behavior, and don't want to engage further with the person, we don't care to explain it in terms of beliefs. We are happy to call the person names, using negative trait attributions or stereotypes. This makes the drive to explain behavior a normative drive, one that evaluates as it explains. People don't like to hear reason explanations for the acts of terrorists or monsters. Explaining is so close to justifying that those who offer reason explanations for bad guys are often portrayed as offering sympathy. On September 24, 2001, the New Yorker published an essay by Susan Sontag in which she looked for reason explanations behind the actions of the 9/11 terrorists. In response, many media outlets portrayed Sontag as no better than a terrorist, and the New Republic published an article starting with the question "What do Osama bin Laden, Saddam Hussein and Susan Sontag have in common?"
Belief reasoning is a small part of folk psychology, one that allows us to make sense of, and then justify, the strange actions of others. By explaining your action, I am pulling you closer into my community, and by refusing to offer a reason explanation I exclude you. And if you want to be in my community, you act so that you make sense to me, and when you act outside my expectations, you want to have your behavior explained in terms of reasons. A weakening of community cohesion occurs when someone acts outside the norm, and belief reasoning is there to shore up that weak spot.
My emphasis on the normativity of explanation in folk psychology aligns me with the views of Tad Zawidzki in his book Mindshaping, though I reject his claim that only language users are folk psychologists [note: understood narrowly as mindreaders]. The normativity also aligns me with Victoria McGeer, who writes, “our folk-psychological competence consists in our aptitude for making ourselves understandable to one another, as much as on our aptitude for understanding one another. And we do this by making (self and other) regulative use of the norms that govern appropriate attributions of a range of psychological states” (McGeer 2007, 148).
So humans can explain behavior to keep the group together. But can other creatures? Explaining without language will be the topic of my next post.
Andrews, K., & Verbeek, P. (1999). Is theory of mind in young children associated with peer interaction? Presented at the International Society for Human Ethology, Vancouver, BC.
Apperly, I. A., Riggs, K. J., Simpson, A., Chiavarino, C., & Samson, D. (2006). Is belief reasoning automatic? Psychological Science, 17, 841.
Back, E., & Apperly, I. A. (2010). Two sources of evidence on the non-automaticity of true and false belief ascription. Cognition, 115(1), 54–70.
McGeer, V. (2007). The regulative dimension of folk psychology. In D. D. Hutto & M. Ratcliffe (Eds.), Folk Psychology Re-Assessed (pp. 137–156). Dordrecht, The Netherlands: Springer.
Hey Kristin! Very eloquently put. As you know, and acknowledge in the post, we agree on a lot. I have a couple of questions about this though. First, in my book, I acknowledge that sometimes propositional attitude attribution is used not just to make sense of anomalous behavior in order to repair a potential rift in community membership, but also to justify the denigration of anomalous behavior. That’s how I interpret the Knobe effect, where a counter-normative side effect is interpreted as intentionally caused, while a normative one isn’t. So would you agree that PA attribution has broader social functions than just the Brunerian mitigation of threats to status triggered by apparent anomalies?
With regard to your difference with me concerning whether or not only language users are folk psychologists – I don't think that, when FP is construed pluralistically! Of course lots of nonhumans, and human infants, engage in many folk psychological strategies, including behavioral generalization, reasoning from social norms, the teleological stance, and so on. I *do* claim that *full-blown* propositional attitude attribution, which I think presupposes an understanding of the holistically constrained connection between PAs and behavior/observable situational factors, opacity, and the behavioral appearance/mental reality distinction, can be useful only in the context of discursive communities. Specifically, the capacity to attribute such mental states seems tailor-made for accommodating anomalies in language use, as when one expresses a discursive commitment to which one fails to live up. I wonder if you could provide some more detail about how you think such PA attribution could help in nonlinguistic contexts. But perhaps that's the point of your next post…
Hey Tad, thanks for taking the bait! And yes, I certainly do think that explaining behaviors in terms of reasons is widely useful for individuals and societies. It can be used to exculpate, as you emphasize in your book. In my book I emphasize how offering reason explanations for behaviors would benefit communities by promoting the spread of technological innovations. An innovator acts strangely, and if strange behavior is denigrated, then a potentially useful innovation (like cooking meat, making tools, building shelters, or new ways of processing food) would be lost. When the innovative anomaly is adopted by the community, there is the opportunity for cumulative culture: the kind of culture we see in humans, in which each generation builds on the innovations of the one before.
On your second point–yeah–I'm so bad to have used "folk psychology" when talking about your view! I can correct the post so as not to mislead anyone who doesn't get to the comments. In my next post I will argue that language isn't needed for one of the exculpating cases you discuss in your book, so stay tuned for that. Communication that lacks all the formal features of language you mention can still do a lot of work, I think.
No need to correct it! I think it’s pretty clear. I just thought it kind of ironic that someone who’s spent so much time defending pluralism about FP (all of which I agree with) painted with such a broad brush stroke!
Look forward to your next post on non-linguistic communication and social functions of FP explanations!
Hello Kristin,
Above you state, "If passing the false belief task is related to kids' experience and interest in others' manipulating artifacts, this might explain why we see kids in small-scale societies only passing the false belief task years later, if at all."
What evidence did you have in mind here concerning kids in small-scale societies never passing false belief tasks?
Hi Robert,
I'm no expert on the cross-cultural literature, but the "if at all" is in deference to the claims of some anthropologists. First, there is the paper by Lillard in which she claims there is no belief reasoning in some cultures. But also, Vinden 1999 reports a failure to find false belief task passing by age 14 in the Tainae community of Papua New Guinea.
Of course not finding something isn’t proof it isn’t there!
Robert & Kristin –
Have you seen this new work on Samoan populations?
https://jbd.sagepub.com/content/37/1/21
Cool, thanks! I hadn't seen that.
Sorry, Kristin, I seem to be working my way through your posts backward – typical blog mistake!
I actually think the key to unwinding all these issues regarding FP and animal cognition lies in generalizing the growing body of research into heuristics, particularly the emphasis on information neglect you find in the work coming out of the Adaptive Behaviour and Cognition Research Group and the Heuristics and Biases program. Belief-talk solves first-order problems via the neglect of tremendous amounts of information. It relies quite heavily, in other words, on information structures present in various problem-ecologies to solve problems. So the question is, why should anyone assume that the general question of cognition – animal or human – possesses the kind of information structure that belief-talk is adapted to solve?
To me, the answer is glaringly obvious, and as an outsider I’m always amazed that no one ever seems to bother considering it. Given that belief talk solves problems by *ignoring what is actually going on,* why should we expect it would provide any purchase on what is actually going on?
It strikes me as a garden-variety heuristic misapplication, actually. But, as I mentioned earlier in responding to a later post (!), going normative raises a parallel problem. I think there's a lot to learn pushing in this direction, but what we do learn will be specific to the kinds of intentional conceptualizations we use, and occult beyond that. Personally, I think the deployment of FP categories in these research domains requires an over-arching commitment to eliminativism – if only to keep ourselves honest to the heuristic limitations of our intellectual apparatuses!