In our daily interactions with people—driving down the street, coordinating childcare, figuring out how to hide from an old girlfriend, buying a nice gift—we rely on folk psychology, our unschooled understanding of other people. These abilities are often attributed to a single mechanism thought to be unique to our species—known as mindreading, belief reasoning, or theory of mind. But that’s just too much work for a single mechanism to do!
Pluralism in folk psychology is a rather vague position, because it amounts to saying it’s all very complicated. But I think it’s a necessary start. In my book Do Apes Read Minds?, I make the case for pluralism, but also offer a starting point for mapping the kinds of folk psychological practices we engage in and the different kinds of processes that we use to do them. The book grew out of frustration with the common claim that belief reasoning is essential to human social interaction. We see that idea in Davidson, whose view entails that to think one must have the concept of belief. It’s also attributed to Grice, though Richard Moore argues that Gricean cognitive criteria for communication are less demanding than often thought—and that apes can meet these more minimal criteria; see his excellent paper “Enacting and Understanding Communicative Intent”, part of a larger project that I hope we’ll get to see more of soon.
Whatever the reason for holding the view that thinking about others’ beliefs is central to our social interaction, there are reasons to reject it. Little kids first think about what people should do based on who they are and what they have done (in my book I highlight a number of research programs in social and developmental psychology that defend this claim). What looks like belief reasoning may not be, but to see that we have to turn away from language and linguistic analogies such as propositions and examine other cognitive strategies.
Take deception. All kinds of species mislead conspecifics and predators, and humans may also deceive without thinking about belief. The child who lies about eating the cupcake (despite frosting all over her face) may have learned that every time she does something she was told not to do she gets punished, and she doesn’t want to get punished, so she acts as though she did not eat the treat. A sneaky baboon female might hide from the dominant when mating with a subordinate male, and she might suppress her typical sex call. She doesn’t need to think about what the dominant believes; she just needs to have formed an association between forbidden sex and a beating, conjoined with a desire to avoid the beating.
And take false belief situations. Children may be able to say that Sally will look for the ball in the box, even though Anne moved it to the cupboard and it isn’t in the box any more, because they learned a generalization that people look for objects where they left them. The cultural differences in the age at which kids pass this false belief task may reflect differences in how well this generalization is learned. In industrialized societies—east and west—our homes and daycares are full of artifacts that kids get to manipulate, and important objects are frequently misplaced. Mom looks for her keys where she left them every time she leaves the house, and even if the toddler got her hands on them, Mom still reaches for her keys on the empty hook. If passing the false belief task is related to kids’ experience with and interest in others manipulating artifacts, this might explain why we see kids in small-scale societies only passing the false belief task years later, if at all.
Similarly, that children with older siblings tend to pass false belief tasks at a younger age may be explained by increased motivation and opportunity to see objects being placed, retrieved, hidden, moved, and so forth, rather than a greater ability to theorize about invisible mental states.
So we might not need to engage in belief reasoning in these cases, but do we? We could test the hypothesis that we are not using belief reasoning in false belief situations by examining the cultures in which it develops late to see if there is a correlation with the frequency of artifact manipulation. Developing and testing alternative hypotheses is needed to defend the belief reasoning explanation for success on the classic false belief task—in the same way that Steve Butterfill and Ian Apperly offer an alternative explanation for infants’ success on the nonverbal versions of the false belief task, and a means for testing the hypothesis (see the symposium on their paper “How to construct a minimal theory of mind” held on Brains last year).
The prima facie interpretation of passing the false belief task in terms of belief reasoning has always struck me as rather odd. Kids who pass the false belief task still struggle with the opaque nature of belief, and with how belief is responsive to reasoning. And kids don’t tend to talk about beliefs in these conditions when prompted to explain their choices; in a study I did back in grad school, we found that no 3-year-olds and only 15% of 4- to 5-year-olds said anything that could be interpreted as belief attribution when prompted, and more than 80% of 3-year-olds and more than 60% of 4- to 5-year-olds simply referred to the details of the situation (Andrews and Verbeek 1999).
We could also test the hypothesis that kids are using belief reasoning to pass the false belief task by looking at processing speeds in false belief situations, the way Ian Apperly and colleagues have done. They found that when adults watch a false belief (or true belief) scenario, they are slightly slower at responding to a probe question about the actor’s belief than they are at responding to a probe question about the location of the object (Apperly et al. 2006; Back and Apperly 2010). So maybe mindreading isn’t omnipresent in our social interactions after all.
But we do mindread…so what drives the ability?
Rather than thinking that we mindread to predict behavior, I suggest that we develop mindreading to explain anomalous behavior. When someone does something truly unexpected and out of character, it is hard to know what is going to happen next. We can’t predict what the person is going to do next based on our knowledge of their personality traits or stereotypes or social norms, because the behavior doesn’t cohere with our model of the person.
When we see anomalous behavior, and want to engage with that person further, we have to try to understand what is going on. Understanding this kind of situation requires telling a story about what is going on and then acting according to the story. This puts one in a state of explanation-seeking. And since the behavior doesn’t cohere with what we would expect, we would find value in an explanation in terms of invisible motivations or representational states.
When we see anomalous behavior, and don’t want to engage further with the person, we don’t care to explain in terms of beliefs. We are happy to call the person names, using negative trait attributions or stereotypes. This makes the drive to explain behavior a normative drive, one that evaluates as it explains. People don’t like to have reason explanations for the acts of terrorists or monsters. Explaining is so close to justifying that people who offer reason explanations for bad guys are often portrayed as offering sympathy. On September 24, 2001, the New Yorker published an essay by Susan Sontag in which she looked for reason explanations behind the actions of the 9/11 terrorists. In response, many media outlets portrayed Sontag as no better than a terrorist, and the New Republic published an article starting with the question “What do Osama bin Laden, Saddam Hussein and Susan Sontag have in common?”
Belief reasoning is a small part of folk psychology that allows us to make sense of, and then justify, the strange actions of others. By explaining your action, I am pulling you closer into my community, and by refusing to offer a reason explanation I exclude you. And if you want to be in my community, you act so that you make sense to me, and when you act outside my expectations, you want to have your behavior explained in terms of reasons. A weakening of community cohesion occurs when someone acts outside the norm, and belief reasoning is there to shore up that weak spot.
My emphasis on the normativity of explanation in folk psychology aligns me with the views of Tad Zawidzki in his book Mindshaping, though I reject his claim that only language users are folk psychologists [note: understood narrowly as mindreaders]. The normativity also aligns me with Victoria McGeer, who writes, “our folk-psychological competence consists in our aptitude for making ourselves understandable to one another, as much as on our aptitude for understanding one another. And we do this by making (self and other) regulative use of the norms that govern appropriate attributions of a range of psychological states” (McGeer 2007, 148).
So humans can explain behavior to keep the group together. But can other creatures? Explaining without language will be the topic of my next post.
Andrews, K., & Verbeek, P. (1999). Is theory of mind in young children associated with peer interaction? Presented at the International Society for Human Ethology, Vancouver, BC.
Apperly, I. A., Riggs, K. J., Simpson, A., Chiavarino, C., & Samson, D. (2006). Is belief reasoning automatic? Psychological Science, 17, 841.
Back, E., & Apperly, I. A. (2010). Two sources of evidence on the non-automaticity of true and false belief ascription. Cognition, 115(1), 54–70.
McGeer, V. (2007). The regulative dimension of folk psychology. In D. D. Hutto & M. Ratcliffe (Eds.), Folk Psychology Re-Assessed (pp. 137–156). Dordrecht, The Netherlands: Springer.