Among philosophers, autonomous agency is usually taken to require some kind of metacognitive awareness that permits thinking about one’s reasons for action and being self-governing. Autonomous agency, then, requires metacognition. But it requires metacognition of a certain sort, namely beliefs about beliefs.
The question of whether any other animal is metacognitive has been of great interest to psychologists of late. Michael Beran, Robert Hampton, and David Smith are among those who have been looking at animal metacognition (mainly in apes, monkeys, and a dolphin). We have evidence that some animals can evaluate their own memories, deciding not to take a test when they can’t remember the correct answer. We have evidence that some animals can evaluate their own evidential state, deciding not to take a test when they don’t have the information they need to give the correct answer. And we have evidence that some animals are able to delay gratification in their own versions of the marshmallow task.
However, it isn’t clear that these tasks are metacognitive in the sense that the philosophers are usually interested in (while they are still metacognitive in the sense that Joelle Proust defends in her recent book; see also this anthology Proust edited with Beran, Josef Perner and Johannes Brandl). Such behaviors need not be the result of beliefs about beliefs. Peter Carruthers offers two alternative explanations for these kinds of studies: strength of belief such that stronger beliefs trump weaker ones (Carruthers 2008), and feelings of not knowing how to go on, or of indecision (Carruthers and Ritchie 2012).
But, if it isn’t clear that the animals are thinking about beliefs when they make their choices about how to act, it isn’t clear that humans are doing that either. The requirements for autonomous agency might be so demanding that they are rarely met even by adult humans. These sorts of worries are what led Santiago Amaya and me to start working on an alternative account of agency—a kind of agency whose requirements are met by most humans and perhaps many other species. We suggest that, “An agent, in the most general sense, is an organism capable of grasping norms and modifying its behavior to be in line with those norms.”
Notice that the kind of normative sense needed for agency is a bit more demanding than naïve normativity, which can be automatic. To be an agent, Santiago and I think, means you have to be able to recognize when you get things wrong, and perhaps to be motivated to repair the error. So, in the paper we’re writing called “Animal Agency” (coming soon to a journal near you, we hope!) we propose looking at agency in animals using a mistake-based methodology. The methodology is inspired by Santiago’s awesome paper “Slips” (which you should read if you haven’t already).
The idea behind the mistake-based methodology is this—if you make a mistake and repair the mistake, you have to have a grasp of norms and a motivation to act in line with those norms. So we can look for examples of errors, and responses to errors, in animal action. I appealed to my community of animal researchers to find examples of animals responding to errors without first being informed that they had made an error. We got some good stories.
Laura Adams told me that the orangutans she’s been working with at the Toronto Zoo sometimes make errors on easy tasks, and will often quickly try to correct the error, sometimes wincing or making what she calls a “d’oh” facial expression.
Noam Miller related one of Konrad Lorenz’s stories about his house-trained goose. Every evening, Lorenz would let the goose inside, the goose would walk up to the mirror, then go upstairs to bed. One day Lorenz forgot to let the goose in until quite late, and when he opened the door the goose ran straight upstairs, but then paused, went back downstairs to the mirror, and only then returned upstairs. A goose tic?
Anne Russon gave us a number of good stories, one about an orangutan named Siti who started eating rattan incorrectly, 4 leaves down rather than 3. As Anne watched Siti “[m]outh open and getting ready to bite, she paused, looked at the tip of the rattan, moved up one leaf, then bit there. Spotted her ‘error’, apparently, and fixed it.”
Jennifer Vonk directed us to Rob Hampton and Benjamin Hampstead’s (2006) study of the qualitative differences in a monkey’s answers in a match-to-sample task. They found that when the monkey hit the screen, he was getting answers right only about half the time—he was guessing—and when the monkey gently touched the screen he largely got the answer right.
There are also good stories in researchers’ memoirs of their work with animals. In her book Dolphin in the Mirror, biologist Diana Reiss relates how teaching a norm to a dolphin named Circe resulted in Circe turning it around on Reiss—or so it seemed. Reiss used a “time-out” with Circe when she responded incorrectly to a signal; she would step back from the tank and break social contact. Circe was just learning these signals, as she was wild-caught. She was also just learning how to eat dead fish. But Circe decided she didn’t like the spiny tails, so Reiss took to trimming the tails so that Circe would eat. Reiss tells the story:
One day during a feeding I accidentally gave her an untrimmed tail. She immediately looked up at me, waved her head from side to side with wide-open eyes, and spat out the fish. Then she quickly left station, swam to the other side of the pool, and positioned herself vertically in the water. She stayed there against the opposite wall and just looked at me from across the pool. This vertical position was an unusual posture for her to maintain…I could hardly believe it. I felt that Circe was giving me a time-out! (Reiss 2011, 75)
This incident led Reiss to formally test Circe’s response to being fed uncut fish, and she found that Circe responded with a time-out every time (Reiss 1983).
We’ve been collecting such stories in order to identify different varieties of agential actions. And we’re still looking for more, so if you have any ideas, please let us know!
Recognizing and repairing errors demonstrates responsiveness to norms. Whether or not it involves beliefs about beliefs is irrelevant. Maybe it does, but it doesn’t need to. Recognizing and repairing errors is key to being an agent, and if these incidents stand up to further scrutiny, agency may be more widespread among species than usually thought.
Note: Thanks to Dale Smith for the photo.
Hi Kristin,
Thank you for another great post! I’m sorry for being a bit late to the party — conference-related travel has unfortunately prevented me from joining the discussion so far. I hope to be able to go over your previous posts and discussion (and maybe add something to the points raised already) over the next couple of days.
Anyway, with regard to this post, the following thought came to mind: I think you might be right that humans rarely engage in metacognition in the philosophically strong sense either. Indeed, there is a very nice recent paper by Kornell pointing out that research in social psychology indicates that, when it comes to the experimental paradigms usually employed in studying metacognition, humans seem to rely on certain cues (such as ease of processing of the stimulus or their own reaction time) that may or may not be correlated with their actual performance, rather than directly accessing a first-order mental state (such as a memory). See Kornell, N. (2014). Where is the “meta” in animal metacognition? Journal of Comparative Psychology, 128, 143-149. doi:10.1037/a0033444
Still, even if this is so, why shouldn’t we nonetheless hold on to the claim that (human) agency in its paradigmatic form does require reasons-responsiveness? Also, some would argue (following Anscombe) that, the research I just mentioned notwithstanding, for most of their actions people are in fact able to provide reasons. When I ask you why you crossed the street or went to the kitchen, you’ll usually be able to give a response in terms of a reason. Where you aren’t, we might have good reasons to think that your action was not, in fact, under your agential control. So I suspect that many philosophers would not want to give up the conceptual link between agency and reasons-responsiveness.
But then, perhaps that’s not quite the claim you want to make? So let me ask: is the idea that rather than thinking about agency in terms of reasons-responsiveness/metacognition, we should think of it in terms of “grasping norms and modifying one’s behavior to be in line with those norms”? Or is the claim that — just as there might be different systems for mindreading (in the sense defended by Butterfill and Apperly) — there might be different systems for agency, such that we can distinguish a basic or minimal sense of agency (understood in terms of the ability to grasp norms in a nonconceptual way) and full-fledged agency (understood in terms of metacognition in a strong sense)? If the latter, a natural next question would be what the relation between these is. Would one be a necessary precursor to the other? And, to perhaps relate this to some of the previous posts/discussions, would we need language for full-fledged agency, but not for “mere” animal agency?
Thanks Kristina, nice questions. And thanks for reminding me to read the Kornell paper!
My usual methodology is to look at human kids rather than adults. If kids aren’t very good at offering reasons for their actions until middle childhood, then there is some interesting precursor to full-fledged autonomous agency that differentiates these kids from, say, rocks! And it’s that stage that I want to understand better. It is related to normativity, certainly, but it’s a step past naive normativity because agents have to be able to realize that they acted improperly, and repair the error. So yes, I’d be happy to agree that agency is at bottom “grasping norms and modifying one’s behavior to be in line with those norms.” Your question about the relationship between basic agency and autonomous agency is a good one, and I suspect that there are different models that would fit with the existence of this developmental situation. It is probably very typical developmentally to have basic agency before autonomous agency, but to gather evidence for this we’d have to look cross-culturally.
I can see the advantages that describing metacognition in folk-normative terms has over describing it in folk-epistemological terms: even if ‘belief’ weren’t prone to collapse into intellectualism (thus circling the wagons around the human), it just seems far too granular to accommodate the continuum of myriad ways species use auto-feedback to modify their behaviour. So I’m willing to grant that ‘norm-talk’ offers more versatility than ‘belief-talk’ in this important regard.
But the thing to remember is that the systems involved in normative cognition are heuristic: they solve problems by neglecting information. This means that the threat facing any theoretical application of normative cognition is that the information ignored is actually critical to the solution of the problem at issue.
The problem isn’t that ‘human reason is paradigmatic’ (I frankly don’t think this tradition is worth worrying about) but that normative cognition is specifically adapted to solving human problems, that it is parochial, and that even though it seems to provide traction in nonhuman instances, it does so only because it is cherry-picking human-like features from a nonhuman context. This is almost certainly the case with the anecdotal evidence of metacognition you cite, for instance.
But the very ease with which we anthropomorphize nonhumans suggests that it really does work, that it allows us to successfully navigate our encounters with animals. The million dollar question is whether it also allows us to *theorize* them. The only way to know this is to see how well your normative posits can be operationalized in experimental contexts. I think you can remain entirely agnostic on the questions of ‘What norms are?’ or (worse yet) ‘What reason is?’ and still pursue this fascinating project. The hope is that your theoretical assays will actually bring everyone closer to answering those questions. It doesn’t make much sense to suppose they have to be answered before you can start.
As a happy coincidence, Rob Hampton gave a talk at U of T on Friday and talked about all the very cool work on metacognition coming out of his lab. Some of it is in press (like the Basile et al. paper coming out in J. Experimental Psychology: General titled “Evaluation of seven hypotheses for metamemory performance”) but a lot of it is not. They have a lot of evidence for some kind of active cognitive control in monkeys, including correcting ignorance, terminating work when they have enough information, and anticipating future need. They also have evidence that short-term memory is disrupted by competing cognitive effort.
So there is also good experimental evidence, not just anecdotes, that some species (rhesus macaques, at least) have the capacities Santiago and I are interested in as far as agency goes. That’s really cool. But the nature of the cognitive mechanisms that support these capacities is still a live issue, I think.
Rob thinks his data supports the existence of metacognition in (what we would call) the belief-about-belief sense. But if we understand belief as a representational state, then there are still competing explanations, I think, along the lines of what Carruthers has suggested. On the other hand, if we understand belief in interpretationist or dispositional terms, then reading the data becomes a more interpretive, and normative, matter.
In conversation after the talk, a few of us were plotting to run some of the metacognitive experiments on monkeys at York, and maybe on pigeons, too. So we might have some local research to draw on as well!