Symposium on Butterfill and Apperly’s “How to Construct a Minimal Theory of Mind” (Mind & Language 28, 606-637)

With apologies for the delay, I’m glad to begin our next Mind & Language symposium, on Stephen Butterfill and Ian Apperly’s article “How to Construct a Minimal Theory of Mind”, with commentaries by Hannes Rakoczy (Göttingen), Shannon Spaulding (Oklahoma State), and Tad Zawidzki (George Washington University).

Steve and Ian’s article, which is freely available here thanks to the kind cooperation of Wiley-Blackwell, explores what it would take for a being with minimal cognitive and conceptual resources to track others’ perceptual states, knowledge, and beliefs, but without representing propositional attitudes as such. Together with the commentaries, their article develops this “minimal theory of mind” in dialogue with a large body of findings from developmental, cognitive, and experimental psychology, including data on the mindreading capacities of human infants and non-human animals.

Below you will find a short précis of the target article, followed by links to the paper and commentaries, and then a set of replies. Comments on this post will remain open for at least two weeks. Many thanks to all those who participate!

* * *

Précis: Stephen Butterfill and Ian Apperly, “Introduction for Discussion of Minimal Theory of Mind”

Commentaries: Hannes Rakoczy, Shannon Spaulding, and Tad Zawidzki

Target Article: Butterfill and Apperly, “How to Construct a Minimal Theory of Mind” (Mind & Language 28 (5), 606-637)

Replies: Butterfill and Apperly, “Replies to Three Commentaries on Minimal Theory of Mind”

 

37 Comments

  1. Thanks so much to the organizers and participants in this symposium. What a great exchange! I feel that we are finally getting clear on some issues that have plagued this debate for decades, and finally articulating some empirically-motivated middle-ground theories. There remain, however, a few issues that we must still confront, brought out most starkly by Dr. Spaulding’s commentary and the authors’ response. Namely, the behavior-reading challenge cannot be defeated merely by articulating rich, flexible, empirically plausible theories of social cognition. Rather, it can only be defeated by also directly confronting its internal faults and implicit philosophical commitments.

    I agree with Dr. Spaulding’s and Dr. Zawidzki’s concerns that the target article does not yet provide a convincing reason why the minimal theory of mind should be regarded as distinct from behavior-reading. My own view is that the behavior-reading challenge has been an immensely useful whetstone against which to sharpen specific theories of ToM, but that it is ultimately not a coherent theory of social cognition in its own right. The two fatal flaws of the behavior-reading doctrine are, first, that it is specified in such an open-ended way that an ad hoc behavioral rule can be created for nearly any experimental data (as the behavior-reading theorists have found again and again as they attempt to describe an experiment that could overcome their own challenge); and second, and relatedly, that it has always been based on an implausibly restrictive theory of representation. I think that Fletcher & Carruthers have pressed home the first point adequately in their 2012 paper, so I will not mention it further here. It is the second issue that I think still requires attention and that I will focus on below.

    To my mind, the clearest and most general versions of the behavior-reading doctrine have been developed by Penn & Povinelli (2007) and further by Lurz (2011), so I will focus on these variants below (conceding that there are several other important variants out there–though the fact that the doctrine includes such diverse views is itself cause for concern). I’ll scaffold on the above discussion to save time: In response to Spaulding and Zawidzki, the authors contend that their minimal theory should be regarded as a theory of mind because registration is an intervening variable in Whiten’s sense. Noting that registration is a state doesn’t help, because registration can itself be tracked using configurations of behavioral/situational cues. (Indeed, the target article gives us a recipe for how to do this by specifying registration’s particular functional profile.) Neither does it help to call registration a minimal mental state, because again the question is not whether registration is a sufficiently mental state but whether animals actually represent distal registration or merely its proximal observable concomitants. The problem, here and elsewhere, is that the debate still hasn’t faced up to the problem of selecting an ecumenical theory of representation. The behavior-reading doctrine holds that any distal state that can be tracked in terms of some configuration of proximal behavioral/situational cues should be counted as a representation of those proximal cues rather than of the distal state. (Whiten’s treatment, to which skeptics typically defer as a minimal genuine ToM, offers some additional ideas, such as that an animal can count as representing an underlying intervening variable when it is able to link sufficiently perceptually disparate situations, but I think the present generation of “middle-ground” views all lie uncomfortably near the penumbra of this vague criterion.) The target article moves the discussion from criteria for representing _seeing_, _believing_, and _desiring_ as such to criteria for representing _registration_. This is certainly a welcome development, but the underlying semantic problem remains the same.

    I don’t think anyone can be blamed for hoping that the notorious problems in the theory of representation could be bracketed for the purposes of doing philosophy of social cognition, but I think a close examination of the behavior-reading challenge demonstrates, unfortunately, that this hope must be frustrated. (I think this depressing observation holds true for many protracted debates in cognitive science; insofar as a cognitive state is individuated by its representational contents, an empirical debate is implicitly committed to dealing with differences in underlying theories of representation.) A surprising wrinkle is that when skeptics gesture at theories of representation (e.g. Penn & Povinelli 2007’s allusion to Dretske 1988), they point at comparatively promiscuous informational theories that would count fairly minimal behavior-reading mechanisms as genuinely representing distal mental states. For example, Dretske 1988’s theory takes instrumental learning as the base case of genuine representation, and it would be a great surprise to the animal and infant social cognition literature if it turned out that all a subject needs to represent belief is instrumental learning. Surely this can’t be what the debate was about all along. I’ve suggested additional criteria in my own M&L paper on the topic (forthcoming), the key idea (inspired instead by the much more plausible theory of Dretske 1986) being that we should regard a state as representing a distal mental state when the animal displays a kind of open-ended ability to learn new cues as indicators of that distal mental state. If the explanandum also includes the trajectory of learning, this finally seems to put a stake in the heart of the behavior-reading challenge, since an ability to recruit new cues cannot be explained in terms of any particular set of proximal behavioral/situational cues. One might elaborate Apperly & Butterfill’s theory, then, by saying that bearers of their minimal ToM should, ceteris paribus, be able to recruit some set of new cues as indicators for distal registrations. (This direction does, however, impose a few additional burdens, in that empirically distinguishing a minimal ToM from behavior-reading will turn on more fine-grained aspects of the ontogenetic data. We shall have to take into account what animals can learn and when, instead of just what they can do at some particular time of testing.)
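
    To make the contrast concrete, here is a deliberately toy sketch in Python (every cue name and class here is a hypothetical illustration of my proposal, not a claim about any published model or about Apperly & Butterfill’s theory):

    ```python
    # Hypothetical sketch: a fixed-cue behavior-reader vs. a learner that can
    # recruit new cues as indicators of the same distal state. All cue names
    # are illustrative assumptions, not part of any published model.

    from typing import Dict, Set

    Observation = Dict[str, bool]  # e.g. {"line_of_gaze": True}

    class FixedCueBehaviorReader:
        """Predicts from a closed, hard-wired set of proximal cues."""
        CUES = ("line_of_gaze", "head_orientation")

        def predicts_registration(self, obs: Observation) -> bool:
            return any(obs.get(cue, False) for cue in self.CUES)

    class OpenEndedRegistrationTracker:
        """Treats registration as a distal state for which arbitrary new
        proximal cues can be recruited over learning."""

        def __init__(self) -> None:
            self.cues: Set[str] = {"line_of_gaze"}

        def recruit_cue(self, cue: str) -> None:
            # e.g. learning that a mirror angle, or a sound, also indicates
            # that the target registered the object
            self.cues.add(cue)

        def predicts_registration(self, obs: Observation) -> bool:
            return any(obs.get(cue, False) for cue in self.cues)
    ```

    At any single time of testing the two classes above are extensionally indistinguishable; only the ontogenetic data–which cues can be recruited, and when–separates them.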

    That all said, I join with Zawidzki in thinking that none of this should diminish our excitement about the excellent, specific, and empirically plausible theory of social cognition that Apperly & Butterfill have given us here. I offer the elaboration only to help prevent the behavior-reading challenge–which is useful when properly understood, but which can become unproductive when applied without confronting the underlying issues in the theory of representation–from distracting us from the theory’s many strengths.

    Buckner, C. (forthcoming). The semantic problem(s) with research on animal mindreading. Mind & Language.
    https://cameronbuckner.net/professional/semanticproblems.pdf

    Dretske, F. (1986). Misrepresentation. In R. Bogdan (Ed.), Belief: Form, Content, and Function (pp. 17–36). New York: Oxford University Press.

    Dretske, F. (1988). Explaining Behavior: Reasons in a World of Causes. Cambridge, MA: MIT Press.

    Fletcher, L., & Carruthers, P. (2012). Behavior-reading versus mentalizing in animals. In Agency and Joint Attention. Oxford: Oxford University Press.

    Lurz, R. W. (2011). Mindreading Animals: The Debate over What Animals Know about Other Minds. Cambridge, MA: MIT Press.

    • Steve Butterfill

      Cameron: Thank you, this is very interesting. First I want to check I understand. There are various objections associated with behaviour reading; three are:

      1. Is the conjecture that a particular group of subjects are using minimal theory of mind conceptually distinct from the conjecture that they are engaged in behaviour reading?

      2. Is the conjecture that a particular group of subjects are using minimal theory of mind empirically distinguishable from the conjecture that they are engaged in behaviour reading?

      3. Does the currently available evidence favour the conjecture that a particular group of subjects are using minimal theory of mind over the conjecture that they are engaged in behaviour reading?

      When we replied to Shannon Spaulding, we took her objection to be (1). By contrast, I take your comments to focus on (2) and (3). In arguing that registration is a mental state or very like one, we were attempting to address (1). I agree that this does not help to address (2) and (3) for the reason you give: ‘registration can itself be tracked using configurations of behavioral/situational cues.’

      Your comment has made me see something that I hadn’t seen before (but probably should have seen in thinking about Tad Zawidzki’s objection): how one answers questions (2) and (3) may well depend on views about the nature of representation generally. I need to think more about your specific proposal about learning; it is very interesting.

      What do you think about our suggestion that altercentric interference effects are evidence against behaviour reading?

  2. Tad Zawidzki

    I guess I’m a little puzzled both by Shannon’s assimilation of Steve and Ian’s view to a theory of behavior, and by Cameron’s skepticism about the proposal, which seems motivated by it. I think the issues here are fairly straightforward.

    We can determine what kind of theory some social cognizer employs in the following way. We ask: on what basis does s/he group different kinds of behavioral evidence into equivalence classes? Does she use their purely observable, physical properties to create equivalence classes on which to base her responses to conspecific behavior? Does she use hypotheses about their mental/cognitive etiologies to do so? Or does she use some other principle?

    The main arguments against proponents of theories of behavior all presume that the behavioral strategy involves grouping interpretive targets’ behaviors on the basis of their observable physical properties. Think of Gopnik and Meltzoff’s famous example of a dinner party – surely, they say (paraphrasing), we don’t see others as skin bags expelling noise from and shoveling food into orifices. They think the only alternative to this is to see them as minded agents. Rather than classify the behavior of our dinner party guests as fluctuations of a skin bag’s orifice, we classify it as expressions of desire, belief, etc.

    Given this false alternative, it’s clear that theories of behavior are inadequate, for reasons that Whiten articulated: we would miss important patterns in behavior. Two acts that both count as expressions of the same desire or intention or belief might have nothing physical in common. If we grouped others’ behaviors into equivalence classes based only on their observable, physical similarities, we would miss commonalities important for future social interaction, i.e., by treating physically similar yet psychologically distinct behaviors as the same, and psychologically similar yet physically distinct behaviors as different.

    But we needn’t agree that grouping behaviors based on mentalistic similarity is the only alternative to grouping behaviors based on observable, physical similarity. What if we grouped behaviors based on teleological similarity? E.g., two behaviors belong in the same equivalence class, for the purposes of future social interaction, if they aim at the same goal, understood as a non-actual, future state of affairs. They needn’t be physically similar to count as teleologically similar. E.g., a conspecific might have physically distinct ways of having food retrieval as its goal. In addition, behaviors might be grouped together based on a non-mentalistic notion of epistemic access. E.g., seeing food hidden by looking over one’s shoulder vs. looking at it directly would belong in different behavioral categories, but in the same epistemic access category. I think this is sort of what Steve and Ian mean by registration.

    I think non-mentalistic teleological and epistemic categories could avoid a lot of the problems with behavioristic categories pointed out by Whiten. They constitute a classification of behavioral evidence that is distinct from one based on physical properties, and that is often highly relevant to future social interaction. But they needn’t be mentalistic. We might just have an innate propensity to see different behaviors as aiming at the same goal (perhaps due to assumptions of efficiency of approach, a la Gergely & Csibra), and another innate propensity to see different behaviors as sufficient for epistemic access (i.e., what Steve and Ian call “registration” – all this means is that states of affairs to which an interpretive target is appropriately related, e.g., through line of sight, become relevant to figuring out what will constitute the most efficient means to accomplishing its goals).

    Here we achieve the kind of information compression Whiten thinks we gain through mentalistic lenses, but non-mentalistically. There is no need to investigate the mental etiology of these behaviors to see them as belonging to the same equivalence class. To the social cognizer relying on registration attributions, different ways of glancing at the location of a banana all count as registering the same banana not in virtue of causing the belief that the banana is there, but in virtue of relating the interpretive target to the banana in salient ways (e.g., via line of sight). To the social cognizer relying on goal attributions, different ways of approaching the banana all count as having banana retrieval as the goal not in virtue of being caused by the desire for the banana, but in virtue of being the most efficient means to retrieve the banana given contextual constraints (including the target’s recent registrations).

    So, as I understand it, the proposal is the following. Unlike theory of behavior (which groups disparate behaviors based on their observable physical properties), and full-blown theory of mind (which groups disparate behaviors based on their cognitive etiologies), minimal theory of mind groups behaviors based on abstract relations to actual (successful registrations) and non-actual (failed registrations and goals) states of affairs. Such groupings can effect many of the same savings in encoding to which Whiten appeals, without requiring anything as sophisticated as a mentalistic theory of the etiology of behavior. So you get the best of both worlds!
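
    To put the proposal in a toy, schematic form (all behavior descriptions, goals, and grouping keys below are hypothetical illustrations of my gloss, not anything Steve and Ian commit to), the grouping principles differ only in which key is used to build the equivalence classes:

    ```python
    # Toy illustration of grouping behaviors into equivalence classes by
    # different keys. Behavior records and field values are hypothetical.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Behavior:
        physical_form: str     # observable description, e.g. "reach_left_arm"
        goal: str              # outcome the act is directed at
        epistemic_access: str  # e.g. "line_of_sight_to_food"

    def group_by(behaviors, key):
        classes = defaultdict(list)
        for b in behaviors:
            classes[key(b)].append(b)
        return dict(classes)

    behaviors = [
        Behavior("reach_left_arm", "retrieve_food", "line_of_sight_to_food"),
        Behavior("climb_branch",   "retrieve_food", "line_of_sight_to_food"),
        Behavior("reach_left_arm", "groom_partner", "none"),
    ]

    # Theory of behavior: classes keyed to observable physical form.
    by_form = group_by(behaviors, lambda b: b.physical_form)

    # Minimal theory of mind: classes keyed to teleological and epistemic
    # relations, with no hypothesis about mental etiology.
    by_goal_and_access = group_by(behaviors, lambda b: (b.goal, b.epistemic_access))
    ```

    Physically identical acts (the two reaches) fall into different classes under the teleological/epistemic grouping, and physically disparate acts fall into the same class–the compression Whiten wants, without any hypothesis about mental etiology.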

  3. Tad Zawidzki

    Just a couple of other questions that occur to me, upon rereading Steve and Ian’s replies.

    First, I’m still not sure how to experimentally distinguish their proposal from the more Fodoresque one, on which all the signature limits revealed in experiments show only general performance limitations on a full-fledged mindreading competence.

    Second, I’m not sure why ability to manipulate some factor predictive of behavior is sufficient for some capacity to count as a theory of mind. Surely scientific behaviorists like Skinner successfully manipulated purely behavioral variables to control the behavior of their subjects. And what about mother birds feigning broken wings to distract predators from nests, or chimpanzees concealing erections or food caches or copulation cries? Are they all employing a theory of mind just because they’re manipulating relations between interpretive targets and salient environmental states or events? If all the properties that minimal mind readers represent might be observable, as Steve and Ian concede in their replies, I can’t see why the fact that they can be manipulated is enough to make them mental. I think they needn’t be behavioral, but surely our concept of mind connotes, at least, some unobservable causal factor between stimulus and response, no?

    • Steve Butterfill

      On your first question: How can we distinguish the conjecture that a particular group uses minimal theory of mind from the conjecture that this group uses full-fledged theory of mind? We say: the former conjecture generates predictions about signature limits which are not predictions of the latter conjecture. But you’re right, of course, that the latter conjecture is consistent with there being some limits on performance. This is a point we skipped over in our replies. Fortunately it’s not too difficult to handle.

      Broadly, there are two types of performance limit. One type is a limit deriving from things like the capacity of working memory, or abilities to understand a question. One way to deal with this limit is to set up two tests which are as similar as possible except that one involves tracking belief whereas the other doesn’t. (Compare Gopnik, Slaughter and Meltzoff 1994, p. 177: ‘Children in our studies … could answer questions literally identical to the false belief question, in tasks of indistinguishable performance complexity.’) Another type of performance limit concerns subjects’ understanding of a domain. If the subjects have no conception of market value, it won’t be informative to show that they fail to track beliefs about market value. We can deal with this kind of performance limit by testing subjects’ understanding. In the particular case of Hannes Rakoczy et al’s experiment where younger children fail to track false beliefs involving mistakes about identity, one worry was that the children might fail to understand identity — in which case it wouldn’t be informative that they failed to track false beliefs involving mistakes about it. Fortunately there is good evidence that even younger children (14-month-old infants) do understand enough about identity (Cacchione, Schaub & Rakoczy 2012).

      This is how to distinguish experimentally between the conjecture that infants are full-fledged mindreaders subject to performance limits and our conjecture that they use minimal theory of mind. I’m not suggesting that there is decisive evidence either way yet. It’s just that (1) the conjectures are empirically distinguishable and (2) there is enough evidence to take the minimal theory of mind conjecture seriously.

    • Steve Butterfill

      On your second point. I agree (of course) that not all causes of behaviour are mental. So, as you say, it’s not true that manipulating just any variable in order to alter another’s behaviour amounts to using a theory of mind. I think we’ve misunderstood each other. The challenge we took from your commentary was to explain why someone using minimal theory of mind is treating registration as an intervening variable. Our reply had three parts: registrations are distinct from things like encounterings, they are distinct from goal-directed actions, and they are treated as caused by the former and causes of the latter. These three things together entail that registrations are intervening variables (in the sense we offered). So it isn’t just that they are causes.

      Even so, your general line of objection is surely right. The examples you’ve given have a structure like A->C, so do not seem to involve intervening variables. But there are surely examples with a structure like A->B->C where A and C are behaviours and B is clearly non-mental (for instance, B might involve ingesting something). So we (Ian & I) shouldn’t accept Whiten’s intervening variable criterion as sufficient for theory of mind. We did accept it in our paper; now I admit we were wrong. Whiten is surely right that intervening variables are a core feature of any theory of mind; the problem is just that not all intervening variables—and probably not even all unobservable ones—are mental states.

      Does this mean we should give up the claim that minimal theory of mind is a theory of mind? I don’t think so. In one of our replies to Shannon Spaulding (§3.2), we offer an alternative line of argument, one that doesn’t depend on Whiten’s intervening variable criterion. I hope this line of argument will hold up better. In making that argument we’re also adopting an attitude inspired by some of your criticisms: we’re agnostic about where the line between mental and non-mental phenomena should be drawn and whether it might be a matter of degree; we’re merely trying to specify, in the context of research on social cognition, ways in which registration is relevantly like paradigm mental states.
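
      For readers who find the intervening variable idea abstract, here is a bare schematic (all cue and action names are hypothetical; this illustrates the compression Whiten describes, not our actual theory): with m kinds of input and n kinds of predicted output, positing an intervening variable B replaces m×n direct input–output rules with m + n links.

      ```python
      # Bare schematic of an intervening variable B mediating A -> B -> C.
      # With m input cues and n predicted actions, positing B replaces m*n
      # direct rules with m + n links. All names are hypothetical.

      ENCOUNTER_CUES = {"saw_object", "heard_object", "touched_object"}    # A
      ACTION_PREDICTIONS = ["searches_at_location", "defends_cache"]       # C

      def registers(observed_cues: set) -> bool:
          """A -> B: any encountering-type cue fixes the registration."""
          return bool(observed_cues & ENCOUNTER_CUES)

      def predicted_actions(registration: bool) -> list:
          """B -> C: the single variable drives all action predictions."""
          return ACTION_PREDICTIONS if registration else []
      ```

      As the examples above show, nothing in this structure is distinctively mental: B could as easily be ‘has ingested something’, which is exactly why the criterion can’t be sufficient on its own.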

      • Tad Zawidzki

        Just a quick question that occurred to me about this. Why aren’t goals intervening variables? Just as registrations are intervening variables between things like encounterings and goal directed actions, aren’t goals intervening variables between encounterings and behaviors characterized non-teleologically? That is, the same encounterings don’t always lead to the same non-teleologically characterized behaviors, so you need an intervening variable, i.e., the goal, to mediate between the two. Isn’t this just like the job played by registrations, which can explain cases where similar encounterings don’t lead to similar behaviors, even if goals are held fixed?

        • Steve Butterfill

          I mostly agree. Short version: it’s not the goal but a relation between the action and the goal that is an intervening variable.

          Longer version …

          First a tiny terminological clarification. We’re using ‘goal’ to refer to the outcomes to which actions are directed. So strictly speaking goals aren’t intervening variables between behaviours.

          Now note that someone might observe an action and represent the goal (i.e. outcome) to which it is directed without representing the action as directed to the goal. (Perhaps this happens sometimes when motor representations of outcomes occur in action observation.)

          Suppose I not only represent an action and the outcome to which it is directed but I also represent a relation between the action and the outcome. The obtaining of this relation is an intervening variable between encounterings and mere bodily movements and configurations for the reasons you give.

          This is maybe less exciting than it could appear because we’re thinking of minimal theory of mind users as representing relations between actions and goals rather than between agents and goals. Of course representing relations between agents and goals would amount to representing a state like intention, which is perhaps a more interesting candidate for an intervening variable. And there might be evidence that would push someone defending a conjecture about minimal theory of mind to revise the theory to include agent-goal relations.

          • Tad Zawidzki

            Makes sense, but one wonders how one could represent a goal as a goal (rather than just a situation) non-relationally. Goals have to be goals of something – an agent or an act – to be goals, no?

            I think this distinction between an action having a goal and an agent having a goal is very interesting. How might one experimentally test whether a non-verbal interpreter treats an outcome as the goal of an action rather than an agent, or vice versa? To treat an outcome as the goal of an agent, does the interpreter need a concept of agents qua agents? That is, do they have to understand that targets of interpretation are not just actions, but the persons or conspecifics that engage in them, and that these persons or conspecifics are enduring sources of an indefinite variety of actions?

          • Hm, this is interesting. I wonder: do you and Ian have a view about some of the recent suggestions that a deflationary approach to theory of mind might involve the projection of affordances (e.g. Bence Nanay has been talking about this, and I toy with it a bit)? If it’s affordances that are projected, then what’s represented would seem to have to be a relation to a particular agent. (I’m not yet aware of any specific empirical evidence that would help arbitrate between these two views, but the idea of projecting affordances has some phenomenological plausibility in some standard cases…)

  4. Hi Steve,
    Thanks for the reply! You’re quite right that there are distinct metaphysical, epistemological, and methodological issues here that should not be conflated. However, it’s important to point out that the issues with the theory of representation are in the first instance metaphysical and only derivatively epistemic and methodological; they are about the conditions that some state needs to meet to count as representing another state of a certain kind. This is so even more deeply than is typically recognized; the methodological issues are typically construed as becoming ontological only when combined with a standard appeal to parsimony (e.g. the standard idea that if behavior-reading and ToM are empirically indistinguishable and behavior-reading is simpler, then we should deny the existence of ToM on grounds of parsimony). That is not my worry, however; I think the parsimony issue is largely a red herring, while the metaphysical issues remain a serious component of the dispute between animal-and-infant ToM proponents and skeptics. These residual metaphysical complications with the theory of representation have not been widely acknowledged, I think, because when we’re contrasting full-blown theory of mind with behavior-reading, there’s little room for doubt that the two views are metaphysically distinct. But things become more complicated once we start looking for a more minimal or deflationary theory of mind. Let me explain.

    To be clear, let’s set all the epistemic and methodological issues aside for the moment and assume that we have a god’s eye perspective on structures, dispositions, relations—whatever could be relevant to deciding whether some state, structure, or ability counts as behavior-reading or ToM. Focusing on full-blown, metarepresentational theory of mind vs. behavior-reading, there’s little doubt that the two are conceptually distinct; they may be difficult to distinguish experimentally in nonlinguistic subjects, but the first will involve more complex inferential relations to representational concepts like belief that the latter will not. As we start moving downward in the scale of complexity, however, we can no longer take for granted that the increasingly deflationary theory of mind will be conceptually distinct from a behavior-reading capacity that operates on the same cues and posits exactly the same dispositions or causal relations to the environment. Suppose, for example, that we focus on the Whiten criteria, which are the closest thing we have to an ecumenical theory of representation in this debate. In order for some candidate minimal ToM ability to be correctly counted as representing an underlying distal state, it needs to meet some minimal criteria of flexibility. For example, perhaps it needs to unify some sufficiently large set of perceptually disparate cues that can be deployed across some sufficiently large set of perceptually disparate situations. These metaphysical criteria for representing a distal state are themselves fairly vague, however; if all an organism needs to do to represent a distal registration is to have a memory trace for an encountering, and encountering can itself be tracked with a small set of behavioral/situational cues, it’s less clear that this will be sufficient to meet criteria for counting as an intervening variable in the relevant sense. As Penn & Povinelli apply the criterion, for example, the subjects would need to display this ability in a fully domain-general way across a wide variety of perceptually disparate situations to count as representing an intervening variable, and given what they’ve said about line-of-gaze in the past it’s not clear that encountering+memory trace will be enough to meet their standard. (Now, I think the right response here is to say that they’ve presupposed an untenably restrictive metaphysical criterion on representation; but to make this into a worked-out response, one needs to confront the representational issues explicitly and offer a viable alternative.)

    If this is still a bit unclear, maybe it would help to compare the issue to competence conditions for possessing concepts. Suppose we’re considering whether someone possesses the concept FISH, but, looking from our god’s eye perspective, we see that they are reliably disposed to token FISH in the presence of dogs, cats, and birds. Moreover, they know that dogs are furry, that cats meow, and that birds have wings, and they think that FISH also have all these properties. It would be metaphysically incoherent, I suggest, to assert that such a person counts as possessing the concept FISH. Similarly, if we construe “intervening variable” as requiring a high enough bar, it would seem that registration won’t clear it, because it won’t unite enough cues, or cues with enough perceptual disparity, or in a domain-general enough way. Just as we can’t simply assert that such a person possesses the concept FISH, we can’t just assert that a subject who fails to clear the relevant metaphysical bar counts as representing an intervening variable.

    As just a little bit of evidence that this starts to become a metaphysical problem as we start looking for minimal theory of mind, you might compare the dispute between Santos et al. and Penn & Povinelli that I canvas in another paper here (sorry to plug again but it’s faster this way): https://cameronbuckner.net/professional/humesdictum.pdf . As I interpret their disagreement, it’s straightforwardly a metaphysical dispute about the essential properties of ToM, rather than an epistemic/methodological one about proper experimental design.

    I wanted to try to clear this up first, but I did not miss your question; let me think more about the altercentric interference effects, which are highly intriguing, and post again later if I can cobble together some more time to do what I want to do instead of all the other stuff filling up my day!

    • Steve Butterfill

      Hi Cameron,

      Thanks for your reply to my reply; this is very helpful for me. I think I understand your position better now (I’ve been a bit slow).

      Before I think about this more, I want to check that I’ve really got the point …

      We (Ian and I) are committed to holding that there is a distinction between a representation of some behaviour and a representation of a state like registration. I take it that we both agree that we can make this distinction for some kinds of representation. (After all, you and I can both make sense of the distinction, think about it and write about it.) The issue is whether we can make sense of this distinction for the particular kind of representations that underlie infants’, scrub-jays’, or apes’ theory of mind abilities. (I take it we can assume that representations of something underlie their abilities, just to simplify exposition.)

      So the problem arises at the intersection of an empirical and a metaphysical issue. For a particular group of subjects and a particular kind of process, we need to ask, What is the correct metaphysical account of the kind of representations that feature in this process? And your point is that, for all we’ve said, the correct account of this kind of representation might not allow us to make the key distinction between representing some behaviours and representing registrations.

      Have I got it now?

      ***

      PS: I don’t buy your FISH example because I’m one of those awkward folk yet to be convinced that possessing the concept is incompatible with having systematically false beliefs about the thing. Nor have I found a good argument for the claim that possessing the concept is incompatible with having systematically incorrect dispositions to apply it. I’m afraid this makes me awkward, but I think we can probably make progress without having to agree on these points.

      An alternative type of illustration might go like this. We can distinguish motor representation from the kind of representation involved in deciding whether to visit Paris next Spring. There are things that cannot be represented motorically. Plausibly, limits on what can be represented motorically follow from an account of the particular kind of representation that is motor representation.

      • Hi Steve,

        Yes, I believe you’ve got me now. Whether you can make the kind of distinction you need between representing the perceptual evidence for registrations (which can include complex configurations of situational and behavioral cues, on most behavior-readers’ pictures) and representing registrations (understood as the kind of state that behavior-readers have denied that animals and infants below the critical age can represent) depends upon your theory of representation. In particular, it depends upon the criteria one requires to represent a distal state rather than just the proximal evidence for that state, which is a very general problem facing the theory of representation. My view is that the solution one adopts to that more general problem will determine how one should interpret the experimental evidence considered in the ToM debate.

        On the FISH example, I didn’t mean that one couldn’t have the concept FISH while also having lots of systematically false beliefs about fish–there are stories we can tell here that would make this intelligible–but just that somewhere in all those false beliefs there would still need to be some reliable connection to the property of being a fish. There have to be *some* competence conditions satisfied for a representation to count as a representation of some distal property–whether those conditions be internal or external; atomist, molecularist, or holist; or whatever. But anyway I believe your illustration works too and may be clearer.

        To sharpen the challenge, it can help to note that the behavior-reading theorists take the class of behavior-reading strategies to include a wide variety of highly sophisticated and flexible cognitive abilities. Consider the following blurb from Penn & Povinelli’s forthcoming “The Comparative Delusion” paper:
        “…we know that nonhuman animals form highly structured representations about the past as well as the occurrent behavior of other agents. They are not limited to learning about statistical contingencies between behavioral cues in an ad hoc or bottom-up fashion. They often do represent the special causal structure among behavioral cues, particularly the goal-directed relation between other agents’ perceptual acts and how those agents are likely to behave. And they are able to generalize this top-down causal knowledge in an inferentially flexible and ecologically rational (i.e., adaptive) fashion.” In fact, the only abilities that would *not* be classed as members of this set are those that allow agents to “recognize relational similarity between perceptually disparate behavioral patterns in terms of the common causal role played by some unobservable mental state.” (A little footnote here: I would also quibble with the ‘unobservable’ that they always insist on inserting here, on the ground that it’s a bit behind the times with regard to philosophy of perception–but I think a substantive challenge remains even if we remove it.)

        • Steve Butterfill

          Hi Cameron,

          Sorry for the slow reply, I’m still thinking about the points. We agree that:

          (1) There are theories of representation that allow there to be a fact of the matter concerning whether what’s represented includes registration &c or is behaviour only; call these ‘discriminating’ theories of representation.

          (2) For the thoughts of human adults, a discriminating theory is needed.

          Also, very straightforwardly,

          (3) In cases (if there are any) where the representations which underpin a theory of mind ability are such that the correct theory of representation is non-discriminating, it makes no sense to offer a conjecture about minimal theory of mind.

          So the question is whether for other cases, e.g. the representations that underpin infants’ abilities to track beliefs and other mental states, a discriminating theory of representation is also needed.

          My first thought is this: whatever turns out to be true about the kind of representations that enable infants to track mental states might well be true of the kind of representations that underpin their abilities in other domains, e.g. to track physical objects and their causal interactions.

          If that’s right, applying a non-discriminating theory will create problems not only for a conjecture about minimal theory of mind, but also for views on which infants are able to represent intervening variables in the case of physical objects. So (if I haven’t made a mistake yet), one way to approach your challenge might be to consider evidence from other domains.

          Now I realise that this is going backwards. The idea is that we can work out whether a discriminating theory of representation is needed in a particular case by seeing whether there seems to be evidence to distinguish relevantly different conjectures about abilities. Do we still agree so far?

          • Hi Steve,

            If I’ve understood you, then yes, I think this is all correct. But note that this way of thinking of matters–supposing that the behavior-reading challenge to evidence of animal/infant ToM will be tied to the plausibility of a broader skepticism concerning their ability to represent a particular type of content in other domains such as causality–is exactly how Penn & Povinelli pitch their challenge. The BBS paper with Holyoak is the place to look–and they think the empirical evidence supports their position that animals and infants also fail to genuinely represent underlying causes, transitive relationships, and a variety of other “second-order” relations that adult humans characteristically represent.

            So, my strategy to rebut the behavior-reading challenge is to note that if we systematically applied the same rigorous skepticism to the cognitive abilities of adult humans, we would typically find that they, too, should count as limited to representing perceptual appearances only. (The exception may be, of course, experiments where the materials/probes rely upon language–but the behavior-readers crucially do not typically want the only difference between genuine and ersatz ToM to be the presence or absence of language.)

            However, I think that the type of approach you’re sketching here–to consider more broadly what theory of representation would be suitable for many different domains, including e.g. cognition about physical objects and causal relationships–is exactly the right way to go about deciding what theory of representation ought to undergird claims about infant and animal ToM.

  5. It is often useful in considering concepts like these to ask what happens if we try to apply them to a robot or computer program.

    There is surely no doubt that, at least in principle, a robot (controlled by a computer program) could have theory of mind abilities. But could a robot have theory of mind cognition — minimal or otherwise? The answer is not entirely clear, because the definitions involve concepts such as “belief” and “understanding” that are difficult to apply to a computer program.

    It’s important to get this straight, because if we don’t know how to apply the concept to a robot, we surely won’t be able to apply it to animals. For a robot we not only know the behavior, we know precisely how it is implemented. For an animal, we are very lucky if we know anything more than the behavior.

    So the question I am asking is, could a robot, in principle, have minimal or full theory of mind, and if so how would we decide whether it does?

    • This is a great question, I think.
      I would love to see more attempts to implement minimal theory of mind, full-blown theory of mind, behaviour reading, and other variants, because one thing this really forces one to do is commit to the details of how any of these things might work. The outcomes from such modelling/implementation can sometimes be hard to intuit or derive analytically, and one frequent outcome is that an apparently simple problem turns out to be rather hard to solve.

      For what it’s worth, my thinking is that the reason why nobody has managed a serious implementation of belief-desire reasoning (to the best of my knowledge) is that it runs into a notorious “hard problem” in AI – the frame problem. Belief inferences, and many other theory of mind inferences, are frequently abductive inferences to the best explanation, which means there’s no principled limit to the information that might be relevant. For anything other than rather simple systems operating in simple AI “worlds”, this means that such inferences are either computationally intractable, or that there needs to be some way of limiting search to just that information that is most relevant. Since that set will vary widely, from one belief inference to the next, this is a genuinely difficult problem.
      My hope is that minimal theory of mind points towards a way of making at least a simple version of theory of mind tractable, because it reduces the complexity of the states that might be ascribed, and assumes that these do not interact in arbitrarily complex ways. But I also think it falls short in spelling out the simplifying assumptions that would be necessary for a tractable artificial implementation. There’s clearly some work to be done here. However, in answer to your question about how one could tell the difference between full-blown and minimal theory of mind, the answer lies in the fact that the “simplifying assumptions” that need to be made in order to make a tractable implementation will lead to distinctive signature limits on the kinds of theory of mind processing that the system could achieve.
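
      To give a flavour of what I mean, here is a deliberately crude sketch of one such implementation (my own toy, under strong simplifying assumptions–not a model the paper commits to): registration is stored as a relation between an agent, an object as it appears, and a location, updated on each encounter. Because objects are individuated by appearance, false beliefs involving mistaken identity are invisible to the system–one signature limit falls directly out of the simplifying assumption.

      ```python
      # Toy sketch of a "minimal theory of mind" tracker (illustrative only).
      # Registration = (agent, object-as-it-appears) -> location, fixed by the
      # most recent encounter. Objects are individuated by appearance, so a
      # false belief about identity (one object under two appearances) cannot
      # even be ascribed -- a signature limit of the simplification.

      class MinimalToMTracker:
          def __init__(self):
              self.registrations = {}  # (agent, appearance) -> location

          def observe_encounter(self, agent, appearance, location):
              self.registrations[(agent, appearance)] = location

          def expected_search_location(self, agent, appearance):
              # Predict that the agent acts on its registration, correct or not.
              return self.registrations.get((agent, appearance))

      tracker = MinimalToMTracker()
      tracker.observe_encounter("Sally", "marble", "basket")
      # The marble is moved to the box while Sally is away; her registration
      # is unchanged, so the tracker predicts a false-belief-style search:
      assert tracker.expected_search_location("Sally", "marble") == "basket"
      ```

      Because the state space is just triples of agent, appearance, and location, and the states are assumed not to interact, inference stays trivially tractable; the price is exactly the signature limits of the kind described in the target article.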

  6. Hi All,

    Thanks to John Schwenkler for initiating such an interesting discussion! Sorry to be tardy to the party. There’s a lot to process here, and I’m still making my way through much of it. But I thought I’d chime in to try to better articulate the main point in my commentary.

    It seems to me that someone like Povinelli or Perner, who explains non-human animal and (some) infant social cognition purely in terms of behavioral rules, *could* adopt the conceptual schema proposed by Apperly and Butterfill. The goal of such accounts is to explain the social cognition of these creatures without attributing to them conceptual knowledge of mental states. So far as I can tell, there is nothing in A&B’s account that is incompatible with that goal. A&B’s minimal Theory of Mind account does not involve metarepresentation, and therefore it does not involve conceptual knowledge of mental states.

    Central to A&B’s account is representing relations amongst agents, objects, and (sometimes) locations. This is not how behavioral rules proponents actually describe the social cognition of infants and non-human animals. It’s not that I think this *just is* the behavioral rules account in different clothes. Rather, the claim is that they could adopt this view without any inconsistency. So, the question is what, if anything, is the theoretical difference between minimal theory of mind cognition and the behavioral approach?

    A&B clearly take their account to differ from a Perner/Povinelli type of account. I’m trying to figure out what exactly differentiates them, that is, what exactly makes this “minimal Theory of Mind” theory of mind cognition at all. Why isn’t it simply theory of mind ability?

    Like Cameron Buckner, I think answering this question requires thinking more about what a mental state is, what a representation is, and the difference between representing a mental state and representing a mental state *as such.* I don’t think we can skirt these questions if we’re trying to discriminate a behavioral account from a minimal theory of mind account. We’ve got to address them head on.

    A&B have some things to say about these issues — I’m not accusing them of completely skirting the issue! — and I’m still processing their responses here and in the official commentary. But I figured it would be worthwhile to articulate the concern again in a slightly different way.

    Finally, thanks to Steve Butterfill and Ian Apperly for such an interesting, thought provoking paper! It’s sure to pave the way for more enlightened discussion of theory of mind. I, for one, have benefited from reading their articles, and I look forward to thinking more about these ideas.

    • Steve Butterfill

      Hi Shannon,

      Thanks for these very useful comments! There’s one thing that we agree on now, I think. We (Ian and I) agree that having minimal theory of mind does not necessarily require any conceptual knowledge at all if infants’ representations of things like number, cause, agency and so on are not conceptual. Of course we probably still disagree on whether having minimal theory of mind involves representing mental states, but I don’t think much hangs on that.

      I agree with you that it’s hard to get a fix on deep theoretical differences in motivation. I think there’s a good reason for this, though. In debates where theorists like Call, Tomasello, Clayton and Emery oppose advocates of behavioural approaches like Vonk, Penn and Povinelli it used to be assumed that there’s a clear distinction between the mental and the behavioural. More recently, several people have suggested, in effect, that this distinction cannot bear the theoretical weight it’s supposed to carry. This is because there are both different notions of behaviour and different notions of the mental. If Tad Zawidzki’s project succeeds, or ours does, I think one side effect will be that key debates about theory of mind abilities are no longer cast in terms of an unexamined dualism between the behavioural and the mental.

      Let me try to put this idea another way. Suppose someone makes the conjecture that a particular group’s theory of mind abilities depend on their using a particular associationist model of behaviour. It’s easy to see how this conjecture differs from the conjecture that this group is using a minimal theory of mind. What’s hard, though, is to find a useful, theoretically motivated way of classifying the two conjectures at a more abstract level.

      I want to separate this issue of classification from another, related one. Suppose we agree that being able to represent belief is hard in the sense that it requires more in the way of cognitive resources like working memory and inhibitory control than many individuals have. (Of course not everyone agrees about this.) Then we should ask, Why is being able to represent belief hard in this sense? What is it about beliefs that makes representing them so hard? Here there are a number of factors. You might conjecture that it’s the fact that they’re mental states, or that it’s the nature of their contents, or that it’s the nature of the attitudes (perhaps their normative aspects, or the complexity of the causal roles in terms of which the attitudes are characterised), or that it’s the possibility of incorrectness. Or maybe it’s some combination of these and other factors. There are some illuminating attempts to distinguish these conjectures, for example by comparing beliefs with signs, and beliefs with desires. My sense is that we don’t yet know for sure what it is about belief that makes representing beliefs so hard, and that finding out would be informative about the nature of theory of mind abilities in infants and nonhuman animals.

      You asked why to use minimal theory of mind is not only to have a theory of mind ability but also to enjoy theory of mind cognition. The short answer is: theory of mind cognition is cognition that involves representing mental states; using minimal theory of mind involves representing registration, which is a mental state or something resembling one, and thereby counts as theory of mind cognition or something resembling it. (In this context, resemblance is to be understood by reference to the notions of a mental state implicit in empirical theory of mind research.) But you’re right, of course, that there are tricky issues about representation here, as there are in the case of number-related representations. And, as you and Cameron Buckner suggest, at least some of these issues have metaphysical roots. I want to come back to this later in reply to Cameron’s reply to my reply.

  7. Enoch Lambert

    One thing I’m not sure I understand is why registrations are not representational. In addition to being caused by encounters and causing behaviors, they have correctness conditions. There are many kinds of relations that can obtain between actors, objects, and locations. I would have thought that correctness conditions would distinguish the representational from the non-representational ones. What have I missed?

    Is there a bibliography available for the references in T. Zawidzki’s commentary?

    • Steve Butterfill

      Hi Enoch — I wanted to reply to your question about why registrations are not representations. I agree that we shouldn’t have said outright that registrations are not representations for just the reason you give. That is, if having correctness conditions is sufficient for a state to be a representation, then registrations are representations.

      I realised after we wrote the paper that we made a mistake in asserting outright that registrations are not representations. In writing the paper we were thinking a lot about Josef Perner’s view and what would be sufficient for something to be a metarepresentation on his view. I hope we got this much right: representing registrations wouldn’t count as metarepresentation on Perner’s view. (Ian please correct me if that’s wrong.)

  8. Just a quick note about the altercentric interference criterion: as I understand it, it’s a very interesting idea that might help us figure out what nonlinguistic agents are tracking, but it’s the least likely of the four ideas offered (i.e. focusing on explanation, altercentric interference, self-other projection, and goal-information priming) to help with the metaphysical problem of distinguishing behavior-reading from minimal ToM. For a comparison, people have really liked the self-other projection model first suggested by Heyes and further pursued by many others because it would, in principle, allow one to maximize the perceptual disparity of the learning and test situations, and so provide the strongest evidence for the use of an intervening variable. Focusing on explanation is promising too, because it shifts the explanandum out of behavior-reading’s purview. By contrast, my hunch is that behavior-readers would have much less difficulty explaining the altercentric interference effects, because they already presume that agents are routinely computing the line-of-gaze of other agents. (Lurz even appeals to the idea that agents also routinely compute their own line-of-gaze to give a behavior-reading explanation of one of Povinelli’s proposed designs, and notes that doing so might be trickier than computing the line of gaze of others (since in a way you have to adopt a third-person perspective on yourself), which could even give a principled explanation of some of the subtler altercentric effects.) Perhaps the Samson 2010 study already controlled for this somehow (I only gave the paper a quick read-through), but why couldn’t the behavior-readers just say that the avatar’s line-of-gaze to object(s) at another location increases the salience of that location, interfering with the subject’s ability to think about objects at other locations?

    At any rate, I don’t think any of this should impugn the interestingness of the data collected by the experiment, which everybody should take into account in designing models of social cognition. In general, I’m a little out-of-character defending the behavior-reading doctrine here–I came to bury this objection, not to raise it!–but having danced with the skeptics a bit, I’ve found that their feet are nimbler than I originally supposed…

    • These concerns about the interpretation of altercentric interference are reasonable, and there’s quite a bit of on-going work that will eventually bear on the right interpretation. For now let me just highlight three published reasons why my bet is against “attention cueing”/behaviour reading as an alternative interpretation.

      Firstly, it seems to matter that the location the avatar is looking at is the avatar’s perspective. Samson et al (2010) report that in cases where participants and the avatar saw the same number of discs on the wall, participants responded fastest of all when they explicitly judged the avatar’s perspective, compared with when they explicitly judged their own perspective. Note that the stimuli are identical in these cases, with identical opportunities for the avatar to cue attention. Yet participants responded fastest when they used information from the “cued location” to answer a question about what the avatar saw.

      Secondly, Surtees & Apperly (2012, Child Development) had a baseline condition in which the avatar was replaced with a vertical oblong “stick” with yellow on one side. On “yellow side” trials participants judged how many discs were on the yellow side of the stick (analogous to judging how many discs could be seen by the avatar). Over trials it might be expected that participants would begin to attend preferentially to information on the yellow side of the stick, yielding effects analogous to those that might arise from preferentially attending to discs in the avatar’s line of sight. No such effects were found.

      Finally, and most compellingly in my view, altercentric interference was also observed by Kovacs et al. (2010, Science) and Van der Wel et al. (2013, Cognition). Each study used quite different paradigms, with time-courses and stimulus sequences that are really not compatible with altercentric interference arising from simple cueing of attention by an avatar. This does not speak directly against the attention-cueing explanation of our results, but it does suggest that attention cueing can’t explain altercentric interference in general.

      Of course, I have lost bets in the past!

  9. Tad Zawidzki

    Here are the references to my commentary. Apologies for the lateness, and for mislabeling Call and Tomasello (2008) as “Tomasello and Call (2008)” in the commentary.

    Andrews K. 2007. Critter psychology: On the possibility of nonhuman animal folk psychology. In Folk Psychology Re-Assessed, ed. DD Hutto, M Ratcliffe. Dordrecht: Springer.

    Andrews K. 2008. It’s in your nature: A pluralistic folk psychology. Synthese 165(1): 13-29.

    Andrews K. 2012. Do apes read minds? Toward a new folk psychology. Cambridge, MA: MIT Press.

    Apperly IA. 2011. Mindreaders. Hove: Psychology Press.

    Apperly IA, Butterfill SA. 2009. Do humans have two systems to track beliefs and belief-like states? Psychological Review 116(4): 953-970.

    Baillargeon R, Scott RM, He Z. 2010. False-belief understanding in infants. Trends in Cognitive Sciences 14(3): 110-118.

    Call J, Tomasello M. 2008. Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences 12: 187-192.

    Dennett DC. 1987. The Intentional Stance. Cambridge, MA: MIT Press.

    Fodor JA. 1992. A theory of the child’s theory of mind. Cognition 44(3): 283-296.

    Gergely G, Csibra G. 2003. Teleological reasoning in infancy: the naive theory of rational action. Trends in Cognitive Sciences 7(7): 287-292.

    Hutto DD. 2008. Folk psychological narratives: the sociocultural basis of understanding reasons. Cambridge, MA: MIT Press.

    Leslie A. 1987. Pretense and representation: The origins of “theory of mind”. Psychological Review 94(4): 412-426.

    Maibom H. 2007. Social systems. Philosophical Psychology 20(5): 557-578.

    Marr D. 1982. Vision. San Francisco: W.H. Freeman and Company.

    Millikan RG. 1984. Language, Thought, and Other Biological Categories: New Foundations for Realism. Cambridge, MA: MIT Press.

    Povinelli DJ, Vonk J. 2004. We don’t need a microscope to explore the chimpanzee’s mind. Mind & Language 19(1): 1-28.

    Sterelny K. 2003. Thought in a Hostile World. Oxford: Blackwell.

    Zawidzki TW. 2011. How to interpret infant socio-cognitive competence. Review of Philosophy and Psychology 2: 483-497.

    Zawidzki TW. 2012. Unlikely allies: Embodied social cognition and the intentional stance. Phenomenology and the Cognitive Sciences 11: 487-506.

    Zawidzki TW. 2013. Mindshaping. Cambridge, MA: MIT Press.

  10. Cameron Buckner

    Just a quick reply @Enoch: The question isn’t whether the states tracking registrations are representational, full stop, but whether they represent distal mental states or merely their proximal situational/behavioral evidence.

  11. John Michael

    Hi Steve and Ian:
    I was just reading through your excellent reply to the commentaries and stumbled upon something that troubles me:
    In sec. 3 of the reply (p. 11) you write: “registration is only defeasibly linked to encountering; someone using minimal theory of mind may infer that an individual registers an object at a location even though that individual has never encountered the object at that location”
    Could you please help me to grasp how this is compatible with the definition you offer in the paper of registration as “the relation between an individual, an object and a location which obtains when the individual most recently encountered the object at that location”?
    I can see how your account would allow that a minimal mindreader might infer that an individual registers an object at a location even though the individual never in fact encountered it at that location. They would simply be mistaken. But if representing the individual as registering the object at that location is defined as representing them as having most recently encountered it there, how could one represent them as registering it at that location without representing them as having most recently encountered it there?

    In case it is not obvious: the reason why this is relevant is that there are a few infant mindreading studies where the infants’ expectations cannot easily be accounted for in terms of encounterings (e.g. the Träuble et al. study, where the adult does not expect the object to be where she last encountered it because she manipulated the weird apparatus; the Song et al. 2008 study, where somebody informs the agent that the object has been moved from where she last encountered it; the forthcoming Kovacs study, where the agent never encounters the object at all).
    Thanks!

    • Steve Butterfill

      Hi John:

      Thanks for this question. The quote you use isn’t actually the way we define registration in the paper (for just the reasons you give), although it is one component of a definition. The idea is this: registration is defined by all the principles mentioning it. By design, it won’t always be possible to ascribe registrations in ways that make all of the principles true; where this happens, consistency is to be achieved by having principles mentioned later trump principles mentioned earlier (see p. 617).

      I’m sorry we didn’t make this much clearer.
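      To make the trumping idea concrete, here is a minimal sketch in Python (an illustration only, not the formalism from the paper; the locations, evidence, and principles are hypothetical toys) of how ascribing a registration might work when the principles conflict and later principles trump earlier ones:

      # Toy world: one object that can be registered at one of two locations.
      locations = ["box", "basket"]

      # Hypothetical evidence about an agent: she last encountered the object
      # in the box, but was subsequently told it had been moved to the basket.
      last_encounter = "box"
      testimony = "basket"

      # Principles mentioning registration, in the order they are introduced.
      # Each takes a candidate registration and reports whether it is satisfied.
      principles = [
          lambda reg: reg == last_encounter,  # earlier: register where last encountered
          lambda reg: reg == testimony,       # later: update in light of testimony
      ]

      def ascribe(candidates, principles):
          """Pick the candidate whose satisfaction profile is lexicographically
          best, treating later principles as most significant, so that later
          principles trump earlier ones whenever they conflict."""
          return max(candidates,
                     key=lambda reg: tuple(p(reg) for p in reversed(principles)))

      print(ascribe(locations, principles))  # -> "basket": the later principle wins

      The point of the lexicographic ranking is that consistency is achieved without conjunction: when no candidate ascription satisfies every principle, a candidate satisfying a later principle beats any candidate satisfying only earlier ones.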

  12. John Michael

    Ah, the passage I quoted was from the version of the paper that had been on your website…I see now that you’ve made some changes…will have to read the latest version thoroughly, but after a first read-through of the revised passage it seems that the relevant change is that a minimal mindreader can infer that an agent registers O at L because she is successfully acting on it, even if the minimal mindreader never saw the agent encounter O at L – but this would not change the fact that what she would be inferring is that the agent had encountered O at L and that this was the most recent encounter.
    But you apparently do not want to say this… so is it correct that on the current view there are no constraints at all on the ways in which a minimal mindreader can represent an agent as having come to register an object at a location?

  13. John Michael

    …of course I mean no constraints apart from the signature limitations…which are limitations on the kinds of contents that can be ascribed, and so of course they will also constrain the ways in which a minimal mindreader can represent an agent as having come to register O at L – e.g. the agent can’t have arrived at this registration by an inference that appeals to an identity statement….

    But it is still non-trivial to say that there are no other constraints: the previous version, according to which representations of registrations were constrained by representations of encounterings (nothing could be represented as being registered unless it was also represented as having been recently encountered), was analogous to the case of number cognition insofar as the signature limitations on system 1 number cognition in infants and non-human animals are thought to stem from the fact that system 1 number cognition relies upon analog magnitude representations (Carey, 2009, p. 118; see also Dehaene, 1997; Gallistel, 1993). In contrast, your signature limitations are derived from reflection about the kind of representational format that you take to be unavailable to children but available to adults (i.e. linguistically encoded representations). Making registrations dependent upon encounterings would generate a positive hypothesis about the representational formats that underpin infant mindreading, which might actually explain the signature limitations (as in the case of number cognition)…

    • Steve Butterfill

      Hi John,

      The way we set up minimal theory of mind, encountering is defined in terms of the field. This means that it is possible to register that Object is at Location without having last encountered Object at that Location (and without ever having encountered it).

      At this level of detail it’s hard to provide motivation by appeal to specific findings. What seems reasonably clear is that registration must have multiple potential sources (rather than being tied exclusively to encountering) if the conjecture that infants can succeed on false belief tasks using only minimal theory of mind is true. In this respect, there’s a rough similarity between registration and belief: belief is not tied exclusively to perception.

      I think I’m missing something in your remarks about signature limits and the comparison with the case of number. I think we agree in understanding representational format like this: a single route might be represented by a line on a map or by a verbal description. Here we have the same content (the same route) represented in different formats. Now the conjecture about infants using minimal theory of mind is a conjecture about content, not format. The limits we identify follow from the limits of the theory, rather than limits of the format of the representations which underlie use of the theory.
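      For what it’s worth, the route example can be made vivid with a toy sketch in Python (the route data and helper functions are my own, purely illustrative): the same content, the same route, is carried by two different formats, and a consumer can recover that content from either.

      # Format 1 ("map-like"): the route as a sequence of coordinates.
      route_as_line = [(0, 0), (0, 2), (3, 2)]

      # Format 2 ("verbal"): the same route as a description.
      route_as_text = "go north 2 blocks, then go east 3 blocks"

      def endpoint_from_line(line):
          """Read the destination off the map-like format."""
          return line[-1]

      def endpoint_from_text(text):
          """Read the destination off the verbal format."""
          x = y = 0
          for clause in text.split(", then "):
              words = clause.split()  # e.g. ["go", "north", "2", "blocks"]
              heading, distance = words[-3], int(words[-2])
              if heading == "north":
                  y += distance
              elif heading == "south":
                  y -= distance
              elif heading == "east":
                  x += distance
              elif heading == "west":
                  x -= distance
          return (x, y)

      # The same content is recoverable from either format:
      assert endpoint_from_line(route_as_line) == endpoint_from_text(route_as_text) == (3, 2)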

      It seems to me that, as things stand, there’s much we don’t know about the mechanisms underpinning theory of mind in infants (or adults), and so about representational formats. (There’s also much we don’t know about what distinguishes the tasks that infants can pass from those they can’t, as Rubio-Fernandez and Geurts argue.*) I agree if you’re suggesting that lots of interesting signature limits will follow from a detailed conjecture about representational format.

      *Paula Rubio-Fernandez and Bart Geurts. How to pass the false-belief task before your fourth birthday. Psychological Science, 2012.

  14. John Michael

    Hi Steve,
    Thanks for this. Yeah, my main point is simply to note a difference that I had not noticed, and that I think is perhaps important, between the status of the signature limits in your theory and in, say, Susan Carey’s work on number cognition. She can motivate her signature limits by starting out with a claim about the representational format that underlies subitizing, and by drawing upon independent evidence about what that kind of format allows and what it does not allow…so the distance and magnitude effects follow from limitations on analog magnitude representations in other domains…
    OK, now in the case of belief reasoning and precursors/early versions thereof, we don’t have this detailed knowledge, unfortunately, so the concern is that the signature limits are arbitrary. I think more needs to be said about exactly why these signature limits follow from the theory and not some other signature limits….

    • Steve Butterfill

      Hi John,

      I’m puzzled by this. The signature limits we identify are consequences of the theory. That is, we can show that the theory makes systematically incorrect predictions about what will happen in certain situations (just as, in the physical case, can be done for the impetus theory). Some of these predictions are signature limits. It’s hard to see what more could be said about why these signature limits follow from the theory.

      While I think it would be great to identify predictions about limits from conjectures on aspects of implementation like representational format, it does seem that, in theory at least, the most straightforward kind of signature limit is the kind that follows from the theory itself.

      What am I missing?

      • John Michael

        Hi Steve,
        Sorry for the slow reply; I have been thinking a lot about this (and, once again, I must emphasize that you have done such a great job of giving the rest of us lots to think about and thereby moving this discussion forward!)…it is almost certainly me who is missing something, not you, but perhaps it is nevertheless worthwhile trying to articulate my concern as to whether the signature limits you propose actually do follow from minimal ToM as it now stands (i.e. without reference to the formats of the representations underlying minimal ToM).
        So the issue for me is to understand why the theory enables minimal mindreaders to succeed at the Träuble et al. and Song et al. (2008) tasks but not at tasks that involve identity or quantification.

        The reason you give for why the two main signature limits you discuss follow from minimal ToM is this:
        “The theory makes use of objects and their relations to agents, rather than representations of objects, to predict others’ behaviours. This means that false beliefs involving quantification or identity cannot be tracked by representing registrations” (p. 622).

        What is the relation between the agent, the object and the location? As long as registering was linked to encountering, a natural thing to think about registration was this: when a minimal mindreader goes about predicting an agent’s actions, she can utilize a representation of the agent, the object and a location standing in a particular relation, namely of the object being in the agent’s field, which is basically a spatial relation (i.e. the object being in front of the agent), albeit with some qualifications (no occluder, normal lighting). So even if the agent or the object has been moved, so that the agent is no longer encountering the object at that location, then as long as there has been no subsequent encounter, the agent will aim to restore this relation if she initiates an action on the object, i.e. she will go to that location to act on the object (even if the object is no longer there, in which case she may therefore fail in her attempt).
        OK, so that is apparently not the right way to understand it, since registration is now explicitly decoupled from encountering. So is the minimal mindreader using a representation of a spatial relation? If so, is it a representation of a possible or a likely spatial relation?
        If a minimal mindreader can represent an agent as updating her registration of an object’s location without representing the agent as encountering the object again, and I guess without representing her as ever having encountered it in the first place, can she represent an agent as registering O at L without even being able to represent encounterings or fields?
        Thanks in advance if you have a moment to try to help me to understand this.

