Radicalizing Enactivism? Not Yet…

In my Philosophy of Mind class, we just finished reading D. Hutto and E. Myin (2013), Radicalizing Enactivism, MIT Press.

One good thing about this book is the way it carefully distinguishes various “enactivist” theses.

1. The mind is embodied, embedded, enactive, extended causally; that is, the mind causally interacts with body, environment, and whatnot. True but trivial.

2. The mind is embodied, embedded, enactive, extended in the format of its representations. Plausible on reasonable construals but pretty trivial.

3. The mind is embodied etc., constitutively; that is, parts of the body, environment, action are also parts of the mind. Possibly nontrivial (although questionable).

4. The mind is embodied etc. constitutively in a way that rules out representationalism. Highly nontrivial!!!

Hutto and Myin defend (4) but they don’t have much new to say to defend it. Their argument is two-pronged.

Prong 1: Remember Brooks’ robotics program from the 1980s and some dynamical explanations of cognitive capacities that supposedly do not appeal to representations? On that basis, Hutto and Myin say that we can explain some (simple) “cognitive” capacities without representations; therefore, they imply we might be able to explain most of cognition without representations. This is a straightforward non sequitur, whose conclusion is extremely implausible in light of the failure to date of said (allegedly) nonrepresentational explanations to scale up to any cognitive capacities of significant sophistication.

Prong 2: Representation requires a naturalistic theory of content, and existing teleosemantic theories of content (à la Millikan, Dretske, etc.) fail to do justice to representational properties of linguistic intentionality (reference, truth, and inference). This is questionable, but even if true, nothing follows about whether cognition involves representation. So Hutto and Myin add that anything that falls short of linguistic intentionality is not genuine representation, even if it can be accounted for by naturalistic teleosemantic theories like those of Dretske and Millikan. At this point the dispute becomes merely verbal. Cognitive scientists call them representations; Hutto and Myin don’t like that. Well, is it ok if we call them schmepresentations? (Hutto and Myin also claim that nothing naturalistic can bridge the gap between representation à la Dretske and Millikan, which by their lights is not genuine representation, and genuine linguistic representation. This claim is highly dubious.)

The most novel point that I found is Hutto and Myin’s claim that we should simply reinterpret representational cognitive science as dealing not in representations but in meaningless “signs” – except when we come to linguistic cognition, which is genuinely contentful. Instead of teleosemantics, they call their theory teleosemiotics – but notice that while ordinary semiotics includes semantics as a part, their teleosemiotics is about contentless signs. This sounds a lot like Stich’s syntactic theory of mind from the early 1980s (From Folk Psychology to Cognitive Science, MIT Press, 1983), minus Hutto and Myin’s enactivism (which plays no role in prong 2 of their argument). The big difference is that Stich wrote before teleosemantics came into the picture, so his skepticism about representation was somewhat more justified. It’s hard for me to see why we should go back to Stich’s syntactic theory after all the progress we’ve made in understanding representation during the last 30 years.

The most baffling aspect of Hutto and Myin’s book is that while they reject representation for most of cognition, they say again and again that linguistic cognition involves representation. So, by their own standards, they owe us a naturalistic account of representation. Unfortunately, by prong 2 of their argument, such an account cannot be given. One moral is that their prong 2 “proves too much”. Modus tollens is in order.


34 Comments

  1. KEN

    Gualtiero,
    Do you think the book has an argument for the embodiment thesis:

    the Embodiment Thesis … equates basic cognition with concrete spatio-temporally extended patterns of dynamic interaction between organisms and their environments. (p. 5).

    Why should we think that this is what basic cognition is? Is there a reason, or is this some sort of stipulative definition?

    • Gualtiero Piccinini

      Ken, thanks for your question. I think their main argument is what I call prong 1: that there are forms of explanation of cognitive capacities that involve “concrete spatio-temporally extended patterns of dynamic interaction between organisms and their environments” (without representations). But the bulk of the book is aimed at rejecting representationalism, not at supporting the Embodiment Thesis.

      • KEN

        Hi, Gualtiero,
        Do you think there is a difference between a cognitive capacity and a behavioral capacity? And, in particular, can a being engage in some behavior without deploying any cognitive capacities?

        Ken

        • Gualtiero Piccinini

          Ken, thanks for the question. ‘Behavior’ can be used very broadly, for things like plant behavior or even planetary behavior (or whatever; any physical activity can be called a ‘behavior’). In that broad sense, it is surely broader than ‘cognition’ as ‘cognition’ is usually understood. I’m sure this is not what you had in mind. Either way, I don’t think there is a sharp boundary between cognitive and noncognitive capacities. There are clear cases of cognitive capacities (playing chess, using language, solving mathematical problems, planning vacations) and clear cases of noncognitive capacities (photosynthesis, blood circulation). Concepts can be stretched in one direction or another depending on our goal in a given context, so I don’t think very much rides on where we draw the boundary in any given case. That’s why even if Brooks’ robots or some simple dynamical systems can be reasonably said to exhibit cognitive capacities without processing representations (a questionable claim, but I will grant it for the sake of the argument), nothing of great interest follows about the central cases of cognitive capacities.

          • KEN

            Well, don’t you think that if you want the Brooks case to show that there are explanations of cognitive capacities that do not appeal to representations, then you need the case to have two features: 1) the Brooks robots have to have cognitive capacities, and 2) those cognitive capacities have to lack representations.

            Maybe you can point me to a place in the text where Hutto and Myin give a reason for believing 1).

          • Gualtiero Piccinini

            I think Hutto and Myin rely on the intuition that what Brooks’ robots do is cognitive at least to some extent. I’m not interested in disputing that intuition because IMHO how we choose to describe borderline cases makes no difference to the fate of representationalism.

  2. I’ve been encouraged by Dan Ryder to reply to your post. It is worth doing so since a few reviewers (Matthen, Shapiro) have made similar negative assessments. Setting out the whole argumentative frame properly will probably require a longer paper. Your post makes me think it worth rehearsing the logic of the book, since some readers have missed it (not all, I hasten to add – there have been many reviews of the book that get its logic spot on). As I am currently working to complete other papers before Xmas, this will have to be a short intervention for the moment. Hopefully I can clarify a few key issues in some short posts so we don’t lose the plot. Here goes.

    You are right that we wanted to clarify and argue for the non-trivial, strong version of the embodiment thesis. It is one of the central ideas promoted by REC – a radical version of enactivism. How did we do so? Step 1 (not prong 1): note that if Brooks-style robots count as engaged in cognition, then it would seem, prima facie, that there are (hence can be) forms of cognition that are not content-involving. If appearances don’t deceive, then Brooks-style robots would be an existence proof of REC. Minimally, they would establish that forms of contentless cognition are possible. We note on p. 45 that even if this were accepted it does not deal with the scope problem. However, we go on to suggest that there are reasons to think the scope problem is not as serious as many suggest. The same basic principles of interactive cognition operative in Brooks-type cases (or something near enough) may well suffice to explain other important cognitive phenomena. To illustrate the point, we discussed how similar principles might suffice to explain the sort of cognition involved in human manual activity.

    Unlike work by others – e.g. Chemero 2009 – our concern in the book was not to address the scope objection by showing in detail how interactive and dynamic explanations suffice to explain a range of phenomena. Why not? Because we regard making progress on the questions surrounding the possibility of and need for content in explanations of cognition as much more fundamental and pivotal. Why? Because, on the one hand, if content is involved in explaining the performances of even Brooks-style robots, then even those robots do not, contrary to first appearances, provide an existence proof of the strong embodiment thesis (and as we say, in that case “REC is false across the board” – p. 51). On the other hand, if the best explanations of the antics of such robots do not involve content at all, then the game for REC is seriously afoot and the scope objection is cast in a new light. (We left it for others, or future work, to fill in the empirically driven parts of the arguments for REC accounts on that front. The book was always only a prolegomenon providing a logical frame for that work. It is philosophical at heart.) That’s why we make the information-processing challenge (Clark 2008) the central focus of the book.

    BTW – our strong bet remains that, over time, cognitive science will come to realise that content-involving information-processing and representation play a far smaller role – to the extent they play any at all – in explaining cognition. That’s a wager, and winning it is part of a much longer and protracted research programme. Articulating the argumentative frame and motivation for that – a different sort of philosophical work – was a main point of the book. Conducting that work was never part of the book’s ambitions.

    I don’t see the non sequitur you describe wrt so-called Prong 1. What exactly doesn’t follow in the above? More on so-called Prong 2 anon.

    • Gualtiero Piccinini

      Daniel Hutto, thanks for your reply. Whether Brooks’ robots and simple dynamical models explain some marginally cognitive capacities without manipulating representations is somewhat debatable, depending on exactly what is meant by ‘cognition’ and by ‘representation’. I am not going to engage in that debate because as far as I can tell it doesn’t affect the big picture. The big picture is that on one side we have clearly noncognitive capacities such as blood circulation, and on the other side we have clearly (sophisticated) cognitive capacities, such as storing and retrieving caches of food in thousands of places, as many animals do. The ONLY explanation we have for clearly cognitive capacities involves the processing of representations. What Brooks’ robots do doesn’t affect this point. And if current naturalistic accounts of representation are inadequate, let’s improve them. In fact, Dan Ryder is about to publish a great book offering an excellent informational teleosemantic account of representation.

    • Gualtiero Piccinini

      One more point. Representation is a central notion deeply embodied and embedded within the scientific practices of cognitive science and cognitive neuroscience. (Did you like that?) Quine tried to rid science of intentionality, and then Stich tried to rid cognitive science of representation. They didn’t succeed because representation is way too useful a notion. It is here to stay.

  3. Ok – onto clarifying the big ‘Prong 2’ argument. These issues are actually quite complex and this will take some time – and I can’t reply fully this morning (in Oz). I’ll do it step by step. Warm up: you make some strong and revealing claims about the indispensability of representations. The claims are not news, of course. I am also under no illusion that many of today’s philosophers and cognitive scientists will agree with those claims. Of course, that doesn’t make them true. I think it helps to think a little about their status. Are they offered as analytic truths or empirical bets? I take it that they are meant as claims about the defining construct of cognitive science. So rendered, they are supported by evidence that representation is so useful that, as a matter of fact, it will never be let go. That opens the door to the ongoing stream of disputed cases that I mentioned in the previous post (the protracted E-research programme that is underway). At the same time, the sheer strength of the claims suggests that something stronger supports them. Typically, philosophical arguments of a deductive kind are invoked to establish the necessary truth of representationalism (e.g. often along the lines of P1: the mark of the mental is aboutness; P2: aboutness entails content; and so on).

    • Gualtiero Piccinini

      Thanks, Daniel. I’m not offering any analytic claims. I’m offering observations about how our science of cognition developed and works now and on that basis I’m predicting how it will continue to work in the future. Let’s remember how the dialectic went: Psychologists (including philosophers interested in the mind) appealed to representations in a somewhat loose and nonmechanistic way to explain cognition up to the early 20th Century. Then Ryle, Quine and others argued that representation was unscientific because of worries about vicious homuncular regresses and the like. Then computers came along and proved that you can have mechanistic manipulation of representations. Then people still worried that you couldn’t give a naturalistic account of semantic content. Then Stampe, Dretske, Neander, Millikan came along and gave a naturalistic account of a notion of semantic content that fits mainstream scientific practices. Like every scientific enterprise, representationalist science is unfinished business and more work is needed. But more work will continue to be done, with help from philosophers, and the scientific representationalist picture will continue to become more clear and rigorous over time. That’s my bet.

  4. The biggest problem in assessing the strong representationalist claims (esp. if cast in analytic format) is that ‘mental representation’ itself does not have determinate, unambiguous content (as Millikan once put it, ‘representation’ does not come from scripture). The term is used quite widely in the cognitive sciences in multifarious ways – ways that would conflict if we took each of them to imply a sufficient criterion for what counts as a representation. Hence, in making our argument in the book, instead of getting into the hopeless business of linguistic policing, Erik and I chose to focus on what we took to be a comparatively more solid and substantive claim – the idea that mental representations have content (and hence vehicles). We took it that this is widely regarded to be the central and common commitment of representational cognitive science. Moreover, it would be a significant (and not merely verbal) adjustment to the field if it turned out there could be forms of cognition that are not content-involving. We left aside the question of whether one might defend the existence of contentless representations. That is of course possible – but we took it that once evacuated of content (and hence the tag-along idea of vehicles of content) one would truly be left with representations in name alone. Simply put, at such a point the argument between representationalism and non-representationalism would be more or less purely semantic (though there could be other interesting arguments, even here, about the contentless architecture of mind).

  5. More on the issue of whether the debate about representationalism reduces to a merely verbal dispute (first some general comments, and then something specific about the theories of Dretske and Millikan in a future post):

    Evaluating the ‘theory of mind’ or ToM debate, I once said ‘loose lips sink ships, but they also keep them afloat’. The comment was directed at psychologists and other scientists who equivocally use the notion of ToM to describe the explanandum, as opposed to a possible explanans, in some contexts, and vice versa in other contexts. So on the first reading it cannot fail to be true that we operate with a ToM. On the second reading, the claim that we operate with a theory of mind is a substantive, and possibly false, claim.

    Of course, by the same token, strong representationalism will be true – necessarily – under some reading or other if we opt for a counterpart to the first reading of the ToM claim, or tolerate in the debates about the basis of cognition a similar sort of equivocation to that found in the ToM debates. I take it that anyone hoping to advance a serious claim about the truth of representationalist theories of cognition and the indispensable explanatory power of representations in cognitive science will want to identify properties the representations have that do the relevant explanatory work.

    As per my previous post, Erik Myin and I latched onto the idea of a commitment to content if representationalism is to be a substantive claim. We were pretty liberal in our target, allowing that “there is content wherever there are conditions of satisfaction” (Preface). Thus content need not be truth-conditional or propositional (although for many representationalists it is often taken to be so). Our discussion of maximally minimal intellectualism in Ch. 5 explicitly allows for a less demanding “kind of content that is of a different, more primitive sort than that had by propositional attitudes”.

    • Hi Dan and Erik,

      Since you’re on the line here, can you help me out in understanding the main critical argument of your book against information-based views, since some of the other commenters here take that aspect of the book to be its strongest? I take it this argument is supposed to be presented in Chapter 4, but despite having read several reviews, every time I feel like I’ve put my finger on it, on reflection it seems it must be somewhere else. I’ll try out some of the attempts I’ve made to charitably interpret it, and you can tell me where I’ve gone awry. (I’ll take Dretske as my primary critical target, since his informational view is the most sophisticated.) I take it that the real argument must happen somewhere after page 65, when you get down to “brass tacks” and start discussing the details of Dretske’s view. (Some of what follows reprises Shapiro, but hopefully I’m hewing closely to your text.)

      1. We might worry that Dretske conflates content with mere covariation. But I take it that Dretske wouldn’t bother with supposing content to be *merely* information; he has always claimed that content is covariation + selection. In other words, Dretske would be fully on board with your distinction between information-as-(mere)-covariation and information-as-content, with the latter understood as information-as-covariation-that-got-a-representation-selected-by-learning-for-behavioral-control. So this can’t be the problem with Dretske’s view. Content is not identical to information; it is rather derived from it by an agent that possesses the right kind of learning apparatus and the right kind of needs.

      2. However, if Dretske accepts this distinction, then you argue his view violates the “muggle constraint”. I have a hard time reconstructing the argument here, however. I take it this conclusion would only follow if Dretske cannot give a naturalized account of the way that learning selects representations for behavioral control on the basis of certain of their informational relations. But this is just what Dretske spent most of his effort doing in both his 1981 and his 1988 book. In the former he quite elegantly described how a concept becomes selectively responsive to a certain range of informational relations through the process of digitalization, and in the 1988 book he explained how instrumental learning can recruit representations to positions of behavioral control on the basis of their informational relations. I would think his account of digitalization and instrumental learning both pass the Muggle constraint; so what is the problem here? Both accounts should clear the “second horn of your dilemma” (p. 68) by systematically relating content properties to covariance properties, since digital encoding is cashed out in terms of the information carried by a signal that is not nomically nested in any other information, and it is highly psychologically plausible (and thus naturalistically acceptable) that discrimination learning allows an organism to be sensitive to some of a vehicle’s informational relations.

      There’s thus no need for Dretske to take content properties as primitives or to suppose that we need some future physics to understand them. All we need is a little elementary learning theory, already in place.
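      (For concreteness, a toy simulation of this covariation-plus-selection structure appears at the end of this comment.)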

      3. Then I wondered if your argument is that Dretske takes content to be indication, and indication is somehow fundamentally different from or cannot be reduced to covariance. The closest thing I find to an argument for this is that “indication is, at least, a three-place relation unlike covariance” (p. 70). But in “Misrepresentation” (1986), Dretske pretty clearly explains why an interpreter shouldn’t be thought necessary for indication. Maybe this just illustrates that your hypothetical interpreter has got Dretske wrong. Content is still (and always) more than mere indication; it is indication + selection. (And since, again, both digitalization and reinforcement learning are naturalistically respectable, we should take this interpretation of Dretske to have addressed the challenge of the following paragraph: “Still, even if we were to allow—for the sake of argument—that it somehow brings content into the picture, we would need full details about how this occurs to be confident of its naturalistic credentials.”)

      4. Some of the points in the following sections about Millikan and evolutionary teleofunctionalism suggest that a selectionist story of the sort I’m supposing Dretske to have told can’t work unless we presuppose that there is “preexisting content to be dealt with”. But I don’t know why this is required, or find an argument to that effect. There need only be preexisting information to be selected. Dretske is a reductionist about content, but he doesn’t reduce content to mere information; he reduces content to information that is selected for behavioral control. There is no reason to suppose that content has to already exist prior to selection for this reductionist story to be coherent.

      5. If I go back a bit earlier in the discussion of Wheeler (63-64), I can try to construct another argument on your behalf; for learning to be the sort of system that Dretske thinks it is, it must literally be sensitive to information (notice, not “informational contents”, as you write, for the reasons above). But I take it there is no problem in taking learning to really be sensitive to covariation relations; this ought to simply fall out of empirical accounts of associative learning, which is taken to monitor covariation between stimuli, behavior, and rewards, and to modify production of future behavior as a result. At the very least, all such sensitivity can be translated into perfectly respectable counterfactuals: if representation R did not stand in a certain informational relation to property P, it would not have been recruited to control movements M. As Dretske stresses in Explaining Behavior, the right content will supply the right answer to counterfactual questions like “Why was neural state N, rather than some other neural state N*, selected to control bodily movements M?” And the answer will be that it covaried with P during learning.

      6. If I go forward quite a bit (p. 79) and again generalize from the discussion of Millikan, it might be that your criticism of the Dretskean view I’ve outlined above is that it still falls short of intensionality (with an ‘s’), and in so doing fails to deliver real “truth-evaluable thoughts”, which you take to be a necessary condition for content.

      You here gesture at Fodor’s argument that selectionist (historical) explanations are extensional and thus can’t distinguish co-extensional representations. But Dretske’s view was never historical in this sense; informational relations on his view have always been specified nomically, and the probabilities that matter for determining content are always counterfactual-respecting (rather than reflecting only actual historical contingencies). There’s quite a long discussion in KatFoI (Knowledge and the Flow of Information) on how such nomic dependencies allow agents to unequivocally represent co-extensional properties, given the modal properties of laws from which the probabilistic dependencies are derived. (And laws are still nice and muggly, aren’t they?)

      The only thing in the neighborhood that still troubles Dretske are properties that aren’t just coincidentally coextensional, but are necessarily (nomically) coextensional, such that no analysis of counterfactuals will help. This is, of course, just the gavagai problem all over again. The best response Dretske offers to this challenge probably occurs at the end of Knowledge and the Flow of Information, where he notes that we can distinguish the contents of representations that bear all the same information (such as RABBIT and UNDETACHED RABBIT PARTS) but only if these concepts are compositional and we consider how their contents differently derive from their semantically-distinct parts.

      Dretske unfortunately doesn’t give us an account of how to derive compositional contents from the contents of atomic parts, but we don’t have any special reason to believe this story cannot be naturalistically respectable, given that compositional natural and formal languages do it (and as others have noted, you think they have content). For atomic concepts, Dretske bites the bullet; two atomic concepts that carry exactly the same information have the same contents. Dretske argues that to demand more for atomic concepts—to demand that we be able to distinguish primitive RABBIT and UNDETACHED RABBIT PARTS concepts where there are no semantically evaluable parts to either—is to ask more of primitive contents than we should. I have to admit that I’m pretty sympathetic to this response as a naturalist; I’m interested in supplying an informational account of content that can explicate the content ascriptions made by practicing cognitive scientists, and I have never seen a case in cognitive science or psychology where a psychologist thought it important for a subject to be able to distinguish nomically equivalent properties. (Maybe Burge is right that a lot of these can be ruled out as irrelevant alternatives.) I’m not even sure I’ve ever seen such a case in philosophy, given that we can only even formulate the gavagai problem using compositional constructions in natural language. Moreover, it’s one thing to ask for a solution to the gavagai problem when engaging in radical translation of a foreign tongue, when we already have good reason to believe that the construction could be compositional; but why should we suppose that we need to disambiguate the two different possibilities for atomic concepts? And what is wrong with just insisting that, for primitive concepts, these two are semantically identical—that there cannot be two different possible intensions for such a concept? (After all, isn’t it more natural to think of such highfalutin intensions in the case of complex concepts anyway?)

      So, at this point I’m a bit puzzled. I took this section to be the place where we would find the definitive argument against informational teleosemantics, but I don’t find one here; and indeed, it is distressing that I don’t find anywhere a discussion of digitalization or reinforcement learning, which I take to be Dretske’s answer to these challenges. Maybe there’s an argument, if it’s just the standard gavagai worry against teleosemantics, but there’s no motivation for why we should consider that argument to be decisive if we’re ultimately interested in content for cognitive science. Can you help me out?

      (This is not to say that I don’t think there are good arguments against Dretske’s view; I just don’t feel like I’ve identified your arguments, and the arguments I favor leave informational approaches looking more promising than you suppose.)
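      For what it’s worth, here is a minimal Python toy of the ‘covariation + selection’ structure sketched in points 1–2 above: an internal indicator merely covaries (imperfectly) with a worldly property, and instrumental learning then recruits it for behavioral control because of that covariation. Every number, name, and the learning rule is an illustrative assumption; nothing here is drawn from Dretske’s texts or from Radicalizing Enactivism.

      ```python
      import random

      # Toy world (illustrative, not from Dretske): property P ("food present")
      # obtains on half of trials; internal indicator R tracks P at 90% reliability.
      random.seed(0)

      def world_step():
          p = random.random() < 0.5                    # does P obtain?
          r = p if random.random() < 0.9 else (not p)  # R imperfectly covaries with P
          return p, r

      # Instrumental learning recruits R to control movement M: acting on R is
      # rewarded exactly when P obtains, so the disposition to act on R grows.
      weight = 0.0
      for _ in range(1000):
          p, r = world_step()
          if r and (weight > 0 or random.random() < 0.3):  # explore, then exploit
              reward = 1.0 if p else -1.0
              weight += 0.1 * reward                       # recruitment by reward

      # On the reading above, R's "content" would be the property whose covariation
      # with R explains its recruitment (namely P) -- the counterfactual test of
      # point 5: had R not carried information about P, it would not be recruited.
      print(f"learned disposition to act on R: {weight:.2f}")
      ```

      Whether an indicator so recruited thereby has content, or remains contentless covariance plus selection, is exactly what the rest of this thread disputes.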

  6. Erik Myin

    Some quick comments on “how the dialectic went”. Instead of saying that Stampe, Dretske, Neander, Millikan mechanized representation, it could be said that they representationalised mechanism. The question remains whether they were successful in this. The key to representationalising mechanisms lies in the claim that functional isomorphisms are sufficient for content and representation – in the sense in which words in natural language are taken to carry content and to be representations. Dan and I, and others like Bas van Fraassen, say it’s not the same sense (see the first chapter of van Fraassen’s ‘Scientific Representation’). An additional argument, not yet pursued in Radicalizing Enactivism, but one which I am at this very moment writing a paper on, is that the representations in language do not get their content from isomorphism with some external realm, unlike the alleged representations in a mechanism like a calculator. Now I’m going offline to work on the paper!

    • I personally loved your book, especially for the way it draws out the ‘hard problem of content’ as you call it. Gualtiero references the failure of Stich and Quine to convince philosophers of their conclusions as if this somehow obviates the problems that drove them to their conclusions: RE does a great job showing that those problems haven’t gone away.

      Since eliminativism simply amounts to enactivism, I would like to convince you (as you know!) to abandon representations altogether, to adopt ‘just plain crazy enactivism.’ Since our brain has no real way of tracking itself in the same way it tracks environmental systems, how could it possibly track its own environmental relationships? The best it can do is tackle the problem heuristically, that is, ignore the information it cannot access, and work with what it can. What can’t it access? The causal history of its own environmental relationships, among other things. So it solves that relationship otherwise, via ‘aboutness.’

      So why would anyone think ‘aboutness’ is anything more than a radical shorthand for what we know to be an astronomically complicated natural picture? Of course ‘representations’ do real cognitive work. This is what aboutness is adapted to do. Of course ‘representations’ elude natural explanation: they turn on a system designed to troubleshoot problems in the absence of that information!

  7. Anne Jaap Jacobson

    Erik, you might look at an argument I have on your side. It’s in the second of my present presentations here on BrainsBlog. The basic idea is that the theorists you mention have failed to secure a causal role for content.

  8. Erik Myin

    Thanks, Anne, I will certainly look at it. In any case, the idea of a functioning isomorphism does indeed not secure a causal role for content, as the isomorphism alone suffices for the causal functioning. A contentless isomorphism will do all the work, at cheaper cost than one that’s supposed to carry content.

  9. Hi Cameron – just saw your comment. There’s A LOT to unpick here concerning Dretske’s views and our analyses/argument. I take your challenge to be a fair one. I do think, however, that there may be trouble in unpacking the notion of organism-independent indication (which I assume reduces to covariance) and indication + selection. I think there is a danger of smuggling here (something Ramsey 2007 notes too). There are grounds to think that early on Dretske may have been at risk of conflating covariance and content – information and content – even before the effects of selection kick in. He claimed that the senses tell us truly about some state of affairs. Of course, misinformation is not information, so there is no possibility of misrepresentation in such cases. But it might be thought that what is being assumed is that when conditions are right, when information is conveyed, content – true content – is conveyed. In the 1981 book FD speaks of ‘what’ is conveyed (not the quantity of information) as de re propositional content. The 1981 book begins by telling us that information comes first and that it in no way depends on the existence of interpreting beings or systems. Perhaps the cumulative implications of these passages can be explained away. I am not trying to sell a particular reading of FD; I don’t think too much hangs on whether FD was originally thinking of covariance as some sort of content or not. Our point is that no explanatory naturalist is entitled to do so without further ado.

    Also, we wanted to expose that familiar talk about informational content, which is very popular today and plays a role in some arguments, needs explication. How seriously, and in what way, are we to understand the notion of informational content? Really all we wanted to get clear about is that, sans an account of how covariation constitutes a kind of content, explanatory naturalists are only entitled to call on the notion of information-as-covariance (as they often officially note). That was just an initial step in a longer game – an attempt to level the playing field and avoid later confusion. If it is a familiar refrain that ‘no serious player ever conflated covariance and content’, then our point should be utterly uncontroversial. The terms of the discussion are set. Good. No take backs. All our initial analysis was meant to do was to clarify exactly which tools the explanatory naturalists have to work with. Super crudely, we sought to establish that: 1. covariance doesn’t constitute any kind of content; 2. covariance + selection doesn’t constitute any kind of content (neither in Dretske’s nor in Millikan’s versions of such accounts).

    I’ll think about your comments further, and the possibility that we may have overlooked special features of FD’s mature account. You may be right that in order to make the case in full detail we will need to sharpen up the arguments in this territory and discuss Dretske’s notion of indication, and how it relates to covariance and content, at greater length. So thanks for the comment.

    • Hi Dan,

      Thank you for the response (and no take backs, I promise!).

      Only one final thing: I think Dretske was pretty clear even in KatFoI that he didn’t conflate covariance and content in the case of perception. He writes in Chapter “Sensation and Perception” that:
      “Our perceptual experience (what we ordinarily refer to as the look, sound, and feel of things) is being identified with an information-carrying structure – a structure in which information about a source is coded in analog form and made available to something like a digital converter (more of this in Part III) for cognitive utilization. This sensory structure or representation is said to be an analog encoding of incoming information because it is always information embedded in this sensory structure (embedded within a richer matrix of information) that is subjected to the digitalizing processes characteristic of the cognitive mechanisms. Until information has been extracted from this sensory structure (digitalization), nothing corresponding to recognition, classification, identification, or judgment has occurred – nothing, that is, of any conceptual or cognitive significance.” (1981, 153)

      Isn’t he pretty clearly disavowing there that there is any content with standards of satisfaction until digitalization has taken place?

      I think part of the problem here might be that Dretske uses key terms in slightly different ways through his oeuvre. Dretske does indeed write about “informational contents” all over the place in KatFoI, but he doesn’t by “informational content” there mean “mental content”, i.e. the sort of thing with standards of satisfaction. In KatFoI, he reserves the word “meaning” (and sometimes “intensionality”) for this richer phenomenon. He later shifted to using the word “meaning” in a more permissive way – most especially around 1986, when he started adopting the Gricean sense of “natural meaning” for indication (which still is just lawful covariance). And by EB (1988), he was using “content”, “propositional content”, and “meaning” more interchangeably.

      Things are also complicated in that, in many of his expositions, he likes to contrast the kind of content found in natural cognitive agents with the kind of ersatz content possessed by artifacts like gas gauges, and sometimes describes the latter as “propositional content”. But he always precedes or follows such examples with an argument, resonant with your own concerns, as to why in the absence of selection this is not the same kind of propositional content had by natural cognitive systems. In other words, he invariably expresses the caveat when using such language that the kind of representational or propositional content that Dennett thinks artifacts possess was the same sort as possessed by natural cognitive agents. The crucial difference is that the representational content possessed by artifacts cannot be naturalized (because it depends upon the intentional stance of the interpreter – and so indeed smuggles something in), whereas the content of agents that can derive their own representational functions through digitalization/learning can be fully naturalized. For example, in “Reasons and Causes” (1989) he writes:

      “Note, first, that for any system S for which The Design Problem has been solved [i.e. the artifacts I was mentioning], we have some internal state or condition in S that indicates, means (in Grice’s natural sense of meaning), or represents (in, I think, one sense of this word) something about how things stand outside of S. This is, as everyone knows, a pretty anemic sort of meaning, not rich or intentionally robust enough to serve as the propositional content of a belief or a desire. Nevertheless, it does define a content, a propositional content, of sorts for S’s internal states – a content, I hasten to point out, that does not supervene on the intrinsic physical properties of the internal indicator or representation.”

      So while there is plenty of polysemy in Dretske’s oeuvre regarding words like “meaning”, “content”, and even “indication”, if you disambiguate each occurrence in context I don’t think he ever systematically commits the conflation you describe. I think the more productive place to direct your critique would be at his accounts of digitalization and learning, which is where he claims to address the problems that concern you and Erik.

  10. Here is a quite different take on representationalism, one that might raise doubts about the security of its place as the foundation of future cognitive science.

    “Representationalism… is still the dominant view regarding mind and cognition. Functionalists in the philosophy of mind and cognitivists in psychology are well known, and amply criticized, examples of representationalist approaches. On the other hand, since the 1970s, antirepresentationalism has become an umbrella term for philosophers that draw their inspiration from Wittgenstein, phenomenology, classical American pragmatism, ecological psychology, and the work of Donald Davidson and Wilfrid Sellars. Richard Rorty’s Philosophy and the Mirror of Nature is the most influential book within this tradition (Rorty, 1979). However, with the exception of J. J. Gibson’s groundbreaking work on affordances, until the crystallization of enactivism, anti-representationalism mainly remained a negative doctrine. Enactivism has finally put forward a positive alternative to representationalism to understand mindfulness, cognition, perception and agency. We applaud this move and greet the growing heterodox consensus that enactivist thinkers are managing to achieve around their approach.”

    Manuel Heras-Escribano, Jason Noble and Manuel de Pinedo. “Enactivism, action and normativity: A Wittgensteinian analysis”. Adaptive Behavior 1–14

    The links to Wittgenstein in this debate are not arbitrary. It is important to realise that Millikan’s key agenda item – to provide a disposition-independent account of semantic rules in terms of truth-conditional ‘picturing’ – was an attempt to deal with philosophical concerns, to address issues Sellars identified (issues that he inherited from Wittgenstein). Those who prefer to forget this sort of intellectual history are not only bound to encounter the old problems from the past, they are likely to misconstrue what the original motivation and goal was in developing Millikan’s theory of content.

  11. The tactics of so-called Prong 2: since, as we have seen, teleosemantics is widely seen as representationalism’s ‘best bet’ for naturalising the kind of content that is putatively involved in basic cognition (and I certainly believe that is true), a good test case for deciding if basic cognition really needs or involves content is to see if the sorts of accounts offered by Dretske and Millikan can deliver the goods on this front. In different ways both Dretske and Millikan tried to show how truth-conditional propositional content could be naturalised in terms of informational covariances and the biological functions of organisms to respond to them. Let’s agree that if those theories worked then they could clearly serve the needs of representationalism. Did they succeed? Possible answers:
    1. Yes. Completely.
    2. Not quite – but partially. They don’t work to secure truth-conditional propositional content, but they do secure some other kind of content.
    3. Not really. They don’t secure any kind of correctness conditions as such, but they do explain a special kind of involvement between organisms and parts of the environment.
    Erik and I think only 3 is correct. We also think the option one chooses here is significant, since it allows us to start to develop an account of intentionality that doesn’t entail content. We (or at least I) think that much (not all) of Millikan’s apparatus can be put to new use to inform the developing enactive and ecological theories of cognition. (That is not such a stretch, by the way – Millikan is very attracted to Gibsonian approaches and often discussed them.) So we see our alternative – teleosemiotics – as a non-representational rendering and refinement of Millikan’s theoretical framework; we put it to different use. So I don’t think a decision to go with 1, 2 or 3 is a merely ‘verbal’ matter. For example, Cameron Buckner is right that there may be a way to salvage 1 after all. Surely if that is right then we are wrong about the sorts of properties involved in cognition. Surely that is not trivial.

  12. Is having a naturalised theory of content necessary for representationalism? Not at all. Erik and I are frequently accused of saying that – but we never do. In fact we don’t even try to provide a knock-down argument against representationalism in the book. The claims of the book have a particular context – as the preface says, to promote the possibility that the roots of cognition do not involve content. Essentially we introduce what we call the ‘hard problem of content’ to make a conditional point: if, as we argue, we currently lack a developed theory of content, then there is no argument to show that representationalism is ultimately explanatorily superior to enactivism. Objections against enactivism based on the information-processing challenge, for example, are defanged. If so, there is no current way to defeat or rule out enactivism based on such arguments. Hence we think E approaches should be actively developed and pursued as a live option in the cognitive sciences – not swept under the rug by rhetoric about what we already know to be the essential nature of cognition (BTW – I am glad to hear that you are not leaning on such claims, Gualtiero Piccinini).

    One move is just to keep representationalism a primitive posit and ditch the ambition to explain it naturalistically. Indeed that seems to be a strategy that several people are now attracted to: simply make representationalism a primitive assumption of cognitive science. So, like Burge, one could argue that the assumption earns its explanatory keep by the successes of the cognitive sciences and hence doesn’t need naturalising. We begin to examine that possibility in Ch. 6 of the book. It would take a long time to evaluate this claim, since the cognitive sciences are (in our view) still developing. But that doesn’t trouble us: so long as non-representational approaches continue to rack up successes, the ground may shift considerably over time – completely, we predict – but work has to be done to show that. So I don’t see the rationale for making strong bets in favour of representationalism based on inductions about perceived past successes.

    Another move is to go fully fictionalist – and deny that the explanatory power provided by invoking content-talk at this level implies the existence of content. Hence there is no need, ever, to provide a theory of content. I don’t think, in the end, this move will work (Sprevak raises some nice worries about it). Still, it is a live option to be explored. I hope this makes clearer our intended polemical use of the ‘hard problem’ argument.

  13. er…should read “In other words, he invariably expresses the caveat when using such language that the kind of representational or propositional content that Dennett thinks artifacts possess *IS NOT* the same sort as possessed by natural cognitive agents. “

  14. Cameron – what, exactly, do you think Dretske thinks is ‘extracted’ in the first step? Although, as I said, it may take a longer analysis to clarify whether Dretske’s account can be made to work and where exactly the confusion may creep in, I think part of the trouble comes from confusing natural meaning with any kind of meaning or content – however anemic. Look here: “if S (sign, signal) by being a, indicates or means that O is A, then S (or, more precisely, S’s being a) carries information that O is A. What an event or condition … indicates or means about another situation is the information it carries about that other situation” (Dretske 1988, p. 59). But if the information is just non-contentful covariance (no take backs), then what are we to make of this notion of indication? This is where one must guard against smuggling. If we are strict about ‘internal indicators’ being non-contentful, then it seems we are back to the idea that content only comes on the scene when internal responses to covariant information get selected for – aka ‘acquire a biological function’. As per: “If we suppose that, through selection, an internal indicator acquired a biological function, the function to indicate something about the animal’s surroundings, then we can say that this internal structure represents” (Dretske 1988, p. 94). Then we are back to the objection that covariance + selection don’t entail the involvement of contents. So I guess while I haven’t given you a full-dress analysis of what Dretske has to say about digitalisation (though see Hutto, The Presence of Mind, 1999, ch. 2 for a fuller discussion of Dretske’s views), I am failing to see how contentless covariance + contentless indication helps the representationalist cause here. Finally, the above quote from Dretske has him gesturing at a content that is other and weaker than propositional content. So what’s that supposed to be? Perhaps a non-standard content with different kinds of satisfaction conditions other than truth? Or is it just covariation and hence contentless (no take backs)?

  15. Here’s a blast from the past – from my 1999 book. One reason we did not get into all this fine detail about Dretske on digitalisation and learning is that it is not clear how it could help solve the fundamental issues about content, and where it comes into the story, that we were discussing. “Before I critically assess Dretske’s teleofunctional proposal, it is important to highlight another distinction he draws between analogue and digital forms of information. He writes:

    …a signal (structure, event, state) carries the information that s is F in digital form if and only if the signal carries no additional information about s, no information that is not already nested in s’s being F. If the signal does carry additional information about s, information that is not nested in s’s being F, then I shall say that the signal carries this information in analog form (Dretske 1981: 137).

    This distinction is necessary if we are to understand a signal’s property-specific character which enables it to serve a unique purpose in a control mechanism. Information in analogue form is unsuitable to serve a control function because it is too diffuse. For example, it is because sensory information is received in analogue form that Dretske claims that, “Typically, the sensory systems overload the information-handling capacity of our cognitive mechanisms so that not all that is given to us in perception can be digested” (Dretske 1981: 148). Consequently, “Until information is lost … an information-processing system … has failed to classify or categorise, failed to generalise, failed to recognise the input as being an instance (token) of a more general type” (Dretske 1981: 141, cf. Jacob 1997: 71). The process by which information is lost, and through which a signal’s information is selected or specified, is known as digitalisation.”

  16. More from 1999: “Akins’ neurophysiological description of the way in which information is processed in hunting bats provides a vivid illustration of this kind of information loss. She writes:

    To say that a (sensory) neuron is informationally ‘specific’ is to say that when it processes an incoming signal there is information loss; because the neuron responds to only particular aspects of a complex stimulus, its signal contains information about only those specific properties. For example, think of the response properties of neurons in the FM/FM areas of bat auditory cortex. These neurons will fire whenever there is a pair of stimuli with a distinct profile – two stimuli separated by a certain time delay with a particular frequency ‘mismatch’ between a fundamental tone and its harmonic, each stimulus falling within a certain amplitude range. … The response of an FM neuron does not tell us, say, how many harmonics each signal contained, or their pattern of frequency modulation, or the amplitude of the signals except at a certain point in the signal sweep, and so on. Much information about the acoustic properties of the original stimuli … has simply been lost (Akins 1993: 149).”

  17. More from 1999: “Dretske is attracted to the idea that some information is digital for several reasons. Firstly, he thinks it is digitalisation that enables a system to represent a specific state of affairs as being of this kind rather than of that kind. This is important because it explains why he thinks that all representations must indicate just one thing in order to perform a specific control function (see McLaughlin 1991: 161).”

  18. More still: “Furthermore, the property-specific character of digital information promises to provide answers to some difficult questions concerning our ability to learn. For example, if information is property-specific it can be nested in a signal. Signals can nest information either analytically or nomically. Thus Dretske writes:

    So, for example, I will say that the information s is a rectangle (or not a circle, or either a square or circle) is analytically nested in s’s being a square. On the other hand the fact that I weigh over 160 pounds is nomically nested (if it is nested at all) in the reading of my bathroom scale (Dretske 1981: 71).”

  19. Yet more: “Therefore, ‘…we are sometimes in a position to see that (hence, know that) s is F without being able to tell whether s is G despite the fact that every F is G’ (Dretske 1981: 74). In this light, when a system recognises such connections we can say it has learned something (perhaps something important) about its environment. Finally, property-specificity might also account for a rudimentary form of opacity which is a hallmark of semantic content. It may explain the ‘…fine-grainedness that is characteristic of intentional systems’ (Dretske 1988: 75, Jacob 1997: 75).” Of course, note that this won’t make it into semantic content.

  20. Finally from 1999 – and this is important: “For all its virtues there is a lethal chink in the armour of Dretske’s account. It is this: there is no principled way to uniquely specify an indicator’s control function which is compatible with the idea that the property-specific character of content rests on a digital informational base. It is best to consider this problem in stages. First of all, recall that the digital information carried by a particular signal can be nested, such that a signal S which carries information about Fs will also carry information about Gs. Given this, it might be asked: how can it be claimed that some organism, O, has exploited a pre-existing relation of indication between S and F, such that its indicators serve to detect Fs, rather than saying, on similar grounds, that they function to detect Gs, or Fs and Gs? (cf. McLaughlin 1991: 158, 164–165). To the extent that Dretske rests his teleofunctional account on an information-theoretic account of indication, his natural indicators will be infected with at least this level of functional indeterminacy. To prevent confusion with the problem of misrepresentation, as characterised by disjunction, Jacob helpfully labels this nesting problem the transitivity problem.

    Even though the problem of transitivity only applies to cases of analytically or nomically nested information, it reveals the structural instability that lies at the base of Dretske’s account. A tension emerges between his information-theoretic notion of indication and his teleofunctional notion of representation.” – Actually in the book we go on to discuss this tension, and it is one of the reasons why, in the end, we prefer Millikan’s account over Dretske’s. Or rather, once one puts greater weight on biological function as fixing content, one can concentrate one’s efforts there. Basically, I fail to see in all this how digitalisation and learning, even if they were – for the sake of argument – to form part of the story of basic cognition, secure any kind of content per se.
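    To make the nesting structure concrete, here is a small Python toy of analytic/nomic nesting and the resulting functional indeterminacy (the ‘transitivity problem’ just described). The entailment table and property names are invented purely for illustration; nothing in it is drawn from Dretske, Jacob, or the 1999 book.

    ```python
    # Toy entailment over property labels: s's being F analytically or nomically
    # guarantees s's being G for each G in ENTAILS[F] (cf. Dretske 1981: 71).
    ENTAILS = {
        "square": {"square", "rectangle", "quadrilateral"},
        "rectangle": {"rectangle", "quadrilateral"},
    }

    def nested_in(f):
        """Properties whose obtaining is nested in s's being f."""
        return ENTAILS.get(f, {f})

    # A signal carrying "s is square" in digital form thereby also carries the
    # nested information that s is a rectangle and a quadrilateral.
    digital = "square"
    also = sorted(nested_in(digital) - {digital})
    print(f"digital content: s is {digital}; nested info also carried: {also}")

    # The transitivity problem: whenever the indicator fires, every nested
    # property obtains as well, so selection by successful use alone cannot fix
    # whether its function is to detect squares, rectangles, or quadrilaterals.
    ```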

  21. Hi Dan,

    Thank you again for the clarifications; this is again very helpful, and I apologize that I didn’t realize the arguments in Chapter 4 of RE were meant to allude to your 1999 book, but I can now see that the final footnote of this chapter mentions it.

    I might have to read the Jacob (which I confess I do not have on hand) to proceed responsibly with this discussion, but I wanted to ask one more clarificatory question to be sure I understand. On Dretske’s proprietary form of the analog/digital distinction, the information that a signal carries in digital form is the information it carries that is not nested in any other information carried by the signal (I take it that’s what you meant above). So you’re right that if a signal digitally encodes the information that _s is F_ , it can also carry the information that _s is G_ in analogue form, so long as G is analytically or nomically related to F such that all Fs are Gs.

    So, you ask, how can it be that a representation R digitally encoding the information _s is F_ could have the semantic content that _s is F_ without also having the semantic content that _s is G_ that is nested in the information _s is F_? Dretske does think we can say this when F and G are not mutually nested. His argument here (KatFoI, 176-178) has a sort of pragmatic flavor; if contents are supposed to be unique, then information digitally encoded by the representation is the only eligible candidate. Is the objection that this proposal needs more justification, or is there something actually suspicious (i.e. non-muggly) about it? I would think the naturalist could say that if this proposal gets the right answers across all the relevant cases, and doesn’t require us to appeal to anything magic, then it’s done all we could ask of it.

    On the other hand, if F and G are mutually nested (as they are in the McLaughlin 1991 you cite) then we are in the kind of gavagai case I discussed above, which Dretske has already conceded cannot be mitigated for non-compositional concepts. But again, that doesn’t seem so terrible to me, since I don’t know of any case in cognitive science where researchers needed a theory of content that could distinguish metaphysically equivalent properties for primitive concepts.

    I really do appreciate your patience! I’m not pursuing this just to be a stickler, but rather because I’m working on several things pertaining to Dretske’s approach at the moment, and I want to be sure I represent your views accurately.

