Was Psychosemantics a Failure?

I recently had a conversation with three self-identified Rutgers people (two Rutgers faculty plus a senior philosopher who visited Rutgers in the early 1990s) who claimed that at Rutgers it is accepted wisdom that psychosemantics was a failure. No one, they said, ever properly solved the disjunction problem, let alone naturalized semantic content.

I always thought that while the psychosemantic theories of Dretske, Millikan, Fodor, et al. didn’t solve all the problems and surely deserve further development, they made important progress toward naturalizing semantic content.

But when I suggested that the glass may be seen as half full rather than half empty, the three Rutgers people acted as though I were hopelessly naive and had missed the radical way in which psychosemantics had failed. At the same time, these Rutgers people could not explain in detail, off the tops of their heads, why psychosemantics was such an abject failure. So until proven otherwise, I remain convinced that there is something right about psychosemantics.

Anyone care to chime in?

108 Comments

  1. Interesting post, Gualtiero. I’m pulled in two directions on this question.

One is to just agree with the unidentified Rutgers peeps: all attempts to naturalize intentionality utterly fail, no naturalistic set of relations gives you anything like determinate, non-disjunctive content or representations of intentional inexistents, etc., etc.

    The other direction I’m inclined toward is that being a failure is a ubiquitous feature of philosophical theories, so the psychosemantics projects aren’t in *especially* bad shape.

  2. Frances Egan

    Gualtiero — I assume that I am one of the ‘Rutgers people’ you mention. We did have a lengthy conversation about naturalistic semantics, but I don’t recall any claim about ‘the accepted wisdom at Rutgers.’ For all I know, Jerry Fodor still thinks that his asymmetric dependency account works just fine. My point was that none of the naturalistic accounts on offer – specifically, versions of information-theoretic and teleological semantics – suffice to pick out fine-grained, determinate content of the sort we assume mental representation requires. For example, the candidate naturalistic relations hold not only between mental states and cows, but also between mental states and the various Quinean alternatives (cow stages, undetached cow parts, etc.). This is not an original point. I’m not sure where I come down on the glass half full/half empty issue – it’s not exactly a philosophical notion – but I don’t think that minor tweaks to any of the proposed accounts will solve the problem.

  3. Tom Polger

    I have heard this from people in Maryland and Cincinnati, as well. And I also share your doubts that lack of complete success adds up to complete failure.

  4. gualtiero, i’m glad you bring up the issue of psychosemantics. i think psychosemantics is very important and work on it a lot myself but unfortunately no one seems to care about it anymore. (just because it is hard doesn’t mean we shouldn’t work on it!)

i just wanted to mention a few places where folks actually take the bleak view in print. in recent work (e.g. ‘objective mind and the objectivity of minds’, ppr) johnston sometimes seems to be giving an argument from failure against reductive psychosemantics. in two recent papers (‘phenomenal intentionalities’ and ‘giving dualism its due’) lycan speaks of how ‘reductive psychosemantics is in terrible shape and not getting better’ and of the ‘dismal history of failure’. indeed he says he thinks this is the best argument for dualism and that he will be discussing this in a ‘subsequent paper’ – which i very much look forward to. finally, the phenomenal intentionality folks often invoke the failure of psychosemantics (e.g. Horgan et al. use this in their content determinacy argument for ‘cognitive phenomenology’). byrne and tye (‘qualia ain’t in the head’) mention the history of failure but take a ‘fingers-crossed’ view.

    my own view is that the argument from failure by itself is not all that strong, although it is something and not nothing. still i do think reductive psychosemantics fails. i think this in large part because i think a kind of internalism applies to sensory intentionality, and that this rules out a very wide class of psychosemantic theories (fodor, millikan, etc.). (as you know – since you did a helpful blog post on some early work of mine on this – which i regret to say i missed until long after it appeared.)

  5. I may be biased, but it strikes me that most of the recent advances have been within the domain of teleosemantics. This may explain why the attitude at Rutgers has been less hopeful than elsewhere, given that teleosemantics is anathema in that department.

  6. Daniel Weiskopf

    I think that psychosemantics in all of its forms is probably not going to achieve its stated aim, understood as giving non-semantic metaphysically sufficient conditions for a state’s having the ‘appropriate’ meanings or semantic contents. (Whatever ‘appropriate’ means here.) I agree with Frances Egan’s comment above that the reasons for this are basically that Quine was right and the gavagai problem is absolutely pervasive, and probably unsolvable with the permissible resources. The only way to make progress that I know of involves making really strong, arguably non-naturalistic, assumptions about the kinds of properties that can enter into content-making relations.

    For the record, I first heard this argument from my old mentor (and Rutgers Ph.D.) Gary Gates, back at Brown. It’s in his paper ‘The Price of Information’, and also in his dissertation, which is what Fodor was responding to in Ch. 3 of ‘The Elm and the Expert’. Gary’s paper makes a good case that Fodor’s solution and most plausible variants just won’t work.

It’s fitting in retrospect that Quine delivers the final blow to the informational semantics project, since it was originally inspired by Quine’s notion of stimulus meaning (and also by Skinner). But the project only takes Quine half-seriously. This just shows the perils of trying to follow Quinean premises only part of the way to Quine’s own conclusions.

For those who think informational approaches are doomed, what is the reasonable alternative?

Overall, I think the work is quite good, given the empirically impoverished state in which these theorists have been working. The project won’t be “finished” until the neuroscientists and psychologists provide more relevant data to fill in the details. These are details you won’t be able to conjure up from the armchair. That doesn’t mean the ideas are wrong, just incomplete. Information is necessary, not sufficient, for semantic content.

Let’s consider the gavagai problem (rabbit versus undetached rabbit parts) wrt perception. It seems this problem is partly solved by our visual system. Vision builds in certain assumptions and defeasible heuristics about the world and how it is carved up into objects to help us settle on a single interpretation of visual stimuli. E.g., when I see a copper ovoid I see a single circular disc turned at an angle, even though the stimulus is equally consistent with an ovoid object facing me directly and lots of other situations.

Also note I do not see it as two undetached halves. If our visual system employs an independent object indexing/tracking system, and we could empirically monitor this system and its treatment of the penny (is it saying there is one thing there, or two things?), then we’d be on our way to a solution. (The idea of such an indexing system is from Pylyshyn, I think.) Our visual system could even have a primitive “counting” system that tells us, for small numbers of moving objects, how many there are. This could help it allocate attentional resources if looking for a rabbit that trips.

This isn’t a crazy idea. We know that brains care about individual objects. Consider simultanagnosia, the disorder in which subjects see literally one object at a time. Simultanagnosia provides an empirical window into the visual system’s assumptions about what constitutes an object (a hubcap, or the whole car, or the traffic?). It seems the visual system cares deeply about objecthood (and the Gestalt psychologists saw this, of course).

    Don’t throw out the baby with the bathwater. Just because you can’t figure out the weird cases doesn’t mean the basic ideas are wrong;  more data are clearly needed to provide the catalyst for conceptual innovation.

    If you want to see how useful it is to think of real nervous systems in informational terms, pick up a neuroscience journal and read an article by an experimentalist about sensory coding. Such work is not motivated by philosophers, but aims to find useful ways to capture how brains represent the world. What do we find? Information theory and other ‘indicator’ type measures to quantify how accurately brains represent things in the world. The stuff is likely indispensable. Not sufficient, but necessary.

  8. Brendan

Gualtiero, I see things less as a matter of half full vs. half empty, and more as a matter of moving goal posts.

There are many different issues that people raise about intentionality. Some insist that accounts of representation must do certain explanatory work, and are pessimistic; others think that more modest (and still worthwhile) explanatory goals are within reach.

For my part, I side (prima facie) with Eric. I don’t see what the reasonable alternative is to information-theoretic approaches (Usher 2001 and Eliasmith 2006 are promising, I think, though somewhat unknown). But that is because I am happy to move the goal posts – not by ignoring that misrepresentation and the disjunction problem are important, but rather by giving up the (often implicit) idea that the represents relation is like the lexical reference relation, or that we have identical concepts.

  9. Ken Aizawa

    Hi, Gualtiero,

    I think I was one of the three “Rutgers people”, having attended an NEH Summer Institute at Rutgers in 1993.

    I remember the discussion more in the way Frankie does. There is this core problem for the informational and teleosemantic approaches to naturalized content: the informational/teleological approaches don’t cut things as finely as do semantic concepts. I don’t recall saying that the project is hopeless, so much as that, as Frankie says, minor tweaks will not fix matters. And, as far as I can tell, there have not been any major moves since 1993.

    Maybe I was the one that suggested this was something of a consensus view. Maybe that is something of an overstatement. Maybe it was instead just a widely held view.

    My sense was that you seemed rather dismissive of this problem of informational specificity.

    I remember that at the NEH Institute Chalmers did one of his polls and asked something like “Which approach to original content do you think is most promising?” The most popular answer was “Searle’s”. I think the large contingent of Berkeley PhDs might have had something to do with that answer.

  10. Nick Georgalis

While informational states may be a necessary condition for representational states, as Eric Thomson claimed in an earlier post, there are important differences. Allowing that informational states, as they have been conceived by theories of psychosemantics, are necessary but not sufficient for representation is, however, to accept the failure of such theories.
One central feature of a representation that distinguishes it from information is that qua representation it represents a particular thing, feature, event, or state of affairs. This is evident when we consider our own, human, representational states. There are compelling arguments, some of which have been referenced in earlier posts, which show informational/causal/teleological accounts are in principle insufficient to account for such a particularity requirement—excellent reason to abandon them, further “epicycles” being of no avail.
    (Unpaid Advertisement:) My “Representation and the First-Person Perspective”, Synthese 2006, offers such a critique of the aforementioned theories of representation. It also provides a positive theory of representation together with a theory of intentionality; the latter makes a substantial addition to Searle’s 1983 theory. The Synthese article is an abridged version of chapters 1 & 5 of my book, The Primacy of the Subjective. Chapter 8, not included in my Synthese paper, argues that my theory of intentionality and representation can provide for determinate meaning and reference and offers detailed criticisms of Quine’s arguments to the contrary.

I’m a bit surprised that the gavagai worries are what’s causing some of you to give up on the psychosemantics project. It seems to me that a modicum of realist metaphysics plus a teleosemantic approach can solve this problem. Using Eric’s example, visual indexing for (multiple) object tracking: yes, this indexing system carries information both about objects and undetached object parts. But it’s an object tracking system because it’s *supposed* to track objects – it’s been selected for tracking objects, not undetached object parts. And we can say this because the most natural causal-explanatory account of the selection of this system mentions objects, not undetached object parts. Just like the bacterium’s magnetosome is selected for orienting towards regions of lower oxygen concentration, not undetached parts of regions of lower oxygen concentration; and electrons are attracted by protons, not undetached parts of protons.

    Now, you *could* accuse me of begging the question in assuming that whole-object causal explanation is more natural in each of these cases. (Same with grue, same with disjunctive properties, etc.) OK. But if you push this line far enough to really challenge teleosemantics, you’ll go all the way to a global anti-realism. And if global anti-realism is correct – there are no more or less natural ways of carving up the world for the purposes of causal explanation – then it’s no wonder psychosemantics fails, because there’s nothing objective out there for mental representations to refer to. (Besides, it is surely not up to the program in naturalized semantics to solve the realism-antirealism debate.) You can play in that sandbox if you want to, but I’m staying in this one with my modicum of realist metaphysics.

    While informational psychosemantics may have foundered (though I agree with Brendan that Usher and Eliasmith are worth looking at), teleosemantics is happily chugging away, with new work by Millikan, Papineau, Neander, Matthen, Shea, and others – I encourage you all to take a look at some of this stuff, maybe it will make you feel better. 🙂 I always thought it was Swampman that turned people off teleosemantics, and Fodor’s misplaced indeterminacy worries. These (related) gavagai worries came out of left field, from my perspective – thanks, Frances, and thanks Dan, for pointing out the Gates paper!

  12. Josh Weisberg

    Very interesting post!

    Eric–I’m not sure the visual system solves Quine’s problem–how does such a system disambiguate between rabbits and collections of undetached rabbit parts/stages of rabbithood/etc.? These all seem more fine-grained than anything the visual system provides.

    If naturalized teleosemantics fails, what follows, though? One thing might be non-naturalism about mental content, as Adam notes. Another might be a move away from “industrial strength realism” about mental content–a move to a more instrumentalist or interpretationalist approach (a la Dennett?). Or perhaps Churchland’s “state space” approach might have some merit.

Maybe a workable natural notion of content does not need to solve the disjunction problem. Or maybe there is no content–there’s just some rather vague, context-dependent but still useful (folk) theory which posits “schmontent.” Schmontent does not have the precise individuation conditions of content, but it allows us to (usually? occasionally?) predict and explain each other’s behavior.

  13. Eric Thomson

In response to: “Allowing that informational states, as they have been conceived by theories of psychosemantics, are necessary but not sufficient for representation is, however, to accept the failure of such theories.”

    This seems false. Nobody has ever said that informational states are sufficient. Everyone adds a great deal of additional apparatus; otherwise they allow in thinking thermostats and such.

    Information-based approaches were rejected outright for a long time precisely because of concerns with ‘thermometer theories of concepts’ (Sellars’ phrase) where pure covariation equals semantic content. Dretske came along and said, “Wait a minute: just because information isn’t sufficient, that doesn’t mean it isn’t necessary. Let’s see what other ingredients we can add to this informational core to get more interesting semantic contents. Obviously brains absorb and use information to get about in the world. Let’s see how far we can get when we supplement information with additional conceptual resources from biology, psychology, and philosophy.”  And out comes Knowledge and the Flow of Information.

I haven’t seen anything in this thread to make me think he was on the wrong track, or that my original post was wrong-headed. The Gates article is good enough, but my initial comment applies to it just fine, it seems. Another philosopher playing devil’s advocate, offering nothing particularly useful as an alternative, and ending up rejecting something that is clearly important to content fixation even if it isn’t the whole story.

Philosophers are weird. They tend to reject things wholesale rather than use the obvious good ideas and supplement them. They end up with an anemic view because it is missing the good bit of what they rejected. E.g., Churchland’s semantic theory is interesting enough, but will be even better when married to a more Dretskian approach (and it already is an informational approach even though Churchland doesn’t realize it – the information carried is about metric relations, not individuals – that’s how you Dretske a Churchland).

  14. Eric Thomson

Josh asks: “Eric–I’m not sure the visual system solves Quine’s problem–how does such a system disambiguate between rabbits and collections of undetached rabbit parts/stages of rabbithood/etc.? These all seem more fine-grained than anything the visual system provides.”

I’m not sure it does, either. More on that below, but my real main point was that I don’t trust my intuitions about this matter given our radically empirically impoverished state in neuroscience/psychology. Dretske has a great core idea, but because he was working in the same impoverished state, he was also limited to his imagination in coming up with possible solutions. That he might not have solved all possible problems doesn’t bother me much, because the core ideas are clearly so helpful. And I’m not saying he failed: some of his attempts to come to grips with these problems are extremely creative and interesting, and I’m not sure that he was wrong (e.g., his article ‘Reasons and Causes’ is brilliant, and his book Knowledge and the Flow of Information is truly a treasure trove, with so many amazing ideas hidden in footnotes and seeming side routes).

    In terms of the visual system concern specifically, I have a few thoughts.

1. If someone swapped your perception of a rabbit for a perception of a bunch of undetached rabbit parts, would you be able to tell? If not, then it isn’t clear these distinctions matter for conscious perceptual contents. We’d end up with an equivalence class or something. It’s when we start to individuate things conceptually that this problem arises. So perhaps the analysis works really well for perception, but more is needed for concepts. Of course, this is basically what Dretske already said in KFI (digital vs. analog contents, conceptual complexity measures to differentiate coextensional contents, etc.).

    2. See what Dan Ryder said above.

3. On the hypothesis that there is an indexing system that tracks individual objects. As Dan suggested, we could actually pull off an informational analysis of that. E.g., this tracking node only activates when there is a single object present, and attaches/detaches from particular objects. Over time, we see that it only reliably carries the information that there is a single object in the world that it is interested in tracking. This happens to conform to what we see in simultanagnosia (e.g., its “objects” are similar to the simultanagnosiac’s “objects”), and to our perceptual intuitions, and to people’s judgments about how many objects are in a scene, and it tracks all other measures of the number of objects that the visual system has ultimately assigned to the scene. When we show movies of rabbit parts coming together and pulling apart, we see that two such nodes are activated when the parts are apart, one node when they are together, and this tracks the psychophysics, etc.

In practice, we don’t face Quine’s problem. We can ask subjects, in English, how many objects they perceive in the scene when they say ‘rabbit’. We can correlate these judgments with neuronal and other psychological variables, and build hypotheses about what is going on in there. I’m talking about a defeasible empirical hypothesis, with multiple lines of relevant research that could help refine or refute it. And that’s precisely what the philosophers need more of: less analyzing, more hypothesizing. (My hunch is that many readers here are sympathetic to this view, so I am being a bit brusque in my language.)

I think both 1 and 3 are promising. I’m not certain they are consistent with each other, but I believe they are.

  15. kenneth aizawa

    Dan,

    I don’t think it’s the example of gavagai per se that is the problem.  That’s just a kind of touchstone for a lot of problems of specificity.

The objection to the “supposed to track” line, as I understand it, is the following. So, let’s say the frog’s eye represents fly, rather than moving black dot, because the frog’s eye is supposed to represent flies, since that is what is nutritious for the frog. The objection, however, is that there remains the perfectly naturalistic account according to which the frog’s eye gets the frog to fly-eating by representing moving black dots. It’s because snapping at moving black dots gets you flies lots of the time that it is worthwhile snapping at black dots.

Or maybe another less freighted example will work better. Maybe the primate visual system doesn’t represent apples or fruit or nutritious; instead it represents red roundish things. Eating red roundish things increases fitness since eating the red roundish things is nomologically correlated with being an apple, a fruit, and nutritious. Maybe this example will work better, since there has been at least a plausible scientific case that trichromatic color vision evolved in primates in support of frugivory. (Actually, apples are probably the wrong fruit, as perhaps are other details, but you get the point.)

It seems to me that if you buy this idea that perceptual systems give you what they are “supposed to”, then you are buying into the Gibsonian idea that what you perceive is affordances. But the rival to this view is that you don’t perceive affordances; instead, you sometimes (often? typically?) perceive only things that are correlated with affordances, such as shapes and colors.

  16. the positive theory there is simple: “the brain does it”

    I don’t think Searle has ever gotten much more elaborate than that, but maybe I just haven’t been following his output closely enough.

  17. kenneth aizawa

I might add that I don’t see how adding principles about whole objects being better explainers helps. A moving black dot is as much a whole object as is a fly, right?

  18. David Pereplyotchik

    Great discussion!

Here’s a passage from Godfrey-Smith’s “On Folk Psychology and Mental Representation,” in _Representations in Mind_, Clapin, Staines, and Slezak (eds.):

    “In the 1980s the problem of giving a naturalistic theory of mental content beckoned young philosophers like myself; this looked like a philosophical problem that was both fundamental and solvable. … Roughly twenty years on, how has the project fared? With some sadness and much caution, I suggest that things have not gone well for the Dretske-Fodor program. I doubt that we will ever see a satisfactory version of the kind of theory that Fodor’s Psychosemantics and Dretske’s Explaining Behavior tried to develop. Despite this, I do think we have learned a lot from the development of this literature. Some good partial answers may have been given to important questions–but not the exact questions that Dretske and Fodor were trying to answer. So I think it is time to start looking at different approaches to the network of questions surrounding belief and representation. This rethinking will involve looking again at some of the ideas of the nay sayers of the 1980s, like Dennett and Stich, but looking further afield as well.”

    I’m very sympathetic to what Godfrey-Smith says here, and I’m delighted to learn that many others are too. (The stuff about following Stich and Dennett will, of course, be controversial.)

    I am also sympathetic to the Quinean objections raised by several people. But I think there are other, related problems.

The strategy of the Fodor-Dretske approach is to *first* fix the content of a representational state, by reference to what worldly thing it correlates with, and only *then* to use that content-assignment in explaining the state’s role in cognition and behavior. This strikes me as the wrong way to proceed.

    Dennett pointed out, as early as _Content and Consciousness_ (ch. IV, pp. 76-8 in my edition) that a neural event’s being reliably elicited by, say, food is not sufficient for that neural event’s being a representation of food. In order to be *that*, it must, in addition, cause the right sorts of representations downstream and, eventually, behaviors that are, in some sense, appropriate specifically to food (for that creature). Dennett is, no doubt, following Sellars on this point, for whom language-language and language-exit transitions were as important as language-entries in specifying the content of a mental state–even a perceptual state.

The moral, I think, is that the determinate content of a state is constituted by its relations not only to the environment (though that may well be necessary, especially for perceptual states) but also to other states and, in some (all?) cases, to behavior.

    To the extent that teleosemantic approaches meet this holistic constraint, I think they have a better chance. I hope friends of teleosemantics will weigh in on that.

  19. Eric Thomson

“The moral, I think, is that the determinate content of a state is constituted by its relations not only to the environment (though that may well be necessary, especially for perceptual states) but also to other states and, in some (all?) cases, to behavior.”

    I think this is a good point, and points to a weakness in Dretske’s approach, a weakness he has always acknowledged (especially the role of relations to other mental states in fixing content).

Millikan is also focused on the output side of things (pushmi-pullyu). Dretske balks at a focus on behavior in KFI (see his discussion of ‘consequentialism’). He recently changed his mind on this in personal correspondence, when I pointed out that motor representations in M1, and efference copies in the saccade system, are not sensory representations. I didn’t get the impression that he wished to follow up on it, unfortunately, but I hope he does.

    Regardless, those are details to be argued about, all in a context that sees information as a crucial, ineliminable, idea. They were on the right track, but not quite there yet. Dretske brought the baby back in from the puddle of bathwater, and the baby is here to stay even though it is not sufficient (and nobody has ever said it was sufficient: just read KFI).

I just don’t understand why people would describe these developments in such black-and-white terms, as a failure, when they so clearly pushed the discipline forward. Science works slowly, in fits and starts, with one group bringing in a piece of the puzzle. Godfrey-Smith mentions Explaining Behavior, and that is a pretty common focus in the Dretske corpus. I think people miss out on just how much original material (and responses to all of the objections people still bring up) is in Dretske’s KFI that is not in EB.

Incidentally, it is fascinating reading Consciousness and Content, KFI, Churchland’s ‘Stalking the wild epistemic engine’, and Bennett’s ‘Linguistic Behavior’, as they are all in orbit around the exact same issues, coming up with extremely similar ideas (especially with respect to the importance of being a system that can learn before you say it has semantic contents), but from slightly different angles. I see it as the salad days, and there are great ideas in all those works that still haven’t really been appreciated. Especially Bennett’s book, because it seems so far afield from these issues superficially.

    As I’ve said, they were all working from the armchair, and the actual answers won’t likely come until the science has developed more, but it really is amazing stuff. I frankly don’t see the teleosemantic spin that became fashionable in the 90s adding all that much to the core story.

  20. Eric Thomson

In practice, though, the neuroscientists try to describe things in a way that matches what is most likely relevant in the animal’s environment. You wouldn’t describe a detector as a ‘small moon rock detector’ even if it responds to such things. So it’s not as if there are no constraints; it’s just that specifying them might not be worth the cost, given that the payoff is so minor to the experimentalist. If it turns out there is really an equivalence class of ‘referents’, that doesn’t seem to matter for the fly perceptual system/evolution.

My point 1 in an earlier comment seems relevant here (about perceptual versus conceptual content; it starts with ‘If someone swapped your perception of a rabbit for a perception of a bunch of undetached rabbit parts, would you be able to tell?’).

  21. Nick Georgalis

When I said “Allowing that informational states, as they have been conceived by theories of psychosemantics,” the phrase ‘as they have been conceived by theories of psychosemantics’ was meant to include those richer theories Eric mentions. I should have been clearer about that. In fact my Synthese paper has a detailed critique of Millikan’s and Dretske’s work, and my book also includes a detailed critique of Churchland. The claim is that these theories—exactly as the proponents expound them—cannot in principle account for the particularity of representation. The latter can be argued independently of Quinean arguments, as I have, but Quine’s argument can also be used. If (1) a theory of representation must satisfy this requirement, and (2) Dretske’s, Millikan’s, and other such theories cannot in principle satisfy it, then these theories fail as theories of representation. Saying they fail is not to say that they have nothing of interest to teach us. It is only to say that they cannot offer an adequate theory of representation.
Since Eric was responding to me, I assume he had me in mind when he criticized philosophers who reject theories while “offering nothing particularly useful as an alternative”. My work mentioned above, in addition to arguing for (1) and (2), develops a detailed alternative. Whether it’s useful or not is another matter, but that could only be determined by examining it.

  22. David Pereplyotchik

    It’s true that Dretske does not describe his project as one of giving sufficient conditions. But that is how the project of naturalizing content is often perceived. I seem to recall Fodor characterizing it that way, but I don’t remember where.

The focus on (what you call) perception, both in your comments and in much contemporary neuroscience, has advantages, but also drawbacks.

    For the experimentalist, the big advantage has to do with designing experiments. While the details are often quite complex, the basic strategy is simple: manipulate the stimuli and record the neural responses.

    But, for the philosopher who is bent on characterizing the contents of mental states *in general*, there is a serious drawback to focusing myopically on perception. The obsession with perceptual states is somewhat justified, of course, given the rich empirical data that’s available to constrain the philosophical project. But I think it has the unfortunate effect of blinding the philosopher to the importance of the role that the state plays in the broader cognitive system. It makes it seem as though reliable correlation with stimuli–plus some bells and whistles, e.g., selection-for (Millikan), asymmetric dependence (Fodor), or “learning period” (Dretske)–will be sufficient to determine content. I urged in my previous post that this is not so.

    One might broaden one’s focus to include motor states, as you suggest. And, again, the experimentalist is happy, because all s/he has to do is figure out the relation between movements and neural events. But I think this has the same problem. It ignores the relationship between the state in question and all of the other mental states to which it’s related.

    A good way to see what’s wrong with the Fodor-Dretske program is to stop thinking about peripheral states for a moment–perceptions and motor states–and consider more “internal” states. It’s even better if we can get away from assertoric states–perceptions, judgments, beliefs, etc.–and focus instead on nonassertoric states, like wishes, hopes, fears, imaginings, supposings, wonderings-whether, being pleased that p, being surprised that p, being angry that p, being concerned that p, caring whether p, being interested in whether p, etc. (Note: Only *some* of these are emotions; not all.)

    All of the states I just listed have the following properties:

    a) They are clearly and obviously mental.

    b) They clearly and obviously have intentional contents.

    c) They are not, in general, correlated with any particular stimulus, nor with any specific motor response.

    d) Because of (c), they are not studied by neuroscientists nearly as much as perceptual and motor states. (They’re not totally ignored, of course, but you have to admit…)

    It seems to me that the contents of these states have a great deal to do with their relations to other mental states. Indeed, it is only in virtue of such relations that they are hooked up to the environment and to behavior at all.

  23. Eric Thomson

One important consideration is that we like to study things that are amenable to a good animal model. We definitely tend to assume that what works in the rat will provide a key scaffolding for more complicated critters like monkeys and humans. This will bias us away from abstract conceptual propositional thought, toward emotion, perception, motivation, and motor control (not just toward motor control and sensory coding, as you suggested). This seems a reasonable approach. Data fuel conceptual innovation. Rarely, if ever, do the key conceptual innovations come from people in the armchair. E.g., consider the place cell, the receptive field, dopaminergic reward-predicting neurons.

    But you are right, we are still not looking much at connections among mental states. I wonder how crucial that is in the rat…

  24. Hi Ken!

Black dots and flies have very different causal powers, and there are actual selectional scenarios that differentiate between them. The way you describe the case – “maybe it’s worthwhile snapping at black dots” – makes it sound like “maybe the frog has black dots in mind, and gets flies as a result.” But you don’t get to talk about what the frog has in mind any more than I do. All we can do is point to anything that might make content more specific in this case.

    There are two things the teleosemanticist can point to. First, there is a causal system of producer (tectal neurons), varying intermediates produced by it, and the consumer (tongue flicking mechanism). Second, there’s the causal explanation of how this system has contributed to differential survival. The tongue flicking (consumer action) has contributed to survival because it results in the ingestion of nutritious proteins, carbohydrates etc. (Not black dots – this is straightforward causal explanation.) And when you look at how the producer contributes to this explanation, it’s because (again, straightforward causal explanation) it produces tectal activations that map onto locations of little flying things. Put these together, and you’ve got: the intermediates are supposed to map onto locations of little flying nutritious things. (Depending on the design principles of the producer and/or consumer, it could be more specific than that. E.g. maybe the producer has a way of distinguishing bugs from seeds; or there has been, historically, a differential contribution to fitness of bugs vs. seeds.) This is general, but not objectionably “indeterminate.”

Of course what we really care about, with respect to determinacy, is our own concepts. What evolution designed here are not, in general, specific concepts, but a concept formation mechanism. In my SINBAD paper (https://homepage.mac.com/ancientportraits/drsite/SINBAD_Semantics.pdf) I argue that the concept formation system is designed to produce isomorphisms to regularity structures involving individuals, kinds, and other “sources of correlation” or “sources of mutual information” – roughly, things that support inductions. (Connections here with Millikan’s On Clear and Confused Ideas.) Again, it depends on the specific design principles of the mechanism. But this is how our concepts can go beyond the representational capacities of our visual systems – supposing, for the sake of argument, that teleosemantics dictates our perceptual colour representations are supposed to track fruit. (Highly doubtful, even at the retinal or thalamic level!)

    I don’t have a problem with it turning out that some of our perceptual contents are affordance-like! (Do you think you know the contents of your perceptual representations a priori?) But it will depend on the details, of course.

  25. Hi Pete! No, I don’t think this commits me to saying that it’s impossible to represent undetached rabbit parts, either atomically or compositionally. First, atomic representation: undetached rabbit parts have different causal powers from rabbits, so there should be some (admittedly weird) selectional scenario in which what’s selected for is tracking undetached rabbit parts. Because this sort of selectional scenario would be so unusual, though, one would expect representation of undetached rabbit parts always to be compositional, via “undetached”, “rabbit”, and “part”.

  26. It’s ironic that you quote Godfrey-Smith in support of your move to a version of causal role semantics. I think the main reason PGS is pessimistic about the naturalization project is that what he takes to be the best theory – Millikan’s view – can’t explain how truth can be a fuel for success. But causal role semantics has even worse problems with this (not to mention with error, reference etc. And I agree with Fodor that holism isn’t a virtue!)

    There’s no reason why a teleological approach can’t be married with causal role semantics. (I think van Gulick toyed with this idea.) I just think that most of the relevant functions will be outside-world directed. That’s what matters to the organism, when it comes to perception and thought. (Though there will be plenty of internal Normal conditions for performance of these functions.) Also, we need to be careful not to confuse ways we have of identifying the content of beliefs and desires with the facts that determine those contents.

  27. David Pereplyotchik

    I completely agree that our theories of the mind should be applicable to all creatures. And it’s certainly true that progress in cognitive science has often come from animal models. By contrast, relatively little progress (if any at all) has come from armchair speculation.

    One issue that seems to be looming here is how exactly we should conceive of the relationship between work in cognitive neuroscience and the philosophical project of naturalizing (determinate) content. The discoveries of the place cell, the receptive field, dopaminergic reward-predicting neurons, and so on are obviously hugely important achievements. But, having made them, can the neuroscientist claim to have addressed the question of what content some neural event has? It’s not so clear to me that s/he can. If content were just correlation+bells-and-whistles (see my previous post) then the answer would seem to be “Yes.” But the project of naturalizing content has, I think, much broader concerns. It seeks to explain behavior and cognitive functioning by reference to the intentional content of various mental states. It’s not at all clear to me that pinning down the conditions with which a neural event is correlated will answer the kind of question that Fodor, Dretske, and others want to answer.

    You ask how crucial connections between states might be in studying the rat.

    1. Naturalistic theories of content are supposed to apply to all creatures, including humans. If Fodor or Dretsky were to learn that their theory delivers determinate content assignments only for states of the rat (or the c. elegans worm) they would, I think, consider themselves to have failed.

    2. The rat is an extremely smart animal. It doesn’t know about quarks and democrats, sure, but its cognitive economy is surely rich enough that connections between its internal states will play a major role in determining the contents of those states. The rat is not, after all, a Sphex wasp, whose behavior is notoriously inflexible. Its desires, plans, and emotions evolve dynamically and make essential contributions to its behavior.

    This is a good place to note what seems to be a point of disagreement between us. You seem to take emotion to have less to do with conceptual representation than propositional thought does. By contrast, I think that emotions have intentional contents; consider being angry, worried, surprised, depressed, joyous, sad, elated, or frustrated THAT P.

    Also, it seems to me hard to explain rat behavior without supposing that rats wonder whether p. Wondering–asking questions internally, as it were–is another state that is only indirectly linked to stimuli and behavior.
    ____

    Finally, I’d like to point out that many of the comments above reference the disjunction problem and the related worries pertaining to Quine’s indeterminacy puzzle. I’m surprised that no one mentioned the hard problems regarding representation of very abstract or simply nonexistent things (e.g., the internet & phlogiston).

  28. Hi again David! I agree that it’s myopic to focus only on perception. But you’re wrong to think that Millikan does this, or that her theory is “reliable correlation plus selection-for.” (That’s closer to a description of Dretske.) Hers isn’t an information-based theory.

    My model-based theory also pays attention to the full range of attitudes. The rough idea is that we have internal models (with teleosemantically determined contents), and those models can occupy various causal roles to implement the attitudes – classic psychofunctionalism, really. See chapter 5: https://homepage.mac.com/ancientportraits/drsite/book.pdf .

  29. “I’m surprised that no one mentioned the hard problems regarding representation of very abstract or simply nonexistent things (e.g., the internet & phlogiston).”

the very first comment on Gualtiero’s post brings up intentional inexistents

    🙂

    Anyway, David, I’ve been enjoying your remarks.

  30. David Pereplyotchik

    Very embarrassed about the ‘Dretsky’ typo in my previous poset. It’s so hideous!

    Thanks to Dan Ryder for the replies.

I did not mean to suggest that PGS holds the view that I favor. I quoted him mostly to extend Adam Pautz’s list of published responses to the question that prompted this discussion. I also happen to agree with everything he says in that passage–indeed, as I recall, in that entire paper. If he disagrees with me about holism or CRS, then so be it.

    I’m not sure what problems you think CRS has with regard to error and reference, nor why you agree with Fodor about holism. His arguments from the publicity of determinate content do not strike me as at all persuasive. Nor do his arguments against the coherence of the notion “similarity of content.” (I wonder whether there’s a growing consensus about this as well; it would be nice to learn that there is.)

    Perhaps these things are made clear in your book. I am grateful for the link to the PDF file. Many thanks for that!

    My apologies for the crude characterization of Millikan’s view. I am not as familiar with her work as I would like to be, so I should not have made any claims about it.

  31. Thanks for the response, Dan! (Your stuff is always quite interesting. I’m grateful for your contributions to this thread.)

    I’m not quite following the causal-powers part of your response, though. Given that every experimental condition that has rabbits present is also going to have URPs present, and vice versa, what basis is there for distinguishing their alleged causal differences?

  32. David Pereplyotchik

    There’s nothing like writing a sentence that expresses your frustration with a typo and ending that very sentence with a typo!

Thanks, Pete, for pointing out that the very first comment mentions intentional inexistents. I must have forgotten about it by the time I finished reading all of the other comments.

    I wonder if Dan Ryder can say a word about how either his view or Millikan’s addresses the issue of intentional inexistents–phlogiston, fairies, angels, gods, and, more generally, the denizens of fictional works, written and spoken.

  33. PGS writes great stuff! I’m co-editing Millikan & Critics for Blackwell, and we have something in there from him, comparing Ruth’s stuff to Skyrms’ recent work on signalling systems. That book will be out next year.

    There’s a summary of where I think things stand with regard to CRS in my review of naturalized psychosemantics from the Routledge Companion to the Philosophy of Psychology: https://homepage.mac.com/ancientportraits/drsite/Naturalizing_content.pdf . (In Models in the Brain too, but easier to find in the article!)

    “Dretsky” makes it more like “Gretzky”, which is a good thing in my (Canadian) book. 🙂

  34. Well, I just happen to have a work-in-progress on that very topic! 🙂 (Thanks in part to Pete Mandik, who will be in the acknowledgements section as soon as it exists!) Comments gratefully received…
    https://homepage.mac.com/ancientportraits/drsite/empty_concepts.pdf
    The abstract: “Externalist theories of representation (including most naturalistic psychosemantic theories) typically require some relation to obtain between a representation and what it represents. As a result, empty concepts cause problems for such theories. I offer a naturalistic and externalist account of empty concepts that shows how they can be shared across individuals. On this account, the brain is a general-purpose model-building machine, where items in the world serve as templates for model construction. Shareable empty concepts arise when there is a common template for different individuals’ concepts, but where this template is not what the concept denotes.”

  35. Eric Thomson

Not sure about Millikan, but Dretske handles them by positing an original set of basic concepts (e.g., horse, horn). Once these are acquired in an informational milieu, they can be combined into new complex concepts (e.g., unicorn) that fail to refer.

  36. Thanks, and same to you, Pete! Here’s a quick answer: a rabbit and an undetached rabbit part are different things, so they have different properties: ergo they have different causal powers. (For example, a rabbit and an undetached rabbit liver reflect light rather differently. The latter reflects light similarly to how a detached rabbit liver does.)

    So, for example, there could be a causal process whereby any individual rabbit part will produce effect E, but if *all* the parts enter into the process, effect E fails to obtain. We could build a scanner to do this. (Things are different if the indeterminacy is between “rabbit” and “the full complement of undetached rabbit parts”, but that’s not a case of inscrutable reference – they aren’t different things.)

  37. Dan, thanks for posting the link. I remember discussing this stuff on the Brain Hammer blog like a million years ago. I skimmed the paper quickly and was quite pleased to discover a blob named after me! I look forward to giving the paper the thorough read it deserves. Cheers!

  38. Wow, I can’t believe I’ve been missing this thread. I’m going to read it carefully but, in the meantime, I would like to say that I am one of the people who believes teleosemantics to be a viable project. In fact I’ve devoted most of my PhD dissertation to defending a teleosemantics of sorts from the usual host of objections (indeterminacy, empty concepts, etc.).

    If anyone is interested, I have a draft about my “solution” for the indeterminacy problem here: https://www.manolomartinez.net/wp-content/uploads/2010/11/Back-to-the-IP-Nov-2010.pdf

    Manolo

  39. kenneth aizawa

    Dan,

    A few points:
“The way you describe the case – ‘maybe it’s worthwhile snapping at black dots’ – makes it sound like ‘maybe the frog has black dots in mind, and gets flies as a result.’ But you don’t get to talk about what the frog has in mind any more than I do.”

I don’t mean this in the way you describe. The frog causally snaps at black dots. In fact, I take it that the frog will snap at moving black dots in preference to flies that don’t move. This is some basis for saying that it does not represent flies per se, but rather black dots, and that representing moving black dots is part of a good way to catch flies. This seems like a perfectly sound naturalistic, selectionist account of how the detectors came to be. (And neuroscientists typically don’t postulate moon rock detectors, since neuron firings typically don’t correlate as well with the presence of moon rocks as with other things.)

    “Of course what we really care about, with respect to determinacy, is our own concepts.”

Ok. Take another example. What John (the consumer) might need of a potential mate is something like Jane’s reproductive fitness. The way this might work is that what John’s visual system (the producer) is most strongly correlated with is facial symmetry. What the teleosemantic theory seems to be committed to is the idea that the visual system does not represent facial symmetry, but reproductive fitness. Reproductive fitness is what the consumer needs. So, the theory seems to yield the wrong contents.

    Notice that it is a perfectly good evolutionary selectionist story to say that the visual system represents facial symmetry.  There is, in the teleosemantic literature, this idea that the facial symmetry kind of story (like the black dot story) is somehow bad evolutionary biology.

    “I don’t have a problem with it turning out that some of our perceptual contents are affordance-like! “

    That’s not the character of the objection; rather it is that according to the teleosemantic line, all of our perceptual contents are affordance-like.  That seems wrong.

  40. Ken (if I may jump in),

What you say is essentially right, and that’s why the following toy teleosemantic theory will not do:

    (TTT) M has the content ‘There is an F around’ if it is its function to indicate/track information about Fs.

I don’t think many teleosemanticists (as opposed to researchers working in neighbouring fields with sympathy for teleosemantics) defend TTT or any of its one-line variants.

One point at which one can be subtler than TTT is in specifying what counts as the relevant consumer. In your John/Jane example, it’s likely that the consumers of whatever visual modules search for facial symmetry are further subpersonal modules. According to Millikan, the representation’s real value (its content, more or less) is whatever it is that has Normally explained that representations of this kind have helped their (subpersonal) consumer fulfil its function. With a careful specification of what this function is, one can avoid the implausible consequence that this representation means reproductive fitness.

    Btw, it should be noted that this representation is not John’s visual experience — what fixes the content of that other thing seems a much more complicated business.

  41. ken, you say “according to the teleosemantic line, all of our perceptual contents are affordance-like” (you mention contents like *so and so is nutritious* earlier, as opposed to *so and so is red*). i think that’s an interesting objection.

    (i’m not sure i’d endorse yr strong claim, but i make a similar point at pp. 51-2 here: https://webspace.utexas.edu/arp424/www/simple2.pdf – i say a teleosemantic theory like millikan’s runs the risk of entailing that some of our actual taste experiences, for instance, only represent properties like being bad or being poisonous, and not the chemical properties our taste systems are causally sensitive to. on intentionalism about phenomenology, that’s a problem, because such properties are too coarse-grained to fix the phenomenal character of those experiences.)

could you point me to some discussions of this sort of point? i take it that it is related to Pietroski’s hypothetical snorf counterexample. the worry there is that Millikan’s brand of teleosemantic theory entails that what is represented is only *snorf-free* when what we want is *red*. in response millikan bites the bullet, saying only *snorf-free* is represented and not *red*. (tho in discussion she told me that she thinks her view entails that our actual color experiences do represent colors understood as reflectance properties, just like (say) tye’s tracking theory.) by contrast, papineau says in the intro to his recent teleosemantics volume that in this case he can get *red*. anyhow Pietroski does not make your strong claim – “according to the teleosemantic line, all of our [actual] perceptual contents are affordance-like.” do you know of anyone who makes this sort of strong claim? again, i ask because i’m very interested in this general issue, and think it’s an interesting objection.

  42. kenneth aizawa

    Hi, Manolo,

    I’m sure there is plenty of positive tweaking to be done, but there is corresponding critical tweaking to be done.  Many folks, I think, have seen the basic character of the problem and have a sense of how to do the tweaking.  But, Eric is exactly right that there is rarely much profit in writing a critical paper that fleshes out these details.

  43. kenneth aizawa

    Hi, Adam,

“i’m not sure i’d endorse yr strong claim”
Well, I am trying to keep the discussion simple, so I allowed myself to be a bit incautious. But this is, after all, only a blog post and not a philosophy paper. So, I’m ok with that.

I don’t actually know of any published discussions of this. I remember at the 1993 NEH Summer Institute that Fodor was giving Millikan a very hard time about this kind of thing. His example was trout and trout shadow and bears hunting trout. My take was that she was unimpressed by the criticism, but that Fodor was exasperated by this lack of impression. So, my sense was that the basic problem was out there. 20 or so eager assistant professors from around the country heard it. So, no one there really had any incentive to write it up as their new idea. It was “what everyone knew”. My guess is that this is the sort of problem that might have had to be rediscovered.

  44. Hey Ken, all,

I was one of the Berkeley PhDs there. Speaking for myself: my diss was all about psychosemantics; I argued that what it was missing was a relation to normativity, some concept of being really valuable to the system that had these informational states. I’d be inclined now to say that something in the neighborhood of psychosemantics is the only game I know of, and that it has serious, indeed foundational, problems. But (as Pete pointed out) what doesn’t? Especially: what interesting, useful philosophy doesn’t?

    One new interesting take on the area is coming from the Berkeley philosopher Hannah Ginsborg.

    I wish I could recall that vote. I might have voted for Searle only because he seems pretty resolutely in favor of one of the themes here: there will be no non-question-begging, non-circular reduction of intentionality to anything else.

    Cheers,
    Tony

  45. Hi Ken! Crudely, there are three theoretical moves available within teleosemantics: take the producer side to determine content (moving black dot), the consumer side (nutritious thing), or both (moving black nutritious dotty thing). You seem just to be stipulating, contra Millikan, that we should consider only the producer side. Whereas the growing consensus among teleo folks seems to be that you need to consider both sides.

    Maybe your reason to ignore consumers is the worry that it delivers only affordance-like perceptual contents. I don’t think there’s much of a risk that teleosemantics would deliver this result. For instance, many perceptual systems produce representations that are consumed by a wide variety of consumer processes or systems, where the only univocal explanation for how the producer has contributed to fitness involves tracking spatial geometry – not just being conducive to step-climbing, for instance. (But see my “general point” below.)

    Now, maybe Pietroski’s objection (mentioned by Adam) based on the snorf/kimu example is a theoretical reason to consider only the producer side. Pietroski’s worry is that when you focus on the consumer side (either exclusively or in part) in determining content, the contents delivered (absence of snorfs, nutritious black things, high oxygenation or whatever) can’t play the right role in rationalizing the organism’s behaviour. And if they can’t do that, this must be “content” in some other sense than what we mean when we’re talking about beliefs and desires etc. Is that one of the reasons you want to insist on paying attention only to the producer side? (My response is in my “general point” below.)

    Re: the facial symmetry example, Manolo is right. (Although I think it’s better to frame it in terms of what Normally causally explains trait proliferation via the consumer, rather than talking about what allows the consumer to fulfill its function – though they amount to much the same thing.)

    General point: What I personally care about is mental representation, which (in us) generally means cortical representation. And if my SINBAD account of cortical representation is correct – that it’s a general-purpose model-building machine that is supposed to construct isomorphisms to regularities involving individuals, kinds, and other “sources of mutual information” – then there aren’t any worries about indeterminacy, excessive representation of affordances, snorfs, etc. within our mental representations (including perceptual ones). It really does depend on the details here, as Manolo intimated. https://homepage.mac.com/ancientportraits/drsite/SINBAD_Semantics.pdf And frankly I don’t much care how it turns out with the frogs, and I don’t think you should either. 🙂

  46. Well, we’ve been busy doing the positive tweaking in the literature, but the responses we tend to get in conversation (at least from the penumbra of Rutgers) don’t seem to take account of recent work – or even Millikan’s original work. (I don’t mean in this thread.) Maybe it all comes from Fodor’s mistakes about evolutionary theory, as amply demonstrated by his very properly panned book with Piattelli-Palmarini. At least, I think it’s his mental block about “selection-for” that explains why he gets exasperated with Millikan. And it’s because Millikan *does* understand evolutionary theory that she gets exasperated with Fodor! 🙂

  47. I’ve seen two papers recently that are vaguely Rutgers-related and argue that inscrutability is an unsolved problem — not so much for psychosemantics, but the point would apply just as much there:

    J. Hawthorne. Craziness and metasemantics. Philosophical Review, 116(3):427–440, 2007.

    J. R. G. Williams. Eligibility and inscrutability. Philosophical Review, 116(3):361–399, 2007.

  48. I think what Dan says is interesting: “many perceptual systems produce representations that are consumed by a wide variety of consumer processes or systems, where the only univocal explanation for how the producer has contributed to fitness involves tracking spatial geometry – not just being conducive to step-climbing, for instance.” Just wanted to say – in case anyone’s interested – that Papineau’s response (which I alluded to in my previous comment) to the snorf example (under one elaboration) is along exactly these lines. (Not sure I accept it – I criticize the move in the afore-linked-to paper – but I think it is probably the best response.)

  49. kenneth aizawa

    Dan,

    “You seem just to be stipulating, contra Millikan, that we should consider only the producer side”

    I don’t think so.   I am running the argument as a reductio.  The idea is that if we let Millikan, say, have her way and let the consumer (John) determine the content of what the visual system (the producer) emits, then the representation output by the visual system means reproductive fitness.  Maybe you don’t take this to be a reductio.  But, my understanding of the bit of the scientific literature I have seen on this is that the scientists take it that humans track and represent facial symmetry as proxy for reproductive fitness.  Humans don’t represent reproductive fitness.  I could be mistaken in my understanding of the science or the scientists could be wrong, but I don’t think I’m in the business of stipulating anything.  I’m just trying to keep my philosophy scientifically informed.

    “Maybe your reason to ignore consumers is the worry that it delivers only affordance-like perceptual contents.”
    I don’t know that it is the appeal to consumers per se that delivers only affordance-like perceptual contents.  It seems to me that that is just Millikan’s view in “Biosemantics” circa p. 291 that representations are of affordance-like things.  That is her means of giving a “correct” univocal content to the bacterial magnetosome.  In that passage, the bacterium represents oxygen-free water and not some magnetic direction, since having oxygen-free water is what “matters” to the bacterium.  So, I’m taking the idea of what matters to be something affordance-like.

    What the magnetosome represents then is univocal; it represents only the direction of oxygen-free water. For that is the only thing that corresponds (by a compositional rule) to it, the absence of which would matter – the absence of which would disrupt the function of those mechanisms which rely on the magnetosome for guidance. (“Biosemantics” p. 291)

    Now, maybe you don’t follow Millikan here.  And perhaps you can make the case that spatial geometry is what matters to some consumer.  But, then, that’s what I take to be treating spatial geometry as an affordance.  My understanding of the Ecological Psychologists is that they sometimes differ among themselves as to what is or is not an affordance.

  50. Bill

    I completely agree with the above. However, I have a concern about the lack of clear implications of a theory of rat intentionality for psychosemantics. There is reason to think that most non-human cognition is non-verbal, so I suspect a working science of rodent intentionality is going to completely bypass the whole rabbit/undetached-rabbit-parts type issues, which really have to do with the intentionality of verbal language.

  51. David Pereplyotchik

    I suspect that many of the folks who are contributing to this discussion will want to argue that the intentionality of verbal language is somehow derived from the intentionality of internal states of one stripe or another. That seems to be a dominant view in the philosophy of language and mind. Call it the Received View (RV).

    I suspect that proponents of RV will be puzzled by your worry. They will want to know what it means to say that “most non-human cognition is non-verbal.” Is human cognition, by contrast, verbal?

    RV has it that the intentional content of verbal episodes is derived from cognitive states. These states may be language-like, as Fodor has argued since his famous book, The Language of Thought (1975). If Fodor is right, then some of the cognitive states of rodents are language-like as well, despite the fact that the rodent does not produce speech acts or any other type of verbal behavior. The Quinean indeterminacy issues will arise for these language-like states just as easily as they do for the speech acts of adult human beings.

    For my part, I am persuaded by the arguments in David Rosenthal’s paper “Intentionality” that RV has little going for it. Nevertheless, I am skeptical that a “working science of rodent intentionality is going to completely bypass the whole rabbit/rabbit undetached parts type issues.”

    As I wrote in the post to which you responded, rats are very smart animals, who exhibit complex, flexible, and oftentimes rational responses to events in their environment. The flexibility of their behavior leads us to posit states in the rat that go well beyond simple detection and sensory-motor connections. Mere detection-and-response states would yield rigid, inflexible behavior, as they do in the Sphex wasp. In order to guide behavior in the ways that they do, the internal states of rats would have to be representational in a more full-blooded sense than mere detection-and-response mechanisms. They would have to, at a minimum, keep track of objects and properties in the environment, thus conceptualizing the world – though perhaps in a different way than we do. The capacity to do this does not, I think, rest on a prior capacity to participate in the kinds of linguistic practices that we find in human communities. On this point, I have to part ways with Robert Brandom.

    Now, as soon as we take the step of positing rich internal states in the rat, we face the issues surrounding determinate content. It seems to me that the Quinean inscrutability worries resurface here, regardless of whether the rat’s cognitive states are language-like in Fodor’s sense.

  52. Eric Thomson

    Are there published papers about rats exhibiting such rational behavior? I work with rats every day, and while I think they are brilliant little operant learners, I’m not sure they are “rational” (defining ‘rational’ is the problem, of course). When I try to get them to be flexible in their behavior once they’ve learned one way to do something, it is very tough. But they are extremely sensitive to reward, easy to motivate by depriving them of food or water, and become amazing operant learners. I am not sure there is anything like conditional reasoning in rats demonstrated in a compelling way. Or negation. Overall I think Dretske’s framework works really well for the emergence of an initial representational system, especially in relatively simple systems whose “rationality” amounts to being very expert at operant conditioning. I.e., nearly all animals.

    Incidentally, in our lab we do see very different sensory responses depending on the behavioral state of the rat. E.g., if it is sitting there quietly versus whisking we see very different responses, and have some ideas about what this means for sensory representations. So context matters. Is that an example of content being influenced by other mental states? Perhaps. Is it hard to fit into a Dretskian framework? Not at all. Indeed, it is natural to think about it in informational terms. Think of it like attention, which can increase the signal-to-noise ratio of a sensory region. That’s an informational characterization of an intramental influence on sensory representations.
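
    (If anyone wants a back-of-the-envelope feel for that last claim, here’s a toy calculation in Python – purely illustrative, since it treats the population response as a simple Gaussian channel, which real spike trains certainly are not:

        import math

        def gaussian_channel_capacity(snr):
            # Shannon capacity per sample of an additive white Gaussian
            # noise channel, in bits: 0.5 * log2(1 + SNR)
            return 0.5 * math.log2(1.0 + snr)

        baseline = gaussian_channel_capacity(1.0)  # unattended: SNR = 1
        attended = gaussian_channel_capacity(3.0)  # suppose attention triples SNR
        print(baseline, attended)  # ~0.5 vs ~1.0 bits per sample

    The numbers are made up; the point is just that on any standard informational measure, boosting SNR literally increases the information a sensory region can transmit.)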

    You might actually agree with me – in the previous paragraph I’m just shooting from the hip, not actually disagreeing with anyone in particular. As for content-sensitivity in rats (especially specific contents), I’m not as sure.

  53. David Pereplyotchik

    If you work with rats, then you are much more of an authority on their behavior (and the studies thereof) than I am. My experience with rats is limited to watching them scurry around on NYC subway stations. It’s these behaviors that I had in mind in my posts, not anything I’ve read in published studies.

    Dennett has made famous a description of the behavior of the Sphex wasp. The wasp routinely checks its nest before bringing in food for its eggs. If the food is moved by about an inch while the wasp is in the nest, the wasp will move the food back to its original spot and repeat the nest-check subroutine. It will do this over and over without learning or adopting a new strategy.

    This rigid, inflexible behavior contrasts sharply with the behaviors of rats, who learn quickly and hence respond flexibly to obstacles and threats. As Mark Okrent argues in his book _Rational Animals_ , this is what compels us to attribute beliefs and desires to the animal–conceived of as states that interact to give rise to what Okrent calls instrumental rationality. Okrent doesn’t discuss rats, but rather the famous broken-wing display of the plover. I doubt that rats are more like wasps than like plovers. Perhaps you disagree, in which case I defer to your judgment. But this is the sort of thing I had in mind.

    I’m not sure what you mean by “conditional reasoning.” When the plover engages in the broken-wing display (but only when protecting its nest), and adjusts its behavior to that of the predator/threat, is it engaging in conditional reasoning? If not, then I don’t see conditional reasoning as a necessary condition for instrumental rationality or the possession of beliefs, desires, or intentions.

    As I mentioned before, Dennett (1969) discusses an example in which a neural event (N) in a dog is reliably elicited by food, but the dog does not treat the item as food. Instead of sniffing and eating the stuff, the dog sits on it. Should we assign the content ‘That is food’ to N? I think Dretske has to say yes. But the attribution of this content plays no role in explaining the dog’s behavior. That strikes me as a problem for informational semantics. Content ascriptions are supposed to play an explanatory role, after all.

    I think what has gone wrong here is that Dretske’s view takes into account only the relationship between N and the environment when assigning contents. This gives rise to content ascriptions that play no role in explanation. By contrast, a view that takes into account the relationship between N and the downstream states and behaviors would assign a different content to N, which would presumably play a role in the explanation of the dog’s behavior. I think the same goes for the plover and the rat. Do you agree?

    Finally, I worry about your liberal appeals to operant conditioning. Are you using a different notion than Skinner’s? If not, I would have thought that standard antibehaviorist arguments serve to discredit the notion. Am I mistaken?

  54. Eric Thomson

    If by ‘rational’ you mean ‘instrumental rationality’, then we probably aren’t all that far apart. I’m not buying into a pan-Skinnerian view, but I also worry that people don’t realize how powerful operant conditioning is, how rational the results appear, and how tempting it is to anthropomorphize the results (e.g., a rat following a maze).

    Nearly all of the behavior of the rat that seems intelligent has been shaped by operant conditioning. That doesn’t  mean it isn’t intelligent, but it also doesn’t mean it involves the kind of more holistic content-fixation mechanisms you seemed to have in mind in your original posts. It also doesn’t mean such mechanisms are not involved!  I’m frankly not sure. (I’m not familiar with the plover example, incidentally.)

    By ‘conditional reasoning’ I just meant something like the performance of modus ponens operating over explicit propositional contents implemented in the rat’s brain. I think we do not see that. But I could be convinced I am wrong. I take that as a hallmark of human rationality (which is more than just instrumental rationality). I think I was reading too much into your description of rats as rational, though, as it seems you didn’t mean to ascribe anything so high-level to rats.

    You are right that Dretske does often focus exclusively on the sensory side of things, so that all content ultimately comes in through sensory transducers. I find this a problem because it is pretty clear that M1 and the saccade system have motor representations whose role is to carry information about movements of the body (efference copy). But in Chapter 6 of EB he discusses the ‘interactive’ nature of reasons, and the importance of conceptual role in fixing content (though he does say that without the external ‘indicator’ relation you won’t get the project off the ground, and I think that’s right).

    As for the explanatory impotence of these informational states, that is not a charge that can easily be leveled at Dretske. He, more than most, has done a lot of work to give information causal/explanatory bite (e.g., his article Reasons and Causes focuses on this question, and of course his book Explaining Behavior is largely addressed to it).  Even if state N didn’t play a role in explaining the dog’s behavior in a particular case, that doesn’t mean state N wasn’t baptized in sets of cases where it did, such that it typically had this role in the dog’s system (this is the whole thrust of Explaining Behavior, really). We could get into more details about this if you are interested, though I don’t want to write another 1000 words in this comment.

  55. John Dilworth

    Apologies for coming in late to this, but I agree with Gualtiero’s original point that there is “something right about psychosemantics”. In my view an adequate theory must address both issues of input causality and of output causality (behavioral dispositions and the like). Psychosemantics failed because it addressed only input causality issues. If anyone is interested, here are two attempts to address both issues simultaneously (comments welcome):

    https://homepages.wmich.edu/~dilworth/Semantic_Naturalization_Via_Interactive_Perceptual_Causality.pdf

    https://homepages.wmich.edu/~dilworth/More_on_the_Interactive_Indexing_Semantic_Theory.pdf

  56. Hi Ken – Sorry for the delayed reply; I was away camping. You said: “The idea is that if we let Millikan, say, have her way and let the consumer (John) determine the content of what the visual system (the producer) emits, then the representation output by the visual system means reproductive fitness.” But now you’re ignoring the producer! (And I also think you’re misidentifying the consumer, which should be downstream cortical systems, not John himself.)

    Millikan requires that the producer is able to effect a correspondence (according to some mapping rule) between the representation produced and some world affair. The producer – in this case the symmetry detector, some portion of the visual system (probably cortical) – can’t make its produced representations correspond with fitness enhancement in general. It can only produce a correspondence to a specific kind of fitness enhancer, namely symmetrical ones. The correct description of the representations produced by the symmetry detector will depend on the details of how it has, in the past, enhanced fitness – maybe it’s really an animal detector, for instance. According to Millikan, you have to figure out what the most general, proximal explanation is for its selection. This explanation adverts to *both* the producer and the consumer.

    Note that this symmetry detection system is also piggybacking on top of a more general visual cortical system that effects correspondences to spatial layout, which explains how downstream systems (pretty much the rest of the cortex) have been able to enhance fitness.

    “It seems to me that that is just Millikan’s view in “Biosemantics” circa p. 291 that representations are of affordance-like things.” She thinks most simple representations are pushmi-pullyus, and so (partially) affordance-like. See On Clear and Confused Ideas (Cambridge, 2000) and especially Varieties of Meaning (MIT, 2004 – her Nicod lectures) for how non-affordance-like representations come about. It’s important not just to rely on Biosemantics, a very short paper in which she was over-emphasizing the role of the consumer in order to combat a neglect she saw in the literature.

  57. Steven

    A more ‘interactionist’ approach to naturalizing content is attractive for a lot of reasons (and if memory serves, PGS recommends moving in a similar direction in the article mentioned above), but I’m worried about the usual holist downsides of more enriched accounts. There’s a remark from Josiah Royce that illustrates the dangers of sliding too far into conceptual holism/pragmatism; it goes roughly like this: “anyone who would stick his hand in a tiger cage doesn’t understand the meaning of the word ‘tiger’.” It’s that sort of claim that makes purely causal theories of reference fixing more appealing (though the error problem looms as much as ever). I look forward to reading your papers.

  58. kenneth aizawa

    Hi, Dan,

    There are multiple points to which to reply here.  Let me just pick one for the moment.

    It is unclear that the way in which I identify the consumer matters that much.  So, let the visual system be an initial producer, then insert an intermediate consumer of the visual system that takes output from the visual system, then responds, then let John use the output of the intermediate consumer.  Let symmetry as tracked by the visual system be tracked by the intermediate system, and let what is output by the intermediate system again be correlated with fitness (as is symmetry).  So, still John gets to Jane’s reproductive fitness by way of an intermediary.  So what?  John still tracks this other thing, but does not have a representation of Jane’s fitness.  You have to do a lot more than just posit an intermediary consumer/producer to solve the problem.

  59. Hi, Ken. There’s this piece of Millikanian doctrine that may be helpful here: what intentional icons (roughly, contentful states) have as content, according to Millikan, is the *most proximate* Normal explanation for the fact that these icons have done what they are supposed to do in John’s mental economy.

    The reference to most-proximate explanations is there precisely to filter out perfectly good explanations such as “John’s visual mechanism M exists because its ancestors tracked reproductive fitness”. That’s indeed an explanation of the existence of the visual mechanism, but not the most proximate one.

    Manolo

  60. Eric Thomson

    Not sure what the problem is here. My retina functions as a phototransducer. Sure, at some level this increases fitness. But that doesn’t mean its primary function isn’t to phototransduce, that this isn’t its role within the system in which it is embedded. The heart pumps blood; that’s its function. Sure, at some level it also increases fitness, but that sort of misses the causal point: it increases fitness overall because it pumps blood. Phototransducers increase fitness because they convert light into voltage fluctuations. The carburetor in my car ultimately helps my car move, but that would be a myopic view of its function (it makes my car move) because it ignores the local causal role of the carb (where are its inputs coming from, what are its consumers).

    Ironically, Fodor makes this point rather forcefully though I can’t remember which book/essay.

    On the role of outputs/consumers: in the neuro literature people also talk about needing to know the ‘projective field’ of a neuron in addition to its ‘receptive field’ in order to fully characterize its information processing role. This language I think was originally Sejnowski’s. My graduate work addressed the fact that leech sensory neuron spike timing carries a great deal of information about touch location. However, it seems not to be used very much by downstream networks that actually control behavior wrt touch location. I talk about this as the ‘decoding problem’: just because a particular variable (e.g., correlations, spike timing) carries a lot of information about the world, that doesn’t mean the brain actually uses the information encoded in that format to interact with the world. It might use something much simpler like the spike rate or count (because such things can more readily influence downstream neuronal networks).
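
    (Here is a toy version of the decoding problem – made-up numbers, not our actual leech data. Suppose touch location systematically shifts first-spike latency but barely changes spike count; a reader of latency recovers the stimulus, while a downstream “consumer” that only sees counts cannot:

        import random

        def sensory_neuron(location_mm):
            # Toy encoder: touch location sets first-spike latency (ms),
            # while spike count is nearly location-independent.
            latency = 5.0 + 2.0 * location_mm + random.gauss(0, 0.5)
            count = 8 + random.choice([-1, 0, 1])
            return latency, count

        def decode_from_latency(latency):
            # Invert the toy encoding (ignoring the noise).
            return (latency - 5.0) / 2.0

        latency, count = sensory_neuron(4.0)
        print(decode_from_latency(latency))  # close to 4.0 mm
        print(count)                         # ~8 spikes at any location

    The information is “there” in the latency variable; whether the brain uses it depends on whether downstream networks can read that format.)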

  61. This thread-column is getting pretty skinny, so I chose to reply to Ken’s original comment!

    I agree with Manolo that the “most proximate” explanation helps here, and I think Eric’s reply is pointing to the same thing. But I disagree that the “fitness tracking” explanation is remotely a good one. Here’s why.

    Ken says: “and let what is output by the intermediate system again be correlated with fitness (as is symmetry).” But it won’t be so correlated, at least if what you have in mind is something like Pearson’s r. There are vast numbers of local fitness enhancing states of affairs (presence of food, presence of mate, presence of predator etc. etc.) with which the symmetry detector’s representations are not coinstantiated, which means the correlation between the symmetry detector’s representations and fitness is effectively zero. This non-correlation would then form part of an extremely poor explanation for why this producer-consumer system was selected. That’s what I mean by: you have to pay attention to the producer too. To “effect” a correspondence, the producer has to be able to produce correspondences (in this case a correlation) that help explain selection. When the correlation is effectively zero, there’s no explanation – or at least in this case there is obviously a much better “proximate” explanation available!
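
    (To make the “effectively zero” claim vivid, here’s a toy simulation – invented base rates, and crude binary variables where the real case is messier. The detector fires on the rare symmetrical-face encounters, while “some local fitness-enhancing state of affairs obtains” is true on lots of other occasions too:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        # The detector fires in the ~1% of moments involving a symmetrical face:
        detector = (rng.random(n) < 0.01).astype(float)
        # Local fitness enhancers (food, mates, avoided predators, etc.) are
        # present in ~30% of moments, mostly unrelated to symmetry; rig the
        # case so the detector never misfires about fitness:
        fitness = ((rng.random(n) < 0.30) | (detector > 0)).astype(float)

        r = np.corrcoef(detector, fitness)[0, 1]
        print(r)  # small (~0.15), and it shrinks as the base rate of
                  # other fitness enhancers grows

    Even rigging the case so the detector never misfires leaves the correlation with fitness-in-general tiny, which is why that explanation for selection is such a poor one.)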

  62. Wow–a truly excellent discussion, all around.

    From the research of posters here alone, I’d conclude that psychosemantics ain’t dead yet. I wonder if Fodor’s long shadow at Rutgers accounts for the premature reports of psychosemantics’ death? 😉

    I continue to wonder, though, what dire consequences follow if there is some residual indeterminacy in the systems proposed by Dan, Manolo, and others. The amazing precision possible in language may mislead us into looking for such precision in the mind. Perhaps it’s only when a full-blown language system gets layered on top of SINBAD or a homeostatic-property-cluster detector system that things get (the appearance of?) real determinacy.

    Obviously, my Dennettian tendencies are showing, but I’ve always felt that Fodor had tilted the field in this game. Anyway, why think a frog (or a hoverfly!) has states with content so precise? Especially given that it behaves the same regardless (isn’t that–in some sense–the problem?). Or is the problem of accounting for the addition of language to such an imprecise system so daunting that we must locate the precision earlier on?

    Again, thanks y’all–this is the best thread I’ve read in a long time (my own lame posts excluded!).

  63. Hey Josh, this is just a quick thought in response to your last couple posts. in your previous post you suggested we should be open to the view that there’s lots of indeterminacy in *thought* and *language* – i take it that the idea was that maybe we just can’t solve Quine’s problem, or Kripkenstein’s problem (and you mention Fodor’s disjunction problem). i think i agree with that. (btw, field has a cool paper in the vicinity – ‘some thoughts on radical indeterminacy’.) in fact, in note 14 to ‘Putnam’s Paradox’ Lewis says even if we bring in ‘naturalness’ we might not solve Quine’s problem. (for lewis naturalness actually enters into the picture twice – in his humanity constraint on the interpretation of thought and in ruling out deviant grammars for the language, and it’s really hard to see how all the details go.) maybe we should even be open to a dennett-style instrumentalist view of *thought* – although i’m not sure i understand dennett’s view. but – and here’s my small point – lots of us think experiences (in humans and animals) have contents too – and some of us even think they determine phenomenology. and the indeterminacy and instrumentalist views seem to me pretty hard to swallow for the content of experience.

  64. kenneth aizawa

    Hi, Tony,

    Now, in truth, unlike you, I don’t really have a view on what to do about the specificity problem.  I’ve just seen that there is this big theoretical problem for informational semantics and teleosemantics.  I have no idea how to solve it.  And, I don’t see that it needs more folks piling on.  So, I’ve just moved on.  But, I was never really invested in this literature in a big way.

    Ken

  65. kenneth aizawa

    So, let me redirect this.  Here is this case from the psychological literature linking facial symmetry and fitness.  What are the implications of these moves for this account?  Have I misread the account or are the psychologists wrong or something else?

  66. kenneth aizawa

    Well, I think you need to distinguish a research program’s having a serious theoretical problem from its having (lots of) folks working on it.  Ecological psychology seems to me to have serious theoretical problems, but there are plenty of folks working on it.  And, teleosemantics could be like that.  Theoretical problems with undaunted investigators.

    I think it is a bit simplistic to say that Fodor’s stature, or whatever, encouraged a hasty abandonment of teleosemantics.  Lots of different people have had objections about content assignment problems. It’s a little cottage industry in which philosophers of many different stripes can participate. Neander, in “Content for Cognitive Science”, hints at there being a lot of different takes just on the frog’s eye case.  I’m not a Rutgers student.  Nor a Fodor student, Dretske student, Millikan student, or Searle student.  I just read this stuff myself and I can see the problem that many folks have been talking about.

  67. Re Dan’s last point, if we concede that ‘tracking fitness’ amounts to no more than:

    P(This is a good reproductive partner | Her facial traits are symmetrical) > P(This is a good reproductive partner)

    (that is, a conditional probability on the LHS and an unconditional one on the RHS), then facial symmetry does seem to track reproductive fitness; i.e., the inequality seems to hold. To be sure, there are more stringent unpackings of the notion of tracking, but let us concede that this is the relevant one.

    The *proximate explanation* move allows you to keep this liberal notion of tracking while denying that tracking fitness is the relevant explanatory ingredient for content-fixing purposes. That was, at least, Millikan’s original idea.
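
    To see just how liberal this notion of tracking is, consider a toy contingency table (frequencies invented purely for illustration): even a weak cue satisfies the inequality, which is why the proximate-explanation move has to carry the real content-fixing weight.

        # Invented frequencies over 1000 encountered individuals:
        #   fit & symmetric: 60      unfit & symmetric: 140
        #   fit & asymmetric: 90     unfit & asymmetric: 710
        fit_sym, unfit_sym = 60, 140
        fit_asym, unfit_asym = 90, 710
        total = fit_sym + unfit_sym + fit_asym + unfit_asym

        p_fit = (fit_sym + fit_asym) / total               # 0.15
        p_fit_given_sym = fit_sym / (fit_sym + unfit_sym)  # 0.30
        print(p_fit_given_sym > p_fit)  # True: the inequality holds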

  68. David Pereplyotchik

    The remark from Josiah Royce is indeed terribly implausible, but it’s no consequence of holism.

    Anyone who thinks holism entails that sticking your hand in a tiger cage requires not knowing the meaning of the word ‘tiger’ simply hasn’t understood holism.

    Indeed, I’d say that anyone who thinks that there is such a thing as THE meaning of THE word ‘tiger’ has failed to grasp the holist’s position.

    A person might have any number of reasons for sticking their hand in a tiger cage, knowing full well that it’s dangerous. Or they might be ignorant of the danger, but nevertheless fully aware of many other interesting and important things about tigers.

    To the extent that I can rationalize the person’s behavior with respect to the tiger, communicate fluidly with him, and coordinate our common plans in a way that strikes us both as successful, I will take his uses of ‘tiger’ to mean something similar to what mine do.

    Do his uses of ‘tiger’ mean *exactly* the same as mine do? Almost certainly not. Must they, in order for us to successfully communicate and coordinate? Absolutely not! Until and unless the differences between our functional roles for ‘tiger’ hamper our communication and coordination, we simply ignore them.

    That’s no different from ignoring a great many other details about the world that are irrelevant for one’s practical purposes. Is this chair that I’m sitting on “really the same chair” as the one I sat on yesterday? What if a tiny piece has been chipped off, or perhaps just a molecule? If the chair’s precise molecular constitution matters to you, then you’ll say it’s not the same chair. But you’d have to be a pretty unusual person to care about that sort of thing. For most (though not all) intents and purposes, it’s the same chair, and anyone who says otherwise is a bit weird.

    It’s like that with meanings. We tolerate irrelevant differences. When precision matters, we calibrate with one another.

    How relevant is the difference between me and the guy who’s sticking his hand in the tiger cage because he’s ignorant about the dangers that tigers pose? It depends. The difference might cause us some trouble down the line. (“Hey, man, wanna come with me to Vegas, get drunk, and play with tigers?!?”) But it very well might not matter. Suppose that, despite his unusual ignorance, this guy nevertheless knows a great deal about tiger fur and its various uses in exotic cultures. If he lectures on the topic, his claims about such matters might be very informative for me. I won’t take him to mean quite the same thing by ‘tiger’ as I do, of course, but the knowledge I gain from listening to him would be no less substantial. And the difference in meaning is, anyhow, no surprise.

    All in all, I’d say we need to get a bit clearer about what exactly “the usual holist downsides” are before we abandon ship and adopt theories that are potentially more problematic and implausible.

  69. Eric Thomson

    First, don’t even compare this to the Gibsonians. Them’s fighting words. 🙂

    Sexual selection is an interesting wrinkle for these types of issues. I can’t give an a priori response; I’d need to know more about mechanism, development, phylogeny, and the contribution to fitness that results from factors independent of sexual selection. If high symmetry is more fit simply because the other sex likes it, and we have runaway sexual selection, that is different from being more fit for reasons independent of attractiveness to potential mates: e.g., an ability to better metabolize fats.

    Getting back to simpler examples with fewer free parameters: is it important that the frog’s visual system (or the bee’s dance) have unequivocal content (as expressed from the perspective of our relatively rich conceptual repertoire)? Do you have a strong intuition that it is highly specific the way (we think) our linguistic expressions are?  And is there an alternative way to think about the frog’s visual system that helps us understand what it is doing, how accurately it represents the environment, how it uses its visual system to navigate the world, etc?

    In practice, when I’m analyzing data from rat brains, thinking of them as information gatherers and consumers is extremely useful, and frankly it is hard to see how to do away with it without adopting something less helpful (this holds even if information isn’t the whole story, as everyone since Dretske has acknowledged). Thinking of them informationally helps explain their behavioral accuracy and orients our thinking about what features of the neuronal spike trains to look at as potentially causally potent for downstream neurons (e.g., synchrony is highly informative: can it be extracted by the “consumer” networks downstream?).

    Ultimately, the conceptual scaffolding we build for flies, rats, bees, should extend partway to more complex creatures with complex conceptual systems with compositional semantics or whatever. To jettison the approach that clearly works so well in simpler organisms because it can’t immediately explain what we find in humans and human language seems an awful mistake.

  70. Hey Adam.

    Nice pointer on the Lewis. And I admit that I’m not sure I get Dennett’s view, either!

    About experiences, I wonder if the determinacy here need be particularly fine-grained (or fine-grained in the same way as with beliefs). There does seem to be a determinacy to sensations–RED17 as opposed to just RED, for example. I think something along the lines of Austen Clark’s quality space view (1993) can handle that sort of thing. But the idea that phenomenology itself (rather than phenomenology plus cognition) is contentful in some more fine-grained way seems too strong to me (maybe this isn’t what you’re suggesting?). Do my sensations alone determine whether the content of my perception is of a HORSE rather than a COW-ON-A-DARK-NIGHT? Or is it that I have a multiply-interpretable sensory experience that I take to be about HORSES, given my background beliefs, theories, past evidence, etc.? This interpretation is automatic, generally (I assume), but the point is the determinate content may not be in the experience alone.

    (Thanks for your comment–this reply is rather off the cuff, I’m afraid!)

  71. Ken–

    Fair enough. My feeling was that looking at the work by Dan, Manolo, John D., Eric, etc. suggests that there may be ways around the serious theoretical problem. (Plus, I’ve been urging that perhaps the problem may be less serious than it’s widely taken to be, though admittedly that was mainly hand-waving.) In any event, my intuition was in the direction of failure when I came in, and now I’m less sure.

    I regard Fodor’s work (recent stuff on Darwin excluded) to be generally top-shelf–I think his stature is well-deserved, for what that’s worth. But he does have a way of setting the boundaries of a debate, and at times this can distort things, I think. (Is this really “the only game in town”?) I, too, am not a Rutgers, Fodor, Dretske, etc. student, and I certainly didn’t mean to suggest that, by thinking these problems are serious, we’re all Fodorian dupes! 😉 But I have a nagging feeling that the determinacy bar has been raised higher than it needs to be, and a sneaking suspicion that Fodor’s “A Theory of Content” had something to do with it.

    But I take your point–the worries have their standing with or without Fodor. I thought you put them very clearly above.

    Cheers.

  72. Ah! I’ve been confused. I thought your case, Ken, was an example of the general fact that any functional detection system (or any functional trait at all!) ultimately serves to increase the fitness of the species harbouring that detection system… but you were talking about the fitness of the symmetrical mate detected! Sorry. Yes, the symmetry detection system may well track mate fitness, and it may even have the function of doing so. (Though I doubt it – it seems that the utility of symmetry detection goes far beyond mate selection, and so probably has a range of selectional explanations.)

    I take it that your argument goes something like this:
    1) We know introspectively that when we experience visual symmetry (e.g. in search tasks) we’re not representing mate fitness.
    2) Millikan’s account says that the symmetry detector’s representations mean mate fitness.
    3) The symmetry detector’s representations are responsible for our experience of visual symmetry.
    Therefore Millikan’s account can’t be the correct account of the symmetry detector’s content.

    Two things I’d want to say here, even granting (2): First, with respect to (1), I doubt very much that we can know all that much about perceptual content introspectively. Second (and this has come up already), there’s a serious problem with (3). There’s a lot going on in the case, with lots of neural subsystems getting in on the act. For instance, there may be a subcortical symmetry detection system in operation, which increases the salience of symmetrical objects for further processing by cortex. So these subcortical mechanisms may contain representations that mean “fit mate”, but that doesn’t mean the (conscious) cortical representations do. (I note in this connection that symmetry detection seems operative pre-attentively/pre-consciously, e.g. in hemi-neglect patients.) Or even if there is some specialized machinery built into cortex which was selected for fit mate detection, that doesn’t mean the cortical perceptual representation doesn’t *also* mean symmetry, in virtue of other more general cortical functions. Functional overlap is very common biologically.

  73. Steven

    Your response to the Royce provocation is very much on target: successful communication and coordination of common activities provide the key tests for determining whether the content of your ‘tiger’ utterances and thoughts is sufficiently similar to the content of mine. But this account does put some stress on your response to the issue of non-linguistic creatures, since we don’t communicate with them linguistically and rarely coordinate activities with them (the exceptions being very domesticated animals like dogs). Cognitive ethology is devoted to bridging that hermeneutic gap, so there’s clearly a scientific research program working along the lines you propose. I don’t see Dretske’s teleosemantics as in any way opposed to this sort of approach, particularly when it comes to the question of determining whether an organism’s behavior is sufficiently flexible to support attribution of something like beliefs/desires.
    That said, one might argue that while communication and coordination are good tests for content identity/similarity, they do not constitutively determine what the content of a mental representation in fact is. To understand why Dretske would offer a causal (or input-only) answer to the question of what constitutes the intentional content of a mental representation, I think we’d have to consider the fact that, at least at the time, ‘direct reference’ theories were in the ascendancy, and ‘descriptivist’ accounts of reference fixation had fallen from philosophical favor (for ‘holist downsides’, see Kripke).
    Could the purported demise of teleosemantics be just a symptom of a more general dissatisfaction with direct reference theories?

  74. Yup, this is right. According to Millikan, the most proximate Normal explanation is the least detailed explanation that cites structural features of the mechanism and local conditions, adds natural laws, and thereby predicts the performance of the mechanism’s function (thus explaining the selection of the mechanism). For Ken’s case, we’d have to know what downstream system is consuming the representations and what structural features it has, as well as the structural features of the symmetry detector and how it manages to effect a correspondence – and how all this figures in a selectional explanation. This means we don’t know remotely enough about the details for the case to work as a counterexample.

  75. hey josh, thanks for the follow-up. hmm, i didn’t mean to suggest “my sensations alone determine whether the content of my perception is of a HORSE rather than a COW-ON-A-DARK-NIGHT”. i just suggested dennett’s instrumentalist-irrealist view (which you were suggesting as some kind of fall-back if the naturalization program should fail) is not plausible for the content of experience. likewise some radical indeterminacy view doesn’t seem plausible at all in the case of experience. some say that it’s indeterminate whether your ‘rabbit’-thoughts are about rabbits vs . . . all the other usual alternatives – we just have to live with that. but it would seem quite implausible to suggest that, as you look at a tomato, it is indeterminate whether yr exp. represents it as having the property of being round or the property of being round*, where the latter property has the same extension as the former as regards objects we might encounter, but otherwise differs radically in extension. likewise it would be wrong to say that the disjunction problem just can’t be solved in this case. surely my experience doesn’t represent the object as round or elliptical-but-cleverly-made-to-look-round, etc. (here i’m reacting to your remark ‘Maybe a workable natural notion of content does not need to solve the disjunction problem’.) similar remarks apply to the content of color experience. you say you think clark’s view handles this sort of thing. however i don’t remember him giving a general account of how experiences get their contents in the way, say, tye and dretske do. he does offer something that’s sometimes called ‘relationalism’, but (to be honest) i have a hard time getting it, although i find it interesting. (i’m not alone – i agree with much of what is said in the papers by rey, levine and matthen in the philpsy symposium on clark’s book, and in fact think that if anything those papers treat clark’s proposal as more clear than it actually is.) but, in any case, that subject’s too far afield!

  76. kenneth aizawa

    Well, at least you guys know that there is this problem with getting the contents right with teleosemantics.  Gibsonians are in denial about, for just one example, the challenge they face in dealing with visual illusions.  Their line: Illusions? What illusions? 

    I’m fine with working with simpler examples, but I’m indulging Dan’s claim that we should only look at human mental representation.  Maybe you should have a fight with Dan about that.  But, in any event, Neander reports that teleosemanticists are all over the map about the fly/black dot case.  Maybe that’s not simple enough.  Or maybe there’s confusion about the science.  But, it may be a bit optimistic to think that simply knowing more of the science will solve the philosophy problem.

  77. kenneth aizawa

    Hi, Josh,

    In truth, there seem to me to be different types of content assignment problems.  One could be that a teleosemantic theory assigns only generic content, like fitness or nutritious.  Another kind of problem is that a teleosemantic theory assigns the wrong content.  And, these types of problems can overlap, it seems.  Maybe a teleosemantic theory that had the consequence that John is thinking “that’s food” is both too generic and wrong.  Instead, John is thinking “that’s pizza”.  So, even if we conceded that thought is to some degree vague or ambiguous (maybe I really am thinking of undetached rabbit parts), that would not eliminate the putative problems of incorrect content assignments.  That sound right?

  78. kenneth aizawa

    Hi, Dan,

    Yes, the scenario I describe is supposed to work on John having a representation of Jane’s fitness.  It’s supposed to be just like the beaver case, the magnetosome case, and the fly case.  From the secondary lit, it seems that it is also like Pietroski’s snorf case (only I’m thinking it is less easy to bite the bullet about what is represented).  Or is it the kimu that represent the snorfs or red?

    Anyway, I think you give an uncharitable interpretation of what I would put in premise 1).  I need only that John does not represent Jane’s fitness.  Dump all the introspective stuff.  So, your objection to introspection is orthogonal to what I need.

    Given all the unknown details of the symmetry/fitness case, you want to suspend judgment on it.  So, likewise, you will suspend judgment on Millikan’s beaver tail splashing case, right?

    I’m not conceding that all these extra details are going to help you out.  The “nested” or “layered” consumers story I’ve told still seems ok to me.

  79. kenneth aizawa

    “it seems that the utility of symmetry detection goes far beyond mate selection,”

    Well, technically, it’s probably facial symmetry, which may not go as far beyond mate selection as mere symmetry does. And, for all I know, it could be an even more restrictive kind of symmetry, e.g. distance of the eyes from the midline of the nose.

  80. Eric Thomson

    Kenneth said:
    But, it may be a bit optimistic to think that simply knowing more of the science will solve the philosophy problem.

    I think the science is likely necessary, not sufficient, for addressing the philosophical questions. How much science is necessary?  Do we  have enough science done on the frog visual system for the philosophers to fight about that in a fruitful way, or at least to fruitfully argue about the possibilities available? E.g., do frog visual systems have dedicated object-tracking systems? Or can we even talk about object representation in the frog/toad visual system?

    I’m no expert on frog/toad vision so should be cautious: based on every other system I have studied, it is likely we actually know very little of the details that some philosophers might think are relevant. Maybe aplysia would be a better model system for philosophers to fight about. Or the leech. Or the honey bee.

    This is the problem for philosophers: they want the answers before the science is close to being done. You know what you end up with when you pull a cake out of the oven halfway through baking: a half-baked cake. I’m not saying stop doing philosophy and become scientists (unless you are an undergrad trying to decide on a career), but just keep in mind that you are like philosophers of space and time before Newton (much less Einstein). You have some great clever arguments and ideas, but are in such an empirically impoverished state that most of what you say will end up looking either antiquated or lucky. That’s the risk of becoming a philosopher.

    One mistake, even in the debates among neuroscientists, is thinking that there is a single answer that applies to every system. It’s helpful to take a step back every now and then and think about things from the perspective of the brain stem (if you study cortex), or of the bee (if you study vertebrates), or the rat (if you study invertebrates).

    Another mistake, one I am often guilty of, is thinking we shouldn’t think about complex systems before we’ve done the work on simpler systems. That also is false. We need to study all systems in parallel, and hope some of the ideas work together.

    A final, somewhat unrelated concern that philosophers specifically should probably worry about: when we (physiologists) talk about the function of X, we basically never look into its phylogeny. We look into its role in the system for carrying out some task we think is important. E.g., visual perception, digestion, getting oxygen here or there. For all we care, there is no evolution when we ascribe function to parts. Perhaps we are just lazy or naive, but my hunch is that we are not. In theoretical discussions of function-ascription, I rarely (if ever) see mention of this facet of the practice of your workaday biologist. Basically, we act more like Cummins than Wright/Millikan. (Note I’m not assuming there is a univocal way to assign function; I think I’m a pluralist.)

  81. djc

    Sorry I’m late to this thread. My recollection from post-poll discussion at the Institute was that those who voted “Searle” (over choices including Fodor, Dennett, Dretske, Millikan, and maybe one or two others) were in effect voting for “give consciousness a central role in a theory of intentionality”. Of course that sentiment mirrors the guiding idea of the recently popular phenomenal intentionality research program, which some see as a successor to the old naturalization program (see e.g. https://uriahkriegel.com/downloads/PIRP.pdf).

    On the sociology, for what it is worth: my sense is that the view attributed to the three “Rutgers people” in the initial post captures the received wisdom in many parts of the profession, at least concerning causal and teleological approaches to psychosemantics. Of course received wisdom is often wrong; and even the received wisdom here allows that some future approach to psychosemantics might succeed. I don’t think there’s any received wisdom as to what is missing, but certainly one strand of thinking (with which I’m sympathetic) is that any successful psychosemantics, even externalist psychosemantics, must put more weight than these programs on internal factors such as inferential role (and perhaps phenomenology).

    In any case I think there’s a sense in the air that perhaps the time is approaching to try again, perhaps looking in more depth at alternative approaches or combined approaches. Ted Sider and I ran a seminar last fall at NYU on “The Grounds of Intentionality”, mainly focusing on such approaches, e.g. inferentialist, phenomenological, normative/social, deflationary, and reference-magnet approaches, as well as a little on teleological approaches.

  82. Adam–

    I tend to agree with you on instrumentalism/irrealism about sensations. I do think there’s something to Clark’s view here (and related sorts of relational quality space views) but that’s a longer story, obviously. In any event, it’s more realist than Dennett (if it works!).

    About tomato experiences–why think a frog’s tomato experience represents in a way that makes these distinctions?

    In us, it might be that we interpret our experience as picking out the one rather than all the alternatives because our language allows us to carve things up in this fine-grained way and then (implicitly) theorize about what is indicated.

    In any event, don’t things *look* the same in cases where it’s a tomato and where it’s one of the funky alternatives? Tomatoes look just like (tomatoes-or-X), given that the first disjunct is present. (Or am I completely missing the point here?) Anyway, I don’t see why we need to locate the determinacy in the experience itself.

  83. Ken–

    That seems right to me. And that (it seems to me) leaves it open that teleosemantics might solve the problems of incorrect content you mention while leaving the residual vagueness or ambiguity–that is, without solving Quine’s problem.

    Thanks–great posts throughout this discussion!

  84. Hi Ken,

    Despite being Canadian, I don’t know that much about beavers – but I would imagine “we” have enough details to be sure that Millikan handles this case fine (if it matters). Remember that I’m agreeing with Manolo that, according to Millikan, we need the most proximate explanation for selection of the producer-consumer system (see my reply to Manolo above). This is also why the details matter. You haven’t addressed the point about proximate explanation.

    About your new premise (1): how would you defend it? By appeal to what the psychologists say? Why think they know anything about the semantics, rather than the information flow?

    (Yeah, it’s the kimu that represents either absence-of-snorfs, or red, or….)

  85. Hi Josh, thanks for the follow-up. you ask ‘why think a frog’s tomato experience represents in a way that makes these distinctions?’. but i wasn’t talking about frogs (whose mental life i admit to not knowing much about). i said the example was about my own experience of a tomato. my small point was that the dennett view and the ‘we just can’t solve the problems [e. g. the disjunction problem] but that’s ok’ view you mentioned earlier on seem especially implausible in this case.

    you say ‘Tomatoes look just like (tomatoes-or-X), given that the first disjunct is present’. i guess i don’t follow the argument here (my fault). for one thing, even when this is a tomato, ‘this looks like a tomato or a wallet’ (e. g.) strikes me as false.

    anyhow, i don’t think experiences have contents like *there’s a tomato* at all – i take a thin view as against a thick view. my point was that my experience represents the tomato as *round* – and not any of the deviant alternatives. i don’t know how to prove that. it just seems obvious. and i don’t see why one would doubt it. (and i don’t see any reason to say i think it is true because i have language; or that the experience itself doesn’t determinately possess the relevant content, just the language i use to describe it.)

  86. kenneth aizawa

    Ok.  I would have thought that the conclusion to draw about Millikan’s silence on neuroscientific details is that they don’t matter.  But, you apparently double down and say they do.  Ok.

    I think that the beaver sort of case is actually just one of a myriad in the animal communication literature, and the kinds of scientists who do the animal communication stuff are not the kinds of scientists who do neuroscience.  There seem to be tons of animal communication studies that are never followed up with neuroscience.  There are studies of birds, primates, frogs, bats, etc., etc.  These fields are rife with talk of information and meaning, and that talk seems not to turn on neuroscientific details.  So, you are proposing to correct the scientific practice, right?

    So, premise 1) has always been something like: John does not represent Jane’s reproductive fitness.

    You want to deny this?  That seems to me to be a pretty safe premise.  But, ok.  Why isn’t it pretty plausible to say that reproductive fitness is a scientific concept that John does not have?  John, let’s say, went to school in Louisiana.

  87. kenneth aizawa

    No, I’m pretty sure that they are an interaction product, but that was not a “Biosemantics” condition.

    I suppose that you must be right that this is a new condition, but I would guess that it’s going to be pretty hard to get much of anything to mean anything by these conditions.  One might not be able to generate counterexamples, but then again one might not get these conditions to apply very widely.  You might block counterexamples at the price of vacuity.

    But, also, these sorts of conditions seem not to be in play in, for example, the animal communication literature.

  88. Eric Thomson

    The neuro details are sometimes crucial, sometimes not. If you are going for a generic categorical statement, that’s a mistake.

    If we are talking about the function of some neuronal region, then the neuronal details will tend to matter more. When discussing birdsong or bee dance, they should matter less, because we can do behavioral experiments: the function can be played with (at least partly) without going into brains. It’s like understanding the function of the bird wing without having to do much, if any, neuroscience. Fine by me. Same with the function of bile.

    But in your case we are talking about mechanisms for “detecting” symmetry. That’s a neural mechanism carrying it out, and for all we know it is a very high-level visual process. Is it produced by a dedicated circuit, or parasitic on more general pattern recognition machinery? Is this purely a case of sexual selection, or is symmetry a fitness-producer for other reasons? Does it evoke a more general ‘symmetry-is-aesthetically-pleasing’ response that we have even for symmetrical paintings? Is it learned or innate? These are the sorts of details for this case that are important. And because there are so many free parameters it seems a bad case to push so hard. There must be a simpler case to make your point where we wouldn’t need to get bogged down in all these details. I’m frankly not even sure what your point is, because my mind gets caught up in the number of relevant branches in the tree of possibilities with this phenomenon, branches that should be trimmed with data.

    It’s not a new condition; it’s just that we have to identify the right producers and consumers, whose cooperation must be subject to selection. If the combination of symmetry detection plus face detection didn’t undergo *independent* selection for picking out fit mates (and their interaction-type isn’t the product of some *other* mechanism that underwent selection for picking out fit mates), the combination is not going to generate any new “fit mate” contents. Rather, the relevant contents will just be determined by the standard representations of the symmetry detector and the face detector, taken separately.

    On the other hand, if the combination *has* been selected for in the right way – maybe there’s some extra neural machinery connecting the two detectors that increases their efficiency at detecting symmetrical faces, and this extra machinery has been selected for because the combined representation picks out fit mates – then you might have a case that there’s an extra “fit mate” content generated. But even this would just be an extra layer of content over top of the standard face and symmetry detector contents.

  90. Millikan certainly thinks that neuro details can matter. But like many psychologists and ethologists, she often assumes that there will be no surprises at the neuro level, after the psychology and ethology have been done, i.e. that there will be a neat mapping between the neuro and the theoretically posited independently selected-for mechanisms. (Also: what Eric said.)

    Generally when ethologists are talking about meaning, information and the like, they’re not intending that the theoretical notion implied is one that will apply also to human beliefs, desires etc. Whereas Millikan is trying to show how human mental representations belong to a very broad type that goes from very simple organisms to human language. Part of her aim is to increase understanding by demonstrating this theoretical unity – with much attention, of course, devoted to the differences within the broad type. She’s not trying to “correct” scientific practice; the project is different. (There will be an exchange with Michael Rescorla on this topic in Millikan & her Critics.)

    Myself, my main concern is with human mental representation, which I attacked via the neuroscientific route. It so happens that my model fits fairly well with Millikan’s broad theoretical view (not by design, I might add), although I’m mostly concerned with the distinguishing details of cortical representation. I say this because, while I think Millikan is right about a lot of things, I’m not personally invested in her huge theoretical project.

    Re: your version of premise 1: Yeah, Louisiana John doesn’t represent reproductive fitness at a conceptual level. But the psychologists are talking about a subpersonal perceptual mechanism, and concepts don’t automatically inherit the contents of the perceptual mechanisms that feed into them. You’d have to change premise (3) to: “The facial symmetry detector’s representations are responsible for the content of John’s conceptual response to the visual experience of facial symmetry,” which is clearly unacceptable.

    This is now long enough (and skinny enough in threaded view) that I think I can be confident that there are no more than 2 people reading it. Alas.

    I think I’m with you here, Josh – at least in being skeptical about our ability to know the contents of our perceptual states. I agree, Adam, that we should reject (most of?) the so-far mentioned deviant alternatives, but when you say “it just seems obvious” that sounds suspiciously like you’re saying it’s *introspectively* obvious. Whereas I would suggest that, insofar as it seems obvious, this is motivated more by an impetus towards theoretical simplicity or something than by introspection. But I also think it could be overturned.

    As a milder example, I think there are some nice cases from the ecological psychologists suggesting that the contents of our perceptual experiences can be surprising. One example: I could easily be convinced that when we lift an object, our somatosensory/proprioceptive experience doesn’t represent its *weight*, but rather its wieldability: https://www.springerlink.com/content/x6j689x04k20u40j/ . Similarly, I could be persuaded that our visual experiences don’t represent roundness.

    In addition, I agree with Josh that it’s hard to tell, within the many overlapping representational contents that may be present within a conscious experience, which of those are contributed by perception and which by more cognitive aspects.

    So while I obviously agree that a psychosemantics is greatly to be desired, I’m not sure conscious perception provides the strongest motivation for the project.

  92. Hey Adam.

    Sorry–the frog thing was referring back to the more general discussion. It’s not directly relevant to your point.

    About *tomato* and *tomato-or-wallet*: isn’t the satisfaction condition (or whatever) of the second that either a tomato or a wallet is present? So when a tomato is present, both are satisfied. But what does that look like? Well, that a tomato is present, in both cases. Or so I was thinking. Now, we believe that the content is the non-disjunctive one, but is that present in the experience itself? Is it something phenomenological? I don’t see that (barring some sort of cognitive phenomenology, if there is any). I take it something upstream like inferential role does the disambiguating.

    About the thin view (speculative and impressionistic–my apologies): On Clark’s view (following Sellars), we present subjects with perceptual stimuli and map out the space of their ‘just noticeable differences.’ On this basis, we posit an internal quality space in subjects’ minds with a homomorphic structure (this minimally explains how they can detect the JNDs). This gives us a derivative notion of particular sensory qualities that is thin–*round* is the location in the mental visual quality space corresponding to the location of round in the perceptual (worldly) quality space. (NB: This won’t give us qualia, if that requires intrinsic qualities. But I don’t see how that matters here.)
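
    (Very roughly, and with notation of my own rather than Clark’s: let $S$ be the set of stimuli, and write $s \sim s'$ when $s$ and $s'$ fall within a JND of each other. The posit is then an internal quality space $Q$ and a map $f : S \to Q$ such that, setting aside complications like the non-transitivity of $\sim$, we have $s \sim s'$ iff $d(f(s), f(s')) < \epsilon$ for some fixed threshold $\epsilon$. On this picture a quality like *round* is just a region of $Q$, and nothing in the discrimination data fixes anything more determinate than that.)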

    Now (assuming that made any sense at all ;)), an experience might represent *round* in the quality space way, but I’m not sure that rules out some of these logical or sub-extensional deviants. The best the quality space thing can do is what shows up in JNDs, and I don’t see how this rules out the weird stuff because I can’t see how to test for those alternatives. But I’m not sure there’s any better behavioral test for the contents of experience.

    But then I wonder why we think the system is representing in a more fine-grained way–is it just that it seems that way from the first-person perspective? Maybe here’s a place to doubt the accuracy of the first-person perspective.

    The upshot is that the (sensory) content of experience may be realist, determinate, and thin down to the level provided by quality spaces and JNDs. Below that, I question the need for more determinate content. But why do we think that the content is more precise than what is required to play the right perceptual role (JNDs)? My Dennettian hunch: language comes in and allows us to make more precise distinctions, going beyond what is in the experience itself. (I guess all I’m saying that’s Dennettian at this point is that linguistically-infused cognition distorts our view of the precision of lower-level states and systems–no irrealism required, as of yet.)

    All right, this is way too long and loose–feel free to back away slowly! And thanks again for the helpful discussion.

    Cheers

  93. Anonymous

    Thanks. Josh: i think we agree (modulo ‘quality space theory’). you ask ‘why do we think that the content is more precise than what is required to play the right perceptual role (JNDs)?’. i’m not sure i get the question, but i’m also not sure i said it is more precise than that. i just said: it’s very hard to deny that, in some sense, the experience represents *round* – not some disjunction, or Quinean deviant thing. And with that you seem to agree.

    Dan: thanks for chiming in. yes, i think introspection provides some prima facie justification in some cases. but i think you might be reading more into my comments than is there. i didn’t say introspection can’t be overturned (i just said i see no reason to think it is in the tomato case). and i didn’t say introspection can tell us, in every case, what experience, as opposed to belief, is representing. of course that can be a difficult issue. and i didn’t say there cannot be surprising and interesting discoveries about what the contents of experience are. (otherwise i wouldn’t have written papers on this!)

  94. I think that’s right, Anonymous Adam. We’re pretty much on the same page here. Thanks again for the great discussion!

    Dan–Cool stuff! Very well put.

    (I look forward to the Millikan and her Critics thing–I saw Ken Williford recently and he was talking it up as well. Cheers!)

    “But, also, these sorts of conditions seem not to be in play in, for example, the animal communication literature.”

    About that, I am not sure of the weight one should give to the animal communication literature when it comes to content attributions to animal mental states. Most of the time researchers just seem to be assuming the commonsense content attribution — which is ok, of course, but would hardly count as scientific evidence against a certain psychosemantic theory.

    Some other times they seem to be doing psychosemantics themselves, as, e.g., when they propose necessary and sufficient conditions for the existence of individual or kin recognition. I’m thinking of papers such as Steiger et al.’s (2008) “‘True’ and ‘untrue’ individual recognition” in Trends Ecol. Evol. and the subsequent exchange with Tibbetts et al. In these papers, the necessary and sufficient conditions for individual recognition (IR) (in philosophese: for the postulation of mental states with individual-involving contents) are taken to be the presence of specificity in (one or more of) the cue, template and response that the animal exploits in dealing with a certain individual. This focus on discriminability or specificity of response is natural because it leads to testable content attributions, and these surely track the presence of IR in most cases. But they can’t be what *makes* a state individual-involving, as a number of relatively easy counterexamples show. E.g., researchers record the call of baby seals and play it back to adult individuals to observe their responses; this already shows that the probability of the presence of the cue conditional on the absence of the individual is, in many situations in which we attribute IR to a receiver, not zero; that is, cues in IR settings are non-specific.
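
    To put that last point schematically (this is my gloss, not Steiger et al.’s own formulation): specificity of the cue would require something like $P(\text{cue present} \mid \text{individual absent}) \approx 0$. The playback experiments show that this probability is clearly non-zero in settings where we still attribute IR (the recorded call is present while the pup is absent), so specificity, so construed, cannot be what makes a state individual-involving.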

    What I am suggesting is that the relation of teleosemantics to ethology is not as analogous to, say, the relation of the philosophy of time to fundamental physics as one may think.
