In our previous instalment of this blog series, we alluded to a subtle but pivotal adjustment that our Radically Enactive account of Cognition, REC, recommends making to what, in analytic circles, is the standard conception of minds. The recommendation is that we conceive of the intentional and phenomenal aspects of cognition – the signature marks of the mental – as fundamentally contentless in character. More pointedly, REC proposes that representational content is not an inherent feature of the intentional or the phenomenal.
Focusing on intentionality, REC holds that it is variegated – that it comes in a variety of forms and that in its more primitive forms it lacks representational content, despite targeting certain worldly offerings. Of course, to maintain that intentionality is not always and everywhere permeated with representational content is not to deny that it can be. Thus, in proposing that basic forms of Ur-intentionality are contentless, REC only challenges those theories which assume that representational content is either a, or the, defining feature of intentionality. Hence, contra Searle (1983), REC holds that we should not model the properties of all varieties of intentional attitudes on the properties of contentful linguistic utterances and speech acts.
Looking at the larger history of attempts to explicate intentionality, it is possible to disentangle strands in Brentano’s classical formulation that, we contend, are unhelpfully run together by many philosophers and which fuel their intuitions that there can be no intentionality without representational content. Yet, if we relax such intuitions, there is ample conceptual room for thinking that there can be forms of intentionality which are nonrepresentational in character (Hutto and Myin 2017, Ch. 5). We concur with Muller’s (2014) assessment that we would benefit from a more “nuanced understanding of intentionality” (p. 170).
Yet not only is it possible to give a naturalistic account of contentless Ur-intentionality, naturalistic considerations push us in this direction anyway. As things stand, teleosemantic theories of representational content are the top choice for anyone who hopes to naturalize intentionality. Such theories seek to account for representational contents and the special norms they require using nothing more than an appeal to natural selection and evolutionary biology (Millikan 1984; 2005). Such theories of content are deemed by many to be ‘the most promising’ (Artiga 2011, 181), and for some clearly ‘the best’, if not the ‘only’, real options for naturalists (Rosenberg 2013, p. 4).
We concur. Still, REC recommends adopting a significantly modified teleosemantics: one stripped of its semantic ambitions, viz., the aspiration to provide a representational theory of content. Adjusted in this way, teleosemantics converts into what we dub teleosemiotics – it yields a theory of Ur-intentionality, one suited to doing a different job than that for which teleosemantics was designed, viz., accounting for the most basic, non-semantic forms of world-involving cognition that, we contend, lie at the roots of cognition.
Why tamper with teleosemantic theories? Notoriously, despite their many attractive features, classic teleosemantic theories encounter a serious problem when it comes to assigning determinate contents to mental representations. For how, invoking the favorite example, is it possible to determine the content of the mental representations that are purportedly involved in a frog’s tongue-snapping behavior if we rely only on selectionist explanations? Is the frog representing the object of its interest as a fly, as a small moving black dot, as a nutritious object, or as food (Fodor 1990, 2008)? In a seminal article, Goode and Griffiths (1995) argue that these descriptions are complementary, not competing, and that, as such, if we were to use them in combination to assign a content to the frog’s representation it would be an inclusive list of all of the following: ‘small, dark and moving, nutritious, fitness enhancing, fly’.
Yet even if it proves possible to assign contents to mental representations in a principled manner in this way, the etiological character of standard teleological theories that helps to secure this result ensures that content is non-causal. That is a feature, not a bug, of teleological theories so long as we look to content attributions only for answers to why-questions and not how-questions.
There is a more fundamental concern. For the mere fact that basic forms of cognitive activity can be ascribed contents on the basis of evolutionary considerations does not make it the case that such ascriptions are explanatorily necessary or warranted. The sort of targeted cognitive activity that is merely sensitive to structural covariances in the environment because of having a particular selectionist history can be wholly accounted for without assuming such cognition is contentful. Biological normativity does not equate to norms of truth or accuracy. On this analysis, being sensitive to and targeting specific worldly offerings may be a necessary platform for acquiring contentful attitudes, but it does not suffice for having such attitudes.
All of the concerns about teleosemantics vanish entirely if we think of basic intentionality as having targets but not as specifying those targets in contentful ways such that the question of their truth or accuracy could arise. In the end, REC proposes a position akin to the one Sachs (2012) advocates. Sachs maintains that it is only with respect to special forms of non-basic cognition that it makes sense to talk of content and appropriate to “distinguish between sense and reference” (Sachs 2012, p. 145).
REC appeals to teleological theories in order to explain how basic forms of cognition which target worldly offerings could have arisen naturally. It assumes that the non-accidental success of such cognitive activity is explained by certain correspondences holding between states of the organism and states of the environment. Yet it agrees with critics of classic teleosemantics that there is no evident explanatory advantage or obvious justification for thinking that biologically basic forms of cognitive activity require states of mind with representational content (Burge 2010; Rescorla 2012; Price 2013).
Is REC’s account of Ur-intentionality too thin and colourless? The REC account of Ur-intentionality leaves room for worldly offerings to be experienced under aspects – the things creatures engage with can look or feel a certain way. But, we hold, such phenomenally charged ways of experiencing things do not entail representational contents (see Hutto 2006 and Hutto and Myin 2013, Ch. 8 for more detail on REC’s take on phenomenality; see also Raleigh 2013).
Finally, it may be thought that Ur-intentionality is thin in another sense – namely, that if Ur-intentionality is contentless then its alleged intentional properties reduce to nothing more than the properties of a stimulus-response mechanism. But this disastrous consequence would only follow if Intellectualism and Mechanism were the only live options. Call this Fulda’s Fork. As Fulda (2017) nicely demonstrates, these are not our only theoretical choices – there is space between these poles for living, adaptive agents that are not slavish responders and yet which get by without representing the worldly offerings with which they actively engage.
References:
Artiga, M. 2011. On Several Misuses of Sober’s Selection for/Selection of Distinction. Topoi. 30:181–193
Burge, T. 2010. The Origins of Objectivity. Oxford University Press.
Fodor, J. A. 1990. A Theory of Content and Other Essays. MIT Press.
Fodor, J. A. 2008. Against Darwinism. Mind & Language 23:1–24.
Fulda, F.C. 2017. Natural agency: The case of bacterial cognition. Journal of the American Philosophical Association. 3:1. 69–90
Goode, R. and Griffiths, P. E. 1995. The misuse of Sober’s selection of/selection for distinction. Biology and Philosophy. 10:99–108.
Hutto, D. D. 2006. Unprincipled engagements: Emotional experience, expression and response. In Consciousness and Emotion: Special Issue on Radical Enactivism, Menary, R. (ed). Amsterdam: John Benjamins.
Hutto, D. D. and Myin, E. 2013. Radicalizing enactivism: Basic minds without content. Cambridge, MA: MIT Press.
Hutto, D. D. and Myin, E. 2017. Evolving enactivism: Basic minds meet content. Cambridge, MA: MIT Press.
Millikan, R. G. 1984. Language, Thought, and Other Biological Categories. MIT Press.
Millikan, R. G. 2005. Language: A Biological Model. Oxford University Press.
Muller, H. D. 2014. Naturalism and intentionality. In Contemporary Philosophical Naturalism and Its Implications, ed. B. Bashour and H. D. Muller. London: Routledge.
Price, H. 2013. Expressivism, Pragmatism and Representationalism. Cambridge University Press.
Raleigh, T. 2013. Phenomenology without Representation. European Journal of Philosophy. 23:4. 1209–1237.
Rescorla, M. 2012. Millikan on honeybee navigation and communication. In Millikan and Her Critics, ed. D. Ryder, J. Kingsbury, and K. Williford. Oxford: Blackwell-Wiley.
Rosenberg, A. 2013. How Jerry Fodor slid down the slippery slope to anti-Darwinism, and how we can avoid the same fate. European Journal for Philosophy of Science. 3:1–17.
Sachs, C. B. 2012. Resisting the Disenchantment of Nature: McDowell and the Question of Animal Minds. Inquiry. 55:2. 131–147.
Searle, J. 1983. Intentionality: An Essay in the Philosophy of Mind. Cambridge University Press.
seems to be in the air:
https://rsbakker.wordpress.com/2018/01/23/flies-frogs-and-fishhooks/
Thanks for writing up this summary. I am worried you are trying to throw out something that is as innocuous as it is ubiquitous and useful in neuroscience/cognitive science. It seems there is a lot of explanatory power you lose when you jettison content. Neander’s recent book does a great job explicating this. Not to put too fine a point on it: to my ear as a neuroscientist, it is like someone coming and telling me not to call a map of New York City a representation. Of *course* I will call it a representation! It is an information-rich map of the world that I use to steer about my environment, and it *can be more or less accurate*.
Let’s consider birdsong learning. A fledgling songbird spends time with an adult bird that sings a particular song (the ‘tutor’ song). The fledgling can then go months without vocalizing after this sensory learning period. Then, during the sensorimotor learning period, it will start to sing. It will be horrible at first, sort of like a baby babbling. However, its song slowly comes to match the tutor song, via an error-guided feedback process. The birds listen to how closely their self-generated song matches the tutor song, and slowly come to match the two. If they are deafened, they never produce the correct song.
How do you tell a story about this without content? There is a memory of a specific song stored in the brain of the animal (it is not a random song, after all: it is specific). Neuroscientists who study this talk about the representation of the song, or the template that is stored. The *content* of this representation is the sensory information about the tutor song. They are able to learn because the brain is calculating an error signal, or difference between the currently produced song and the stored representation, in real time.
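To make the template/error-signal idea concrete, here is a toy sketch (purely illustrative, with made-up numbers, and with no claim to model any actual neural circuit): the stored template is just a list of pitches, and learning repeatedly nudges the produced song toward it, using the mismatch between the two as the error signal.

```python
import random

# Toy illustration only: the stored sensory "template" is a list of pitches,
# and an error-guided update nudges the produced song toward it.
tutor_template = [440.0, 494.0, 523.0, 587.0]              # tutor song (Hz)
song = [random.uniform(300, 700) for _ in tutor_template]  # initial "babbling"

def error_signal(produced, template):
    """Per-note mismatch between the current song and the stored template."""
    return [t - p for p, t in zip(produced, template)]

learning_rate = 0.2
for _ in range(50):
    err = error_signal(song, tutor_template)
    # Auditory feedback drives the update; without it (deafening), the error
    # signal is unavailable and the song never converges on the template.
    song = [p + learning_rate * e for p, e in zip(song, err)]

residual = sum(abs(e) for e in error_signal(song, tutor_template))
```

The point of the caricature is just that the stored template does real explanatory work: it is the thing against which the error is computed.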
This is not some weird example, it is a general problem for recognition memory (like when your dog recognizes you, versus the postal worker walking by).
Similarly, with working memory tasks: when I am remembering that a red square was shown, and need to maintain this information in order to push the right button, there is stimulus-specific information maintained in neuronal activation patterns, and this is required to solve the task. If the wrong information were there, then you would press the wrong button. These information-bearing representational states explain my ability to perform the task.
The notion of representation at play in examples like these, which could be multiplied, seems explanatorily potent, useful, ubiquitous, and not harmful (i.e., nobody is committing to propositional attitudes or a language of thought or anything weird like that).
I agree that this sense of representation has limitations (e.g., it leaves you with coextensional contents like square/four-sided polygon). And if you wanted to argue that these limitations would be overcome with semantic resources provided by social discursive practices/norms/inference or whatever, then that is certainly a conversation worth having. But to suggest that there is literally *zero* content, zero possibility for error, etc., seems a mistake.
where and how is this “stored” in the brain? isn’t it just that certain tendencies/response-abilities have been ‘wired’ into the neuroanatomy? where/what is the map/song-book/tape-loop?
dmf this is a great question: disentangling motor and sensory storage is not easy, and is something that experimentalists spend a lot of time on.
For instance in working memory tasks there are versions where the subject doesn’t know the right motor response until after the delay period has ended, so the only specific information that has to be stored is the sensory information.
With birdsong learning, this is still extremely active as a research topic, but the best evidence I have seen is that they have stored a sensory template. Their motor skills are horrible at first and only very slowly come to match what seems to be a much more accurate sensory representation, learned via the auditory system, which is used as the basis for the error signal. The fact that they cannot learn to sing the tutor song when deafened suggests the error signal is calculated from sensory, not motor (efference copy) comparisons.
More directly, there are neurons that respond to the playback of the tutor song in HVC, when the bird is not vocalizing (Richard Mooney’s work), and this region is key for learning (for instance, see Mooney’s paper ‘Auditory representation of the vocal repertoire in a songbird with multiple song types’).
So, while it seems mostly sensory in nature, let’s say it was a motor engram that was tucked away in the bird’s brain during those weeks between the sensory learning and sensorimotor learning periods: I would say in that case we have a motor representation. That is a related but different topic, and we could talk about that, but I don’t want to keep going on too long.
(Note I am not claiming that memory is sufficient or necessary for representation: I think we need to look on a case-by-case basis.)
thanks for the reply, but I don’t understand what ‘engram’ is here. I can, say, show you where the body stores fats and how in those fats there might be stores of certain foreign chemicals, but when I look at the neuroanatomy in cases of, say, seeing/recognizing faces or the like, is there some thing/material to be found, or do we just have new configurations/connections? This seems like reification to me if we can’t point to where some thing is and what it is composed of.
dmf: with working memory there are neurons with sustained activity during the task, tuned to specific sensory parameters and not motor parameters. This is basically a map of the recent past used to guide behavior in the task. I see the evidence as fairly direct for working memory.
In birdsong there are nuclei whose neurons acquire and maintain the ability to respond to the tutor song during the sensory learning phase (e.g., ‘Auditory experience-dependent cortical circuit shaping for memory formation in bird song learning’). This is before the animals sing, or have any ability to sing well: their perceptual learning drastically outstrips their motor abilities. Again, this suggests they have learned a sensory template, not a detailed motor program. The sensorimotor phase of learning clearly is a… sensorimotor phenomenon, where they try out different songs as their motor system (e.g., basal ganglia) finally gets trained up, via reinforcement learning mechanisms (using the internal sensory template as a guide), to hit the right notes.
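That reinforcement-learning picture can likewise be caricatured in a few lines (a deliberately crude sketch, not biology): motor exploration proposes random variants of the song, and a variant is kept only when comparison with the internal sensory template says it is an improvement.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

template = [440.0, 494.0, 523.0, 587.0]              # internal sensory template (Hz)
song = [random.uniform(300, 700) for _ in template]  # untrained motor output

def mismatch(produced):
    """Total distance between the produced song and the stored template."""
    return sum(abs(t - p) for t, p in zip(template, produced))

initial = mismatch(song)
for _ in range(2000):
    i = random.randrange(len(song))
    trial = list(song)
    trial[i] += random.gauss(0.0, 5.0)    # motor exploration: random variation
    if mismatch(trial) < mismatch(song):  # "reward": closer to the template
        song = trial                      # reinforce the better variant
```

Even in this crude form, the template plays the role of the critic: without it there is nothing to score the variants against, which is why it seems to carry the explanatory weight.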
The mechanism seems to be some combination of synaptic weight changes and changes in density of dendritic spines. For instance, if you block synaptic plasticity specifically during the critical period for songbird learning, then songbird learning is disrupted (e.g., ‘Developmentally Restricted Synaptic Plasticity in a Songbird Nucleus Required for Song Learning’).
I would actually disagree that we need to know the exact mechanism to say that there is information storage/retrieval. There are young kids who know that their computer stores their images in the cloud, and for all they know, it is done in a literal cloud.
The question of the justification of representational claims is a complex one. Obviously, directly observing the mechanisms is the gold standard, but it isn’t always necessary.
This is Erik Myin speaking. Thanks to all for the comments, on this and the other blogs. We hope to get back with some more reactions asap. For starters, a few points about the birdsong discussion between Eric Thomson and dmf. Clearly we share the perspective taken by dmf. There are structural changes in the learner bird, causally effected by the interactions with the tutor in the perceptual stage, and having effects on the learning in the sensorimotor stage, but nothing in telling this story requires contentful representation or storage and retrieval of information. Systematic covariation or structural correspondence between the environment and features of the learner are all one has to appeal to explain the phenomena. As far as the relevant changes in the learning bird’s brain are concerned, it’s a case of specificity without specification (credit to Ludger van Dijk for this phrase). And I’d like to add that the “sensory” versus “motor” difference is irrelevant to the question of whether what’s crucially involved is information storage or wiring. Sensory learning—e.g. your dog acquiring recognitional memory—seems to be a matter of wiring or acquiring capacities just as much as motor learning. Nothing of explanatory value is added by describing “some combination of synaptic weight changes and changes in density of dendritic spines” in terms of the acquisition of to-be-redeployed information.
Erik wrote:
“[B]ut nothing in telling this story requires contentful representation or storage and retrieval of information. Systematic covariation or structural correspondence between the environment and features of the learner are all one has to appeal to explain the phenomena.”
Systematic covariation *is* information. That’s the point: you cannot explain birdsong learning without it. Further, the sensorimotor learning phase, where they compare their current song to the stored template, and use the error signal to drive learning, is where that stored template has clear explanatory bite.
As you point out, information in the context of a systematic mapping isn’t *sufficient* to be a representation (then a shadow of a tree would represent time). We need some kind of teleology there (and it should be used to control behavior).
In your next post, you wrote that appeal to biological function “is enough to account for the normativity of world-directed cognitive activity”, but then said that “more is needed to account for the existence of norms and standards required for there to be contentful representing”.
I would turn this around: those functionally-infused internal maps by means of which we steer are sufficient for contentful representations (just like my map of NYC), but I would stop short of saying they are “cognitive” in the sense that philosophers like to talk about cognitive representations (e.g., propositional representations).
ET, I think it is necessary to keep us from slipping into folk psychologies, which as you know often lead us astray. That neurophysiology can enact certain things is not to say that something has necessarily been stored; you can see this when folks like R. Quian Quiroga invoke “concepts” when all they have is neuronal firing. I say Occam’s Razor is a useful tool for keeping us from making too many leaps.
I completely agree that we need something like the distinction between “ur-intentionality” and “content.” In my own work I’ve put this in terms of a distinction between “discursive intentionality” and “somatic intentionality.” But there are important respects in which “ur-intentionality” is a better concept for what we’re talking about, because it would seem to lack the sense/reference distinction or (what might or might not be the same thing, depending on who you ask) the distinction between intentional acts and intentional objects.
However, I’m not entirely convinced that ur-intentionality allows us to dispense with the very idea of representational contents. The underlying question is (I think) this: do we need to give up on the very idea of representational content for cases where there is nothing at all like semantic or propositional content?
We can begin by paying attention to the importance of claiming as a social practice. Claims are striking because they have “objective purport”: they ‘aim at’ conveying how things are, independent of the embodied/embedded perspective of the person making that claim. There’s clearly a need here for the kind of story about claiming that’s been developed by Pittsburgh School neopragmatists to connect with enactivism. (I tried doing that in my book, with very mixed success.) I won’t get into the details here, but I’d venture to say that enactivists are going to need a theory of content that’s much closer to Brandom than to Fodor, more pragmatist than cognitivist.
But with a Brandom-esque theory of content on the one hand, we can still ask whether discursively structured semantic contents are the only way of specifying what we’re talking about when we talk about representations.
If we think a bit about Huw Price’s distinction between i-representations and e-representations, and the quite different roles that those kinds of representation play (not even in the same box, as he puts it), then there’s no reason why philosophers should disallow neuroscientists from talking about representations — we can just add a little “e-” to all of their talk about representations when they develop theories of perception, memory, learning, and so on.
In other words, I think we should say that ur-intentionality is e-representational. And if we want to specify what the normative standard of e-representations is — what should an e-representation be doing? — then I think the answer is roughly that of Bruineberg and Rietveld (2014): maintaining an optimal grip in a field of affordances. (I would prefer satisficing over optimal but that’s a minor point.)
Maybe, then, we could flesh out the idea that ur-intentionality involves dynamic and transient e-representations to maintain a satisficing grip on a field of affordances, in contrast with the far more stable and secure i-representations that come on the scene when animals evolve the ability to play the game of giving and asking for reasons.
I am new to your site and new to ancient philosophy my google search led me here as i try to find out more about the origins of greetings and Hellos and goodbyes so far i am informed of the intention of others both cognitively and spiritually can you help with further information please for a novice?