This post ends with a brief discussion of anxiety about the internal. I take that anxiety to arise when we see strong arguments for the idea that theories cannot successfully posit non-reducible mental states that provide distinctive causal explanations. The idea that the causal powers producing our beliefs, actions and emotions are series of neural excitations and nothing more can certainly make the mental look epiphenomenal. And it leads many, I think, to feel that if we cannot get states such as beliefs and desires into the head, we are left with a bleak view of ourselves as little more than automatons.
We will start with a very brief summary of one point from the previous post. Then we will discuss radically different senses of “representation,” one dominant in neuroscience and the other in contemporary philosophy of mind. We will next look at the argument that we do not have a successful theory giving us inner mental states with contents. This discussion leads into the discussion of the anxiety about the internal.
We’ll end with the merest glimpse of one topic for the next post. The glimpse is a photograph of the very naughty Rusty, the red panda that escaped from the National Zoo.
The main theories of affordances we have looked at have the motivation for our movement coming in the first instance from the environment. A different approach would locate the motivational power in us. Suppose we grab at a piece of chocolate cake. The first kind of theory encourages the explanation that the cake called to us, presented itself as irresistible, and so on. Theories of the second sort emphasize the internal state we brought to the cake as the source of motivation. In the second case, it may be part of the story that one really longed for chocolate cake before one was even in its vicinity. But can we locate a desire for a piece of chocolate cake in the mechanism we looked at, which depended largely on bursts of dopamine?
Desires are also thought to be mental representations. Before we address them directly, we should look at representations in cognitive neuroscience (CNS) and contemporary philosophy. The two fields, I want to suggest, are employing ‘representation’ in very different ways. Hence, the cognitive scientist’s insistence that there must be representations of goals need not deny Dreyfus’ claim that there need not be any such representations. Rather, they are speaking of different things.
CNS draws on a model of representations that has been around since the days of Aristotle. On this older model, to represent something is to realize it, exemplify it or even copy it. And exemplifying something or copying it is quite different from being about it, as the newer sense takes representations to be. Thus a copy – a color sample – can represent the color I painted my study, though the sample is hardly about my study. I might take a paper to represent my work at its best; the paper may not be about my work, but it exemplifies my best work. Similarly, a protest may represent something an administration feared would happen without being about their fears. The protest might instead be described as realizing their worst fears.
One place where we find the older model is in Aquinas’ philosophy. Uses of the model will vary with one’s ontology, and in Aquinas’ philosophy, we find a robust conception of matter and form. In his philosophy, then, perception and cognition about a cat involve realizing the cat in one by realizing the relevant sensible and intelligible species. In my next post here on the Brains blog, we will see the results of interpreting his accounts of perception and cognition in terms of our very different contemporary philosophical notion of representation. For now, we can see the color of the cat and the essence of the cat as both forms which can be instantiated in different objects. As I see the cat and recognize it as a cat, the sensible ‘speckled grey’ gets realized in my sensory system, while the essence of the cat is realized in my intellect.
Neither of the realizations will mean we can find any physical trace of the cat in one’s head. Rather, the sensible and intelligible species are realized in ways that are specific to perception and cognition. (One might find this explanation circular.) How could Aquinas’ idea translate into the ontology and terminology of CNS? Oddly enough, a recent passage in an article about a new development in CNS could have been designed to answer just this question. Thus Kriegeskorte, Mur, and Bandettini (2008) note:
Naively: The representation is a replication of the object, i.e., identical with it. (Problem: A chair does not fit into the human skull.) More reasonably, we may interpret first-order isomorphism as a mere similarity of some sort. For example a retinotopic representation of an image in V1 may emit no light, be smaller and distorted, but it does bear a topological similarity to the image. More cautiously, we could maintain that first order isomorphism requires only that the representation has properties (e.g., neuronal firing rates) that are related to properties of the objects represented (e.g., line orientation). While the naive interpretation is clearly untenable, the other interpretations are generally accepted in neuroscience. We concur with this widespread view, which motivates studies of stimulus selectivity at the level of single cells and brain regions. However, we feel that analysis of the second-order isomorphism (which can reflect a first-order isomorphism) is equally promising and offers a complementary higher-level functional perspective.
Notice that the passage takes the view of representation presented as standard in neuroscience. Nonetheless, we need to be careful about how we interpret the above passage. The isomorphism is not a simple structural isomorphism; rather, the idea is that the resulting representation will have a value for each causally registered feature. Which features actually get so registered is an important question for understanding perceptual experience.
The role of similarity is very prominent in a quickly developing area of cognitive neuroscience that seeks to bring together three major branches of systems neuroscience research: brain-activity measurement, behavioral measurement, and computational modeling (Kriegeskorte & Kievit, 2013; Laakso & Cottrell, 2000; Mur et al., 2013). The key to achieving a unified picture from such disparate sources resides in the similarity obtaining among the “representations.” The analysis focuses first on a kind of similarity among stimuli, and then moves to a second-order similarity among the neural reactions. In many theories, the second-order similarity is given by distance matrices, which tell us how the reactions to the various stimuli are differentiated from one another. What is important for us is that the move from similarity of stimuli to similarity of reactions does not give us semantic content. A major part of the reason is that similarity does not give us bivalent satisfaction conditions.
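The core of the second-order move can be sketched in a few lines. The sketch below is a toy of my own, not the authors’ method or data: the “stimuli” and the “neural” responses are random vectors, and the names `rdm` and `second_order_similarity` are invented for the example. It shows how two systems whose responses share no features can nonetheless share a similarity structure, which is what the comparison of distance matrices measures.

```python
# Toy representational similarity analysis (RSA). All data here are
# random vectors invented for illustration; this is not the authors' code.
import numpy as np

def rdm(patterns):
    # Representational dissimilarity matrix: pairwise (1 - correlation)
    # between response patterns, one row per stimulus.
    return 1.0 - np.corrcoef(patterns)

def second_order_similarity(rdm_a, rdm_b):
    # Correlate the upper triangles of two RDMs: do the two systems
    # structure the same set of stimuli in the same way?
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

rng = np.random.default_rng(0)
stimuli = rng.normal(size=(6, 20))            # 6 stimuli, 20 features each
neural = stimuli @ rng.normal(size=(20, 50))  # a linear "neural" recoding

# The neural patterns look nothing like the stimuli feature-for-feature,
# yet their dissimilarity structures tend to be strongly correlated.
r = second_order_similarity(rdm(stimuli), rdm(neural))
print(r)
```

Note that a high value of `r` tells us only that the two similarity structures match; nothing in the computation yields satisfaction conditions.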
I don’t have the space or the mathematical skills to elucidate the representational-similarity theory very fully, but it should be clear that it allows for what neuroscientists mean when they say things such as:
Research repeatedly points to the prefrontal cortex as a site where goals are formed, selected, and actively maintained. … Currently, our best idea is that goals are represented by recurrent patterns of neural activity that are stable for some time and can be distinguished from ongoing background activity. (Montague, 2007)
Goals realized in the brain are recurrent patterns.
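One way to picture a recurrent pattern that is stable for some time and distinguishable from background activity is as an attractor of a recurrent network. The Hopfield-style toy below is my own illustration, not Montague’s model: a single stored pattern (standing in for a “goal”) acts as an attractor, so the network’s own recurrent dynamics restore it even when activity starts out noisy.

```python
# Minimal Hopfield-style attractor: an invented stand-in for a "stable
# recurrent pattern", not a model from the literature.
import numpy as np

rng = np.random.default_rng(1)
goal = rng.choice([-1, 1], size=32)        # hypothetical stored pattern
W = np.outer(goal, goal).astype(float)     # Hebbian weights
np.fill_diagonal(W, 0.0)                   # no self-connections

def step(state):
    # One round of recurrent updating.
    return np.sign(W @ state)

# Start from a corrupted version of the pattern: flip 6 of its 32 units.
state = goal.copy()
flip = rng.choice(32, size=6, replace=False)
state[flip] *= -1

for _ in range(5):
    state = step(state)

print(np.array_equal(state, goal))  # → True: the pattern is restored
```

Nothing about the attractor’s stability, of course, supplies the pattern with content; it is stable activity, not aboutness.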
There are some important questions we might ask of similarity accounts. One concerns truth. Similarity is ill-suited to give us bivalent truth-conditions, but many theorists think that getting to the truth is a major point of perception. The second concerns what we can call ‘individuation’. We might think of this second problem as coming in two forms. The first concerns the definiteness that mental representations are thought to make available. A burst of dopamine may be reliably conjoined with a piece of chocolate cake, but it does not have the indexical element that desires with propositional content can have. We might say that the content of a desire can contain a specification that one wants this piece of cake. Secondly, a sample can be a sample of an indefinite number of things. That is why, we can remind ourselves, Goodman thought that a sample or example was such only if a symbolic system could be used to specify what the sample or example is of (Goodman, 1968).
I do not have the space to address these problems here. I’ve tried to do so elsewhere. For truth, see https://annejaapjacobson.files.wordpress.com/2014/03/vision-theory-ajj.pdf. And the questions of individuation are discussed here: https://empiricalphilosophy.wordpress.com/2014/11/27/paper-response-to-the-natural-origins-of-content-by-hutto-and-satnet/.
We can see much in the philosophy of mind as treating desires as supervening on neural processes of the general sort we have described, if not with the details we have provided. There is, however, a problem with this. Desires are typically thought to have contents, which provide satisfaction conditions. Accounts of how internal states get the sort of content desires seem to have – such as the desire that I fill up on chocolate or the desire that I rest on the sofa – aim at giving us the truth-conditions for ascriptions of such desires to their possessor. That is, they tell us that, for example, my desire is a desire for chocolate because of how evolution created my tastes or how I was taught to identify things. But even if the account gets the truth conditions right, we need to ask whether what is achieved is anything like a causal element in the scenarios we have been looking at.
If the theorist bringing in desires sees them as reducible to the neural mechanisms we are describing, there does not appear to be a problem with their being causes. Rather, there is a problem with their literally having content. If desires have to have content, arguably reductive desires are not really desires. [Much of what follows here is new and has received no critical feedback. I’d love to hear what you think about it.]
Non-reductive accounts, with their content, do present a very serious problem. The problem is that the elements typically referred to as what creates the content do not add to the causal powers present in the neuroscientific reductive base. Let us consider the teleological/learning accounts first. The problem can be illustrated by Prinz’s (2000) amendment of Dretske’s account:
… the real content of a concept is the class of things to which the object(s) that caused the original creation of that concept belong. Like Dretske’s account, this one appeals to learning, but what matters here is the actual causal history of a concept. Content is identified with those things that actually caused incipient tokenings of a concept (what I will call the ‘incipient causes’), not what would have caused them. (p. 249)
Neither the things that originally caused the tokening nor the causal history of the concept add causal properties to those already present in the reductive base. Similarly, if we want to trace neurally the path from the frog’s sighting of a speck to its zapping it with its tongue, conjectures or even known facts about evolutionary history are beside the point.
The problem here might be seen as inspired by Kim’s causal exclusion problem (Kim, 2007). However, it seems to me that what I am doing is undercutting the source of the more metaphysical problems Kim finds. These metaphysical problems arise with a kind of overdetermination that non-reductive accounts of the mental may be thought to bring in. I have given reason for thinking that while such accounts do add new truth-conditions, they do not add additional causal powers. That said, appeals to desires may have additional explanatory power. They set the agents and their actions in the context of norms arising from a number of sources, including social ones.
Explanations in terms of desires, we might say, can refer to the supervenient base or a social context or both. Many, many things get placed in such positions, birthday cakes among them. A birthday cake may have been a perfect gift except for the fact that eating it caused its recipient a terrible rash. Psychoanalytic concerns aside, it is a good bet that the rash was caused by the physical composition of the cake, but its being a perfect gift – at least until sampled – concerns its position in a social setting.
Explanations of beliefs, actions and emotions in terms of standard reasons are not, I have argued elsewhere, causal explanations. These arguments are in two papers I link to in the comments on my first post. This is not to say that the neural mechanisms cannot have effects that we do describe as the effects of mental states. Just as the cake can cause a rash, a strong desire might make one shake or cause one to forget something else.
I think there may be a mistake cropping up at various points in the history of philosophy. We might put the mistake as “reading ontology off of logical/semantical form.” This mistake, if mistake it is, is very easily committed with causal statements. Hume took ordinary singular causal statements to provide the ontology onto which natural necessity would or would not fit. (It didn’t fit.) Davidson took ordinary psychological explanations in terms of beliefs and desires to be causal explanations of actions. That involves understanding propositional attitudes as items in our heads, a task at which arguably we have yet to succeed.
Having felt for decades the urgency of finding a place for beliefs, desires and other psychological states in the physical world that does not leave them without any causal role, I am now puzzled by that feeling. I suspect that it rests on my missing a distinction between the sub-personal level described in terms of micro-entities and the sub-personal level described in terms of the interests of the person. Once neuroscience has advanced enough, I think we can lose what we might think of as the anxiety of the inner. That is the anxiety that if we cannot get personal traits inside us to have their effects on our muscle movements, among other things, we will be like primitive robots. It seems unpalatable to think of ourselves as just buffeted by these surges of dopamine. But once we realize that the dopamine is part of a very complicated system involving vision and movement, among many other things, which is attuned to provide for basic survival in our niche, “buffeting” seems less appropriate. Furthermore, as our experience extends beyond that of infancy, the dopamine system can respond to new interests, including those taught by our environment. Further, we can of course exercise some control over it, even to the extent of checking ourselves into rehabilitation clinics.
In my next post, I’ll be looking at some more historical issues, but we will also consider some contemporary problems with accounts of concepts, such as Prinz’s amendment of Dretske’s. Red pandas provide a good example of a concept that eludes some standard treatments. We’re ending now with a picture of a very naughty red panda strolling down a sidewalk in D.C. after it escaped from the National Zoo.
Goodman, N. (1968). Languages of art: An approach to a theory of symbols. Indianapolis: Bobbs-Merrill.
Kim, J. (2007). Causation and Mental Causation. In B. P. McLaughlin & J. D. Cohen (Eds.), Contemporary debates in philosophy of mind. Malden, MA: Blackwell Pub.
Kriegeskorte, N., & Kievit, R. A. (2013). Representational geometry: integrating cognition, computation, and the brain. Trends in Cognitive Sciences, 17(8), 401-412. doi: 10.1016/j.tics.2013.06.007
Kriegeskorte, N., Mur, M., & Bandettini, P. A. (2008). Representational similarity analysis – connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2. doi: 10.3389/neuro.06.004.2008
Laakso, A., & Cottrell, G. (2000). Content and cluster analysis: Assessing representational similarity in neural systems. Philosophical Psychology, 13(1), 47-76. doi: 10.1080/09515080050002726
Montague, R. (2007). Your brain is (almost) perfect: How we make decisions. New York: Penguin Group.
Mur, M., Meys, M., Bodurka, J., Goebel, R., Bandettini, P. A., & Kriegeskorte, N. (2013). Human object-similarity judgments reflect and transcend the primate-IT object representation. Frontiers in Psychology, 4. doi: 10.3389/fpsyg.2013.00128