The following six theses characterize what I will call Fodorian Philosophy of Psychology (FPP, for short):
- There are psychological laws
- Psychological laws are required for psychological explanations
- Predicates used to express scientific laws pick out genuine properties
- Genuine properties are properties that ground causal relations
- Psychological properties are functional properties
- Functional properties are not reducible to physical properties
To be sure, each of these theses could use some unpacking; for my current purposes, however, I hope each is clear enough.
I think lots of people are attracted to FPP. I also think lots of people don’t believe FPP is true (including some of those who are attracted to it). The issues here are terrifically complicated, of course, and the sum total of books and articles spawned by them constitutes one of the lengthier footnotes to Plato. Among the many articles worth reading, I especially like Rob Rupert’s “Functionalism, Mental Causation, and the Problem of Metaphysically Necessary Effects” (Noûs 40, June 2006) and Brian McLaughlin’s “Is Role-Functionalism Committed to Epiphenomenalism?” (Journal of Consciousness Studies 13, 2006).
Most of the hand-wringing seems to arise from the thought that functional properties are phony. They do not have “causal bite,” and talk of functional properties is just that—talk. There may be functional “predicates” or “concepts,” but no functional kinds (or, at any rate, nothing causes anything else to happen in virtue of instantiating a functional property).
When I give my travel reports to working psychologists, I am often looked upon as a 21st century Gulliver. Or a 21st century Cassandra. Or just another wolf-crying boy. I don’t know which is worse.
Now, working psychologists don’t speak with one voice, and there is considerable variability in their familiarity with the philosophical literature. Many of them incline towards functionalism, however, and when I dare to suggest that there may be some naked-emperorism in their field, they threaten to pick up their toys and go home.
Once the initial wailing and gnashing of teeth subsides, however, my psychologist friends and I usually get to the heart of the matter. “Why, Martin,” they plead, “surely we can get away with saying this:”
There are two kinds (at least) of kinds: structural kinds such as water or tiger, and functional kinds such as mouse-trap or gene. A structural kind has a “hidden compositional essence”; in the case of water, the compositional essence is a matter of its molecules consisting of two hydrogen atoms and one oxygen atom. Functional kinds, by contrast, have no essence that is a matter of composition. A certain sort of function, a causal role, is the key to being a mousetrap or a carburetor. (The full story is quite complex: something can be a mousetrap because it is made to be one even if it doesn’t fulfill that function very well.) What makes a bit of DNA a gene is its function with respect to mechanisms that can read the information that it encodes and use this information to make a biological product.
Now the property of being intelligent is no doubt a functional kind, but it still makes sense to investigate it experimentally, just as it makes sense to investigate genes experimentally. One topic of investigation is the role of intelligence in problem solving, planning, decision making, etc. Just what functions are involved in a functional kind is often a difficult and important empirical question. The project of Mendelian genetics has been to investigate the function of genes at a level of description that does not involve their molecular realizations. A second topic of investigation is the nature of the realizations that have the function in us, in humans: DNA in the case of genes. Of course, if there are Martians, their genes may not be composed of DNA. Similarly, we can investigate the functional details and physical basis of human intelligence without attention to the fact that our results will not apply to other mechanisms of other hypothetical intelligences.
Both types of projects just mentioned can be pursued via a common methodology, a methodology sometimes known as functional analysis. Think of the human mind as represented by an intelligent being in the head, a “homunculus”. Think of this homunculus as being composed of smaller and stupider homunculi, and each of these being composed of still smaller and still stupider homunculi until you reach a level of completely mechanical homunculi. (This picture was first articulated in Fodor (1968); see also Dennett (1974) and Cummins (1975).)
Suppose one wants to explain how we understand language. Part of the system will recognize individual words. This word-recognizer might be composed of three components, one of which has the task of fetching each incoming word, one at a time, and passing it to a second component. The second component includes a dictionary, i.e., a list of all the words in the vocabulary, together with syntactic and semantic information about each word. This second component compares the target word with words in the vocabulary (perhaps executing many such comparisons simultaneously) until it gets a match. When it finds a match, it sends a signal to a third component whose job it is to retrieve the syntactic and semantic information stored in the dictionary. This speculation about how a model of language understanding works is supposed to illustrate how a cognitive competence can be explained by appeal to simpler cognitive competences, in this case, the simple mechanical operations of fetching and matching (Ned Block, “The Mind as the Software of the Brain”).
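For concreteness, the fetch/match/retrieve story in the quoted passage can be sketched as a toy program. The lexicon entries and component names below are my own invention for illustration, not Block's; the point is only that the word-recognizer is nothing over and above its three dumber components chained together.

```python
# Invented toy lexicon standing in for the "dictionary" component's store.
LEXICON = {
    "dogs": {"syntax": "noun", "meaning": "canine animals"},
    "bark": {"syntax": "verb", "meaning": "emit sharp cries"},
}

def fetch(words):
    """Component 1: pass incoming words along one at a time."""
    for word in words:
        yield word

def match(word):
    """Component 2: compare the target word against the stored vocabulary."""
    return word if word in LEXICON else None

def retrieve(word):
    """Component 3: pull the stored syntactic and semantic information."""
    return LEXICON[word]

def recognize(sentence):
    """The word-recognizer: just the three simpler components chained."""
    results = []
    for word in fetch(sentence.lower().split()):
        hit = match(word)
        if hit is not None:
            results.append((hit, retrieve(hit)))
    return results
```

Nothing here is intelligent; each component does something mechanical, and the "competence" of recognizing words falls out of their organization.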
They continue: “My dear Martin, who on Earth could have ever thought that, in providing explanations via functional analysis, we psychologists were committed to functional-causal laws?! We’re not interested in explaining single events by appealing to laws (we don’t need no stinking causal laws); we are interested in analyzing complex capacities into simpler capacities, showing how the former result from the latter. But capacities are not events, silly man, and when we characterize components functionally, we do so in order to show how the effects that these various components contribute can bring about the (typically more complex) capacity we are interested in explaining. Now go away.”
I tried passing this news along at the 2009 Eastern APA, but I’m not terribly confident that any of it stuck. My psychologist friends are getting impatient, and I need something to tell them. Can you help me out?
Thanks.
Tell them to read Bill Bechtel, who understands how biologists work, unlike Fodor.
You might want to take a Kantian perspective.
Not just mouse-traps, but genes and water are all of one kind. Once we start attributing self-identifying functions or properties to any of them then we are guilty of transcendental realism, or in plain language, animism and anthropomorphism.
Most scientists believe in animism and the psychologist is no exception.
Regarding intelligent “homunculi” in the brain (where in the brain is this ‘in’, we might ask), we might consider intelligence as not a property at all. Assigning intelligence to matter is animism. The concept of intelligence is a broken concept. Is the human brain more intelligent than the human spleen? And is the human spleen more intelligent than the mouse spleen?
Philosophical psychology has too many metaphysical curios and hybrids.
Martin,
Thanks for this nice post. I’m very interested in this topic, but I am not sure I understand your question. Is the “news” you are trying to give that functional analysis does not entail causal laws? If so, it depends of course on what you mean by causal laws. I think Fodor uses “causal law” quite liberally, in a way that includes the way the capacities of functional systems manifest themselves. (But I agree that it is unhelpful and can be misleading to talk about laws in this loose way.)
Hi Gualtiero,
Thanks for your response.
I certainly agree that plenty of generalizations will hold of a system. For example, suppose the capacity of my calculator to do multiplication is a result of its structured capacities for adding, subtracting, and so forth (as specified by some program). Further suppose that these analyzing capacities are exercised by various physical components of the machine. If that’s the case, then certain (cp) generalizations will be true of the machine, e.g., generalizations concerning what it will output, given a certain input.
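A minimal sketch of the calculator example, on one assumed program (repeated addition; a real calculator's program would differ): the complex capacity to multiply is analyzed into structured exercises of the simpler capacity to add.

```python
def add(x, y):
    # Primitive analyzing capacity, exercised in the real machine
    # by some physical component.
    return x + y

def multiply(x, y):
    # The more complex capacity, analyzed into structured
    # exercises of add(); handles negative multipliers by sign-flip.
    total = 0
    for _ in range(abs(y)):
        total = add(total, x)
    return total if y >= 0 else -total
```

The (cp) input-output generalizations hold of the machine, but what does the explanatory work is the analysis itself: showing how multiplying results from organized adding.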
Are these causal generalizations? I suppose we can call them that, but whether they are or not, I don’t think they play a substantive explanatory role in the science. It’s the capacity analysis and the identification of components that perform the analyzing functions that are of principal interest.
I worry that, in the metaphysics of mind literature, these points tend to get lost. Critics argue that having a functional property does not cause anything to happen, or that saying so violates certain Humean strictures on causal relations. Now, perhaps one could offer reasonable responses to these critics, but if you are principally worried about the science, then I don’t see how much hangs on it. I do think it is ludicrous to say, e.g., that an adder produces its effects “in virtue of” being an adder (that gets it precisely backwards!). But I also think that the benefits of functional analysis do not rest on establishing otherwise. I also agree with Eric that many of the same points apply to biology. So maybe the culprit here isn’t suspicious metaphysics, but rather suspicious philosophy of science.
Thoughts?
I think I agree with you on the issues that concern you, although I prefer to talk about mechanistic explanation rather than functional analysis. I think functional analyses are just “sketches” of mechanisms (in the sense of Machamer, Darden and Craver 2000). (I’m trying to write a paper defending that thesis.)
Hi Martin,
I do think it is ludicrous to say, e.g., that an adder produces its effects “in virtue of” being an adder (that gets it precisely backwards!).
It sounds as if the first use of ‘adder’ is meant to refer to something different from the second use of the term, but it’s not really clear what either of them is specifically referring to.
I’m thinking of an adder as something that takes pairs of numbers as input and produces (their respective) sums as output. The idea is that anything that does this is an adder, but having the property of being an adder does not explain why anything does this.
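The point can be made vivid with two differently built (here, algorithmically different) devices. Both count as adders purely in virtue of their input-output behavior; the function names and the functional test below are my own illustrative inventions.

```python
def adder_arithmetic(x, y):
    """One realization: built-in arithmetic."""
    return x + y

def adder_counting(x, y):
    """A different realization: counting up by successor steps
    (assumes a non-negative second argument)."""
    result = x
    for _ in range(y):
        result += 1
    return result

def is_adder(f, trials=((0, 0), (2, 3), (10, 7))):
    """Being an adder is a matter of input-output role,
    not of internal constitution: check the role directly."""
    return all(f(x, y) == x + y for (x, y) in trials)
```

Each device adds because of how it is built; that it satisfies the functional characterization classifies it as an adder but does not explain why it produces sums.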