Asks Ken Aizawa. Answer: Never.
Thanks a lot to Arnold, Anibal, Eric, Tony, and Ken for a great discussion of whether we may ignore neuroscience. The bottom line is that I’ve been accused of “methodological imperialism” for suggesting that philosophers of mind ought to take neuroscience into account. Well, I don’t like the label, but I’m happy to plead guilty!
Descartes took what he knew of the nervous system into account, and with good reason. He recognized that the brain is the machinery behind our behavior (getting directions from the non-physical mind, of course). He wasn’t always right, but he did his best. Since then, the reasons for taking neuroscience into account in explaining the mind have steadily gotten stronger. So, there was never a time when it was ok to ignore neuroscience.
What about metaphysicians of mind? Don’t they get to ignore empirical evidence, including neuroscience (Tony asks)? I still have to be convinced that you can discover much about the metaphysics of mind purely a priori. So, I suspect that to some extent, even metaphysicians of mind would benefit from studying neuroscience. (For an example of what I mean, see my “The Ontology of Creature Consciousness: A Challenge for Philosophy”, under “works” on my website.)
What about Chomskian linguists? Don’t they get to ignore neuroscience (Ken asks)? Not if they want to understand the causal mechanisms behind our linguistic competence. Let’s not forget that neuroscience is not just about neurons. It’s also about neural systems, including those responsible for language.
Well, alrighty then. Where folks thought I was attributing too strong a view to you, in fact, it appears that I was attributing too weak a view to you.
So, let me get this straight.
You think that anyone who has ever been interested in any way in the mind should have been studying the brain.
It’s not just that anyone who is now interested in any way in the mind should be studying the brain.
It really is that anyone in the history of life on earth who has ever been interested in any way in the mind should have been studying the brain.
I certainly appreciate Gualtiero’s sentiment, especially the naturalistic trend toward anti-apriorism. That to me seems the most welcome, fruitful, and widespread change in philosophy of mind. (Compare this to the logical positivists who had very little to say about mind).
To me this includes not just a sensitivity to neuroscientific data, but perhaps more importantly to psychology. In psychology we made great strides in the absence of neuroscience. Indeed, this was the entire starting point of Skinner’s work: the thesis that there can be a science of behavior “in the meantime”, until neuroscientific methods give us something to hang our hat on. The behaviorists discovered many interesting features of behavior. The psychophysicists discovered many interesting facts about perception in the absence of neuroscientific theorizing. People studying short-term memory found that seven plus or minus two items can be held in short-term memory. The list could get very long.
Ultimately all of these psychological discoveries will wed neuroscientific progress, but we’re a long way off. And then we’ll be better poised to get at the metaphysics. I think the ontological questions being attacked by Chalmers, Dennett, the Churchlands, etc. will be quite interesting to look at once the wedding takes place. That’s why I think it is a bad time to be a philosopher of mind. The low-hanging fruit have been plucked, and the big conceptual innovations will come from those gnawing on data.
Why would the Chomskyan linguist want to find out about the causal underpinnings of competence? It seemed like Chomsky made the distinction between competence and performance precisely in order to ignore as many of those messy computational and implementational issues as possible. Neuroscience is surely relevant to linguists who are dealing with performance issues, but it seems somewhat removed from competence issues. Is the idea that it is relevant in evaluating claims that the grammar is instantiated in the brain? That seems more like it is dealing with the neural implementation of a grammar than with the causal mechanism of competence. Maybe we are understanding competence differently.
The claim is that even to get the competence story right we’ll need neuroscience. The problem is that the behavioral evidence upon which linguists almost exclusively rely underdetermines the grammatical theory (which phrase-structure grammar is right? Is it even a phrase-structure grammar, or some connectionist net that gives similar results in certain parameter regimes, or something different?).
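The underdetermination point can be made concrete with a toy sketch (purely hypothetical, for illustration only): two generators with very different internal structure, one a recursive phrase-structure rule and one with no recursion at all, that produce exactly the same strings. The “behavioral” output alone cannot tell them apart.

```python
# Two internally different "grammars" for the language {a^n b^n : 1 <= n <= n_max}.
# Identical input-output behavior; very different internal structure.

def cfg_language(n_max):
    """Phrase-structure style: S -> 'a' S 'b' | 'ab', expanded recursively."""
    def expand(n):
        return "ab" if n == 1 else "a" + expand(n - 1) + "b"
    return {expand(n) for n in range(1, n_max + 1)}

def rote_language(n_max):
    """No recursion at all: the strings are simply assembled by repetition."""
    return {"a" * n + "b" * n for n in range(1, n_max + 1)}

# Behavioral data (the generated strings) cannot distinguish the two:
assert cfg_language(3) == rote_language(3) == {"ab", "aabb", "aaabbb"}
```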
Even if a grammar gets the input-output relations right, that doesn’t mean the representational formalism used to get that I-O is correct. To determine which functional decomposition is the right one, some searching under the hood will be required, as is typically (always?) the case with biological explanations of proximate causes. I’ve never really understood why people have fixated on this one aspect of biology, judgments of grammaticality by linguists, and so confidently claimed that for this case the explanation will be different. (Ignoring the “implementation level details” was even tried with digestion, and it was something of a disaster; Bechtel has a bit on this <a href="https://mechanism.ucsd.edu/~bill/research/bechtel_characterizingoperations.webversion.pdf">here</a>.)
I tend to think the more complicated the phenotype, the more likely that our initial hypotheses about what explains it will be wrong.
Here is a link to the Bechtel paper, I forgot the comments don’t read HTML.
Well, Ken, now it does sound like you are attributing too strong a view to me :-).
There are implicit domain restrictions in my claim. So it depends on how you interpret “anyone who has ever been interested in any way in the mind should have been studying the brain”.
For one thing, there is a sense in which anyone who is studying the mind is studying the brain. I am a physicalist, after all.
But I don’t mean to say that anyone who is interested in the mind should do only neuroscience. For one thing, neuroscience didn’t exist in any serious scientific form until a few centuries ago (to be generous). For another thing, there are many useful ways of gathering data about the mind, including, e.g., psychophysics (as Eric pointed out).
But insofar as we are interested in how the mind works, we should constrain our theories with whatever evidence we have about relevant neural mechanisms. There is no reason to base our theories only on behavioral data when we know relevant facts about the internal mechanisms.
My polemic is mostly with certain versions of functionalism that have dominated the philosophy of mind in the last few decades, as well as certain methodologies that have been influential in psychology over the same period. According to such views, psychologists and philosophers of mind need not concern themselves with neuroscience–at all. And that’s wrong.
Again, there is nothing wrong with gathering data about language by purely “linguistic” methods. And it may be useful to capture some of the regularities in your data with a formal theory of competence (unconcerned with the mechanisms). Of course, as Eric points out, generally there are many different ways of capturing the regularities in your data, and there are questions about which formalism is empirically most adequate that cannot be settled without considering the implementing mechanisms (e.g., what’s the best way to represent the inputs, internal states, and outputs?). The biggest problem arises when we try to figure out the mechanisms behind linguistic behavior. Why would you constrain your theory only with behavioral data when there is relevant information about the (neural) mechanisms?
Shawn wrote: “That seems more like it is dealing with the neural implementation of a grammar than the causal mechanism of competence.”
What else do you take the causal mechanism of competence to be, besides the neural implementation of a grammar (broadly construed)?
> But insofar as we are interested in how the mind works, we should constrain our theories with whatever evidence we have about relevant neural mechanisms. There is no reason to base our theories only on behavioral data when we know relevant facts about the internal mechanisms.
But, here is what was implicit in the stuff to which Anibal was referring. Of course we know lots and lots of facts about the brain. The problem is that Chomskyan linguists and neuroscientists just don’t seem to have an account of how the brain facts bear on their linguistic facts. Of course, we should not ignore relevant facts. The problem is that we do not know which of the facts we now have are relevant. That’s what I am suggesting Poeppel might have meant when he said that neuroscience and linguistics are incommensurable. He’s not making a claim about “ultimate” neuroscience and “ultimate” linguistics. Maybe he’s making a claim about the here-and-now use of linguistic and neuroscientific facts circa 2008.
What I think Tony and I are saying is not that there are no important neuroscientific facts. What we think you ought to admit is the possibility that the facts we have now may not be relevant to some issues in the philosophy of mind. Of course, some philosophical ideas have neuroscientific facts that are relevant to them, but you seem to be assuming that all philosophical ideas have neuroscientific facts that are relevant to them, and indeed that all philosophical ideas have always had relevant neuroscientific facts.
Despite my strongly brain-centered approach, Ken is pointing to the crux of the matter: given our current knowledge and understanding of the brain, we simply have no idea how to bridge the divide between the units of one side (in our inherited example case of language: words, meanings, sentences, and other linguistic operations) and the units of the other side (neurons, populations of neurons, and other neural mechanisms: synchronization, oscillation…).
But this position circa 2008 (which Ken also notes, though as a note of caution alongside his enthusiasm for neuroscience) could be accused of resting on a poverty-of-imagination argument, as Poeppel himself acknowledges.
In medieval times, diseases were thought to be the effect of some divine punishment; we now know that this view is NOT very plausible. The way language is currently perceived is similar to that historical case in medicine.
But, against the general Chomskyan views about language and its relation to neuroscience (brains): not only the NFL, the narrow faculty of language (the recursion mechanisms), but also the BFL, the broad faculty of language (shared primate traits involved in communication), need to be implemented in one manner or another in our brains (CNS). I am ultimately convinced, as are the scientists investigating language as a window onto the brain (or vice versa), that without a brain there is no mind; and since language is an essential part of the human mind, language must be housed in our brains.
I encourage following the discussion further by looking to those pioneers in bringing language back to its home (the brain).
Among those pioneers are G.A. Ojemann and N. Geschwind, and current investigators include Friedemann Pulvermuller and Yosef Grodzinsky.
I also recommend the recent review of Pinker’s latest book by Patricia Churchland: “Poetry in motion,” Nature 450, 29-30 (1 November 2007).
Thanks for this comment, Anibal.
Incidentally, Pinker has a nice reply to this as well; it appears alongside another reply to Churchland.
In Neurophilosophy, Pat Churchland asks, rhetorically, “how could the empirical facts about the nervous system fail to be relevant to studies in the philosophy of mind?” If one’s answer is a serious, “They can’t,” then I think that’s off the deep end. (Maybe a person who thinks this has a failure of imagination.)
I think Anibal and Kenneth are somewhat overstating the chasm between neuroscience and linguistics. I think the problem isn’t opportunity, but motivation. There are lots of neuronal models of different aspects of grammar. A sustained attack on the problem of implementing grammars in artificial neural nets can be seen in Smolensky’s The Harmonic Mind.
I am more familiar with Elman’s excellent work, such as:
1. Morris, W.C., Cottrell, G.W., & Elman, J.L. (2000). A connectionist simulation of the empirical acquisition of grammatical relations. In Stefan Wermter and Ron Sun (Eds.), Hybrid Neural Symbolic Integration.
2. Lewis, J.D., & Elman, J.L. (2001). Learnability and the statistical structure of language: Poverty of stimulus arguments revisited. Proceedings of the 26th Annual Boston University Conference on Language Development.
3. Elman, J.L. (1990). Finding structure in time. Cognitive Science, 14, 179-211.
Of course, these are all really speculative psychological models. Artificial neural networks don’t make strong predictions about neural data, but about behavior, so they suffer from the same shortcomings as Chomskian models. But they tend to be more neuro-friendly than classical Chomskian models, and tend to make more of an effort to find points of contact with neuroscience. Also, it is much easier to envision how to convert connectionist models into biological models, something the Chomskians obviously have a lot of trouble doing. So even if connectionist nets end up “merely implementing” a Chomskian phrase-structure grammar (or whatever flavor of Chomsky you like), connectionism will facilitate the coevolution of the neuropsychological story.
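As a concrete (and purely illustrative) sketch of the core mechanism behind Elman’s “Finding structure in time”: a simple recurrent network feeds its hidden state back in as context, so its response to a symbol depends on the sequence that preceded it. The weights below are random, not trained; the point is only the role of the recurrent context units.

```python
import math
import random

random.seed(0)

def srn_step(x, h_prev, W_in, W_rec):
    """One Elman-style update: the new hidden state depends both on the
    current input x and on the previous hidden state (the context units)."""
    n_hid = len(h_prev)
    h = []
    for j in range(n_hid):
        s = sum(W_in[i][j] * x[i] for i in range(len(x)))
        s += sum(W_rec[k][j] * h_prev[k] for k in range(n_hid))
        h.append(math.tanh(s))
    return h

n_in, n_hid = 2, 3  # two input symbols (one-hot coded), three hidden units
W_in = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_in)]
W_rec = [[random.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_hid)]

A, B = [1, 0], [0, 1]  # one-hot codes for two symbols
h0 = [0.0] * n_hid

# The same symbol A yields different hidden states depending on history:
h_after_A = srn_step(A, h0, W_in, W_rec)
h_after_BA = srn_step(A, srn_step(B, h0, W_in, W_rec), W_in, W_rec)
assert h_after_A != h_after_BA  # sequential context is carried in h
```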
My point has been that there exist areas of cognitive science where it is reasonable not to try to use neuroscience in one’s research, and I have offered Chomsky’s minimalist work as an example. This has been in response to the idea, roughly, that no one in cognitive science can reasonably ignore neuroscience these days.
Maybe I should have emphasized the Chomskyan minimalism more, because of course there are a bazillion models that combine “brain-inspired” stuff with this or that feature of language processing.
OK, this is reasonable (though I would differ in my example: I’d advert to psychophysics or something where the results are more obvious and the theorizing more conservative).
Ok, then. So, Gualtiero seems to be the last holdout against the separation of minimalism and neuroscience.
I suspect that disputes of this kind can go on interminably as long as the arguments are expressed in nominal or black-box functional terms. The meaning and implications of specific proposals couched this way are so loosely constrained that it is very difficult to point to decisive empirical evidence in support of any particular explanatory account. Psychology itself provides a good example of a discipline that has generated a vast body of very interesting empirical findings. But in over a century and a half of diligent effort, it has been unable to produce a single broadly accepted theory to account for any significant part of its wealth of experimental and clinical findings. I blame its traditional commitment to a purely functional explanatory paradigm for this failure. The value of developing biologically competent explanatory models is that they can be tested and demonstrated to be wrong when they can’t do the job they are supposed to do. When they are right, we can see that too. I understand that this is not “conservative” theorizing, but I think it is much more *informative* theorizing than the “safe” functional approach.
That was a mistake. I had not meant to say “causal mechanism of competence” but rather just “competence.” As stated, it is obscure what the distinction between the neural implementation and the causal mechanism would be. The idea was just that competence seemed to be the term used to designate an abstract grammar that generated the sentences of whatever language. The neural implementation of that grammar gets away from that and into the performance side of things. I think Eric Thomson’s explanation above clears up the problem.
OK, I don’t buy the claim that philosophy of mind cannot be fruitful without neuroscience. There is a lot of conceptual work that makes sense without neuroscience, or at least without the details of neuroscience. Example: you don’t have to know anything specific about ant brains to say that ants use their environment as memory (so that they have situated cognition). Herbert Simon claimed as much without knowing many details of the ant’s neural system. Was he wrong? No, and he could abstract away from the details.
Another example: you can define crucially important ontological notions, such as supervenience, reduction, emergence, implementation, instantiation while knowing absolutely nothing about the brain structure. You just have to know that there is some structure but you don’t worry about it as long as you’re doing formal ontology and not empirical psychology.
Though I’m a naturalist myself, and I find a lot of philosophical armchair speculation futile without science, there are cases where scientific data really isn’t needed that much. Maybe to feel better, but not for anything else.
Gualtiero, in your commentary on Merker’s BBS article you make a distinction between creature consciousness and state consciousness. If I understand you correctly, philosophers typically assert that only state consciousness can be said to have phenomenal content. But you suggest that creature consciousness might actually be the ontological basis of conscious experience (possessing phenomenal content). I have not framed the neuronal (ontological) basis of consciousness in such binary terms. Instead, I have proposed that we consider three different levels of consciousness according to the activation level of the brain’s retinoid system (Trehub (2007). Space, self, and the theater of consciousness. *Consciousness and Cognition*).
I have suggested that as a first approximation within my theoretical framework, we can distinguish at least three different levels of consciousness. The first (C1) is minimal awareness: what I have referred to as the primitive sense of centeredness within a surround. The brain correlate of C1 would be excitatory activity, beyond some critical threshold, in the cluster of neurons that constitute the self-locus, and in the other autaptic cells of the 3D retinoid. It is assumed that this would normally occur only if there is sufficient corticipetal arousal from the reticular activating system. The second level would, in addition to C1, include activation of I! (the brain’s neuronal token of the self-locus), so that consciousness C2 would be characterized by a minimal sense of oneself at the origin of egocentric space. At the third level (C3), alert awareness, the conscious brain would include activation of all the sensory/cognitive mechanisms that are synaptically linked to I! (see Fig. 8 in SSTC). To what extent would this formulation be consistent with your views about creature consciousness and state consciousness?
Thanks for your comment, Arnold. What you say sounds entirely consistent with my suggestion.
I am amazed that the impact brain research is having on psychiatry, psychology, and philosophy hasn’t been more widely recognized. Consider: for the first time in human history, we are acquiring sufficient hard data to explain much of our puzzling behavior. The discovery (by Dr. Michael Gazzaniga et al.) of the left-brain interpreter function provided the key to understanding how we could be tribal, territorial animals yet totally oblivious to the fact that instincts unconsciously influence our behavior. The importance of this discovery has yet to be realized, since many are implacably opposed to admitting that our consistently irrational behavior, like warring, may be caused by instincts.
Additionally, the ability of fMRI scans to detect which modules of the brain are active during cognitive processes provides a crude, but nonetheless incredibly revealing window into how we “think”: it allows testing whether some of our gross assumptions are true or not. For example, it is now possible to test the theory that our brain processes ideas subconsciously by comparing them with existing beliefs, and biasing our conscious reaction accordingly.
We have reached the point where we should begin reinterpreting classical psychiatric/psychological theories and their modern offshoots in terms of actual brain knowledge rather than suppositions.
For those interested, I have summarized the brain research that makes it possible to develop a “tribal programming theory of human behavior” in the book, “Man by Nature: The Hidden Programming Controlling Human Behavior.” (Information is also available at manbynature.com and manbynature.blogspot.com.)