When I was in graduate school at Pitt around the late 1990s, I hung out with some faculty and students in the Psych Department. One day I asked one of the more ambitious Psych grad students, “what’s the future of psychology?” He answered without hesitation: “cognitive neuroscience”. Since then, psychology has become more and more integrated with neuroscience, and many members of psychology departments are happy to label themselves “cognitive neuroscientists”.
I have tried to articulate why this transition is both principled and warranted. In a nutshell, the cognitive states and processes of critters such as ourselves are neurocognitive states and processes, so there is no good way to discover and investigate them without taking into account data about neurocognitive systems. See my recent book for a more detailed account.
Given how far the discipline has come, you might think that the need to integrate psychology and neuroscience into cognitive neuroscience would hardly be controversial anymore. Nevertheless, skeptics raised to think that psychology is autonomous still “do not expect cognitive neuroscience to replace psychology anytime soon”. But “replace” is the wrong word here. What needs to happen, and has happened to a considerable degree, is that psychology as a whole evolve into cognitive neuroscience, which simply takes more data and constraints into account than its predecessor.
For ambitious young philosophers, there is plenty of work to do to contribute to a more integrated science of the mind. As always, I’m happy to serve on dissertation committees pushing in this direction.
Could you give me an example of some cognitive process which we have come to understand better by taking something neuroscientific into consideration?
Max C
Hi Max, I see you are still into this challenge 16 years after the Cortex forum 🙂
I think your challenge is not easy to address for the reasons that Adina Roskies explained a dozen years ago or so — you do not seem to be willing to buy any sort of systematicity in function-structure mapping, so the challenge might be rigged.
But let me try nonetheless. We might have learnt a few things about face processing thanks to neuroscience. For instance, the finding that face perception and some holistic object recognition share a neural substrate (but only in experts with those objects!) paves the way to an expertise-based account of face perception (see Gauthier’s or Bilalic’s work). And the impressive stimulation study by Schalk et al. (2017, PNAS), where people see ‘facephenes’ over balls, seems to suggest that the (abstract) concept of face is part of the hidden ontology of our brain.
Are these findings conclusive? Nay. But that applies equally to all findings.
Moreover, if you include “providing grounds for new hypotheses”, the oft-blamed reverse inference can provide grounds for many hypotheses (see Calzavarini and Cevolani here http://philsci-archive.pitt.edu/20134/).
Memory. Perception. Motor control. Pain. Sleep, dreaming, and hallucinations. Alzheimer’s dementia. But limiting this to “cognitive” is to tilt the playing field in a very strange way, that is much too friendly to anachronistic ways of thinking (I know the post did this, but it is sort of arbitrary)
I’d take away that tendentious limitation and ask: how well can you explain invertebrate behavior (such as in the bee) without neuroscience? And can you tell us a priori that there is nothing cognitive going on there? Or, say, in “lower” vertebrates such as mice, rats, snakes, songbirds, squirrels, etc.?
The co-evolution between neuroscience, ethology, and behavioral research is well underway. Are people still seriously arguing about this online?
It’s really rarely about “reduction” anymore, it is about drawing on whatever tools might be relevant, and not limiting yourself. Behavioral experiments are great. Physics is great. Neural data, simulations, and models are great. Psychological data, models, and simulations can be great. Philosophical conceptual insights can be great. Triangulation of insights from multiple sources in complicated and unpredictable ways seems to be the name of the game.
Turf wars are very 1980s is all I’m saying.
Eric, thanks for your comment, with which of course I agree. Could you please elaborate on this: “limiting this to “cognitive” is to tilt the playing field in a very strange way, that is much too friendly to anachronistic ways of thinking”? When I write “cognition” I mean it in a very broad sense, including what the bee brain does. Is that still too restrictive?
Hi Gualtiero, in retrospect I think in my ranting I probably wasn’t totally fair there.
My concern with these discussions when they focus on the “cognitive”, especially around philosophers, is that we end up going down rabbit holes where only humans end up counting, because we are the only known species with language (I’m thinking of Brandom and other expressivists).
But I suppose that isn’t fair, as most people (including philosophers) are not that extreme! They use the term to denote conceptual activity much more broadly construed. E.g., representations that are not tightly locked to sensory inputs (i.e., decoupled representations, which exist on a continuum).
Thanks for asking. I’ll leave arguments about specific processes to experts about them. My own main point goes deeper than any specific cognitive process to impugn the very notions of representation and computation that theorists of cognition should employ. In general, neural computation is neither digital nor analog (at least in the most straightforward sense of “analog”); it is sui generis, which means that it must be understood in its own terms (Ch. 13). Neural representation is also different in fundamental ways from the sorts of representations that are typically manipulated by digital computers (or analog computers, for that matter) and I have a paper under submission that explores this in more detail. We need to couch our explanations in terms of neural computation and neural representation and test them with relevant evidence, which includes evidence about neurocognitive systems. Of course, there is much that we still don’t understand about neural representation and computation, but IMO that’s where progress is going to come from.
Can psychology evolve into cognitive neuroscience without resolving what David Chalmers calls “the hard problem of consciousness”?
Integration of psychology and philosophy of mind with cognitive neuroscience seems to be inevitable, but its accomplishment is still far, far away. One of the obstacles in bridging these approaches is the lack of a first-person dimension in mechanistic explanatory models. This doesn’t mean that neuroscience should solve the hard problem of consciousness (the formulation of which, I think, is based on a misconception), but if mechanistic models aim to explain cognitive experience (and not only a cognitive function of a system), then they must include first-person parameters or at least make them derivable from the model. Still, a lot of work on first-person methodologies needs to be done (I appreciate your papers on introspection; DES is an interesting proposal; also, there is, I think, potential in approaches that draw from continental phenomenology). In my recently published book “Mechanisms and Consciousness: Integrating Phenomenology with Cognitive Science” (Routledge, 2021), I show how to push first-person phenomenological approaches in a naturalistic, mechanistic direction — a direction that phenomenologically oriented philosophers try to avoid, but which to me is the only explanatorily promising direction today.
Prem, thanks for your question. I agree with Marek that psychology can evolve into cognitive neuroscience without solving the hard problem of consciousness. Psychology was a successful science even before being integrated with neuroscience; cognitive neuroscience is just a more sophisticated version of psychology.
Marek, thanks for the great point; I agree. And special thanks for mentioning your book, which I was not aware of. I am going to get a copy and read it ASAP.
Marek, thanks for the response and the reference to your book. This is not my area of expertise, so my comments are rather speculative in nature. I will have to see how your book tackles it, but my concerns so far about tying first-person perspectives to mechanistic models are (a) incorporating tacit knowledge, and (b) consciousness as an emergent phenomenon that is resistant to mechanistic models. I look forward to reading your book.
“(b) consciousness as an emergent phenomenon that is resistant to mechanistic models.”
It is a common misconception about mechanism that it cannot explain emergent phenomena. Obviously, it depends on the conception of emergence one has in mind. If by emergent phenomena we understand new properties at a higher level of organization, then the mechanistic framework has the resources to explain them (for an extended discussion see, e.g., Bechtel & Richardson, Discovering Complexity). Roughly speaking, every complex behavior is an emergent phenomenon, i.e., it is produced by organized entities at a lower level, and the mechanistic approach aims to describe these entities and their organization. That being said, there is an ongoing discussion about whether the mechanistic approach is reductive. In my opinion, it is not, at least not in a strong sense. But I understand that some researchers may have such worries and thus oppose the mechanistic framework.
Another interesting development is the growing literature on mechanistic-dynamical explanations (e.g., Bechtel & Abrahamsen 2010, Golonka & Wilson 2019), which shows how we can merge these approaches and explain dynamic emergent phenomena. An interesting case study is the set of mechanistic-dynamical models of the epileptic brain (I discuss them in my book, chapter 5).