Bryce Gessell (Southern Virginia University) is the first author of this third post in the book symposium on the edited volume Neural Mechanisms: New Challenges in Philosophy of Neuroscience (Springer 2021).
We called our chapter “Prediction and Topological Models in Neuroscience,” and we wrote it in the spirit of Jack Gallant. Let me explain why.
Gallant and his lab are famous for decoding visual images from fMRI data, as in this paper. In 2016 he spoke at Duke’s neuro colloquium, and afterwards a few of us grad students got to have lunch with him. At one point someone asked him about explaining mental processes through brain activity. Here was his response: “I don’t care about explaining anything. All I care about is predicting.” That was kind of a shocker for me as a philosopher, because philosophy of science puts so much emphasis on explanation. Only predicting? I thought. May it not be! What good would that do? And what would Wesley Salmon say if he were here???
But the more I think and read about cognitive neuroscience, the more I think Gallant was on to something. As we show in our chapter, brain data understood through topological and network models can predict amazing things, from mental processes to behavior to disease. The principles behind the models don’t stem from uniquely neural properties of the system, so you can use the same techniques in other fields as well. We mention some examples in our chapter, just to give an idea of the possibilities.
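To make the idea of predicting from network structure concrete, here is a toy sketch (my own illustration, not from our chapter): synthetic "region by time" data is turned into a correlation network, and a simple topological feature of that network, node degree, separates a "subject" whose regions share a common signal from one whose regions are independent. All names and thresholds here are hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def connectivity_features(timeseries):
    """Illustrative topological features from region-by-time data."""
    corr = np.corrcoef(timeseries)             # region-by-region correlation matrix
    adj = (np.abs(corr) > 0.3).astype(float)   # threshold into an adjacency matrix
    np.fill_diagonal(adj, 0)                   # ignore self-connections
    return adj.sum(axis=1)                     # node degree: one simple network feature

# Two hypothetical "subjects": five regions driven by a shared signal vs. five
# independent regions (purely synthetic data, standing in for brain recordings)
shared = rng.normal(size=(1, 200))
coupled = shared + 0.1 * rng.normal(size=(5, 200))
independent = rng.normal(size=(5, 200))

# The network feature distinguishes the two without modeling any neural mechanism
print(connectivity_features(coupled).mean(), connectivity_features(independent).mean())
```

The point of the sketch is Gallant's: nothing in the pipeline refers to uniquely neural properties, which is why the same techniques transfer to other fields.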
But Gallant was referring to issues deeper than the models’ power. One is that explanation and prediction come apart so easily in cognitive neuroscience that its practitioners can focus on one without paying much attention to the other. Gallant and others can ignore explanation because they are swimming in data and analytical techniques. When you don’t care about fitting the parameters of your model to physical properties, you won’t mind profligate models as long as they forecast well.
Another issue shows up in the overwhelming preference among philosophers and neuroscientists—curious in light of the first issue—for explanation over prediction. What justifies that preference? Sometimes it feels like inertia more than anything. In therapeutic contexts, for example, prediction may be better. Say you had to choose between a model that explains why people get dementia and a model that predicts that they will get dementia. Which would you want? I’d take the predictive one. As available data about neural processes grow, possible therapeutic applications will expand as well, and I think the preference for explanation will diminish.
Now, the realist in all of us cries out that a successful predictive model must tell us something about the physical system, because otherwise how could it make good predictions? A predictive model doesn’t float completely free of a physical system; at the very least the predictions are about that system, rather than about some other. There must be an anchor somewhere.
Here’s where I think Salmon might be disappointed in me (and this is also where I speak for myself alone, and not for my co-authors). Computing power keeps rising, and analytical methods keep migrating to neuroscience from other fields. As they do, I think that realist voice inside us will have less and less justification for thinking that predictive models must tell us something about the physical system. I think there will be fewer reasons to believe that the patterns discoverable by machines have discrete correlates in neural processes. Predictive models might still be the best place to start looking for processes. But when the haystack gets so big that only a computer can trawl through it, you start to doubt whether the needle is there at all.
Our chapter doesn’t make all these points, but I think they’re the fundamental background issues. Whatever you think about explanation in neuroscience, philosophers have yet to reckon with the problems raised by onslaughts of data combined with time on the campus cluster.
Check out the original chapter on the publisher’s website: https://link.springer.com/chapter/10.1007/978-3-030-54092-0_3