Daniel Weiskopf, What Decoding Can’t Do

Daniel Weiskopf (Georgia State University) is the author of the third post in the book symposium on the edited volume Neural Mechanisms: New Challenges in Philosophy of Neuroscience (Springer 2021).

Neuroimaging has seen major advances in experimental design and data analysis in recent decades. Among these are new methods, provocatively referred to as mind-reading or brain-reading, that borrow tools from machine learning to decode cognitive processing from ongoing neural activity. These methods promise not only to shed new light on old problems in cognitive neuroscience, but also to herald a revolutionary shift towards greater predictive power that will bring both theoretical and translational benefits.

While decoding methods are undoubtedly impressive, we should be cautious in interpreting their results. Consider the application of decoding to the longstanding problem of reverse inference. Reverse inferences move from evidence concerning neural processes (N) to claims about the occurrence of cognitive processes (C). For instance, under what conditions can we conclude from neural activity in orbitofrontal cortex that a person is experiencing pain rather than some other sensation? The problem has been widely discussed among neuroimagers since Russell Poldrack’s influential framing of it in Bayesian terms.
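
To make the Bayesian framing concrete, here is a minimal sketch of the calculation it involves. All of the probability values below are invented for illustration, not estimates from any study; the point is only that the strength of a reverse inference depends on how selective the neural response is, that is, on how much more likely the activation is when the cognitive process is engaged than when it is not.

```python
# Illustrative only: the probability values are invented, not drawn from any study.
p_N_given_C = 0.8      # P(activation N | process C engaged)
p_N_given_not_C = 0.3  # P(activation N | process C not engaged)
p_C = 0.5              # prior probability that the task engages C

p_N = p_N_given_C * p_C + p_N_given_not_C * (1 - p_C)
p_C_given_N = p_N_given_C * p_C / p_N

print(f"P(C | N) = {p_C_given_N:.2f}")  # ~0.73 here; it drops as P(N | not-C) rises
```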

Reverse inference, though, can be set within two very different kinds of investigation: functional and predictive. In functional reverse inference, N is taken to be not merely evidence for C but something like its ground—its realizer or implementing mechanism. In predictive reverse inference, by contrast, N is taken to be a reliable indicator of C, but not necessarily anything more. Like other biomarkers, neural processes may be signs of cognitive processes without being directly implicated in their causation or production. This distinction matters because many decoding studies conflate the two, seeming to advance functional claims on the basis of evidence that supports only predictive claims.

A typical decoding study involves scanning participants while they perform tasks that are presumed to tap into different cognitive processes. These data are used to train a machine learning classifier whose job is to discriminate between patterns belonging to different categories, typically by finding a linear boundary between them. Training involves an out-of-sample validation procedure: the overall data set is divided into training and test sets, with the latter reserved for assessing the classifier’s performance. Classifiers are trained several times over on different subsets of the data, and their performance is measured as average accuracy across all test runs.
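
As a concrete illustration of this pipeline, here is a minimal sketch using synthetic data in place of real fMRI patterns. It assumes scikit-learn is available; the data shapes, signal strength, and classifier settings are arbitrary choices for illustration, not parameters from any actual study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500
X = rng.normal(size=(n_trials, n_voxels))   # one activity pattern per trial
y = rng.integers(0, 2, size=n_trials)       # condition labels (e.g., task A vs. task B)
X[y == 1, :20] += 0.5                       # weak synthetic signal in a subset of voxels

# Linear classifier with k-fold cross-validation: each fold holds out part of
# the data as a test set, and performance is the accuracy averaged over folds.
clf = LinearSVC(C=1.0, max_iter=10_000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```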

Classifiers trained in this way can successfully decode a range of perceptual and cognitive processes. These include visual percepts under conditions of binocular rivalry, the direction of attention to different parts of an ambiguous stimulus, the presence and degree of pain vs. heat sensations, conscious intentions to act, and property judgments such as whether an object is goal-appropriate or not. The operative assumption in these studies is that being able to decode the occurrence of a cognitive process from a region’s activity licenses reverse inferences concerning the function of that region.

This assumption faces three interconnected problems, however. The first is that classifiers are highly sensitive to activity that isn’t necessarily part of the process being carried out. One study, for example, decoded reinforcement processing in 30% of all voxels, covering “virtually every major cortical and subcortical division.” Whatever predictive value these results have, they shouldn’t be taken as sufficient to warrant functional assignments.

The second problem is that classifier performance embodies a complex set of trade-offs. In particular, depending on how certain parameters are set, greater degrees of accuracy can be purchased at the cost of more variable and less biologically interpretable weight maps. If we cared solely about accuracy this might be unproblematic. But classifier weights are sometimes taken to reflect neural “ground truths” about the regions that are responsible for the observed performance. We may misinterpret what is driving decoding if we don’t attend to these effects of parameter choice.
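
One way to see the point is to vary a single modeling parameter and compare the resulting weight maps. In the hedged sketch below (synthetic data again, with the penalty type standing in for the relevant parameter choices), L1 and L2 regularization can reach comparable accuracy while producing very different weight maps, one sparse and one dense, so the weights alone underdetermine which voxels are "really" responsible.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 500))
y = rng.integers(0, 2, size=200)
X[y == 1, :20] += 0.5                       # same kind of synthetic signal as above

for penalty in ("l1", "l2"):
    clf = LogisticRegression(penalty=penalty, solver="liblinear", C=1.0)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    w = clf.fit(X, y).coef_.ravel()
    nonzero = int(np.sum(np.abs(w) > 1e-6))
    # Similar accuracy can coexist with very different weight maps.
    print(f"{penalty}: accuracy={acc:.2f}, nonzero weights={nonzero}/{w.size}")
```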

Third, and perhaps most crucially, classifiers are not designed to track causes. This is clear from their native uses in machine learning tasks such as handwriting analysis and facial recognition, but it is equally true when they are transplanted to neuroimaging contexts. There is no inherent causal directionality built into classifiers; indeed, they work just as well in an anti-causal direction. Nor do they capture information about interventions or probabilistic dependencies, in contrast with other automated methods of hunting causes. Attempts to use them for causal analysis, many of which display considerable ingenuity, ultimately need to face up to this basic limitation.
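
A toy example makes the directionality point vivid. In the sketch below (entirely synthetic; nothing here models real recordings), the binary label is the cause and the features are its noisy downstream effects, so the classifier predicts against the causal arrow, much as a decoder does when it recovers a stimulus or cognitive state from the activity that state gives rise to. The classifier neither knows nor cares.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
y = rng.integers(0, 2, size=500)                    # the "cause" (e.g., a stimulus category)
X = y[:, None] * 0.8 + rng.normal(size=(500, 10))   # features are noisy *effects* of y

# Predicting the cause from its effects -- the anti-causal direction --
# poses no special difficulty for the classifier.
acc = cross_val_score(LogisticRegression(max_iter=1_000), X, y, cv=5).mean()
print(f"anti-causal decoding accuracy: {acc:.2f}")
```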

The upshot, then, is that while decoders can excel at inferring what cognitive processes a person is carrying out, they aren’t optimal tools for functionally interpreted reverse inference. It’s perhaps not surprising that in tandem with their adoption there has also been a movement to reorient neuroscience away from explanation and towards prediction as a primary epistemic goal. Whether we should let our tools delimit our epistemic ambitions in this way is a large question for future work.

Check the original chapter on the publisher’s website: https://link.springer.com/chapter/10.1007/978-3-030-54092-0_5
