J. Brendan Ritchie livestreams "Biological Plausibility, Mechanistic Explanation, and the Promise of 'Cognitive Computational Neuroscience'" on June 5!

We are excited about the next Neural Mechanisms webinar this Friday. As always, it is free. You can find information about how and when to join the webinar below or at the Neural Mechanisms website, where you can also sign up for the mailing list that notifies people about upcoming webinars, webconferences, and more! You can also watch prior Neural Mechanisms sessions on the Neural Mechanisms YouTube channel a few months after each session.

Biological Plausibility, Mechanistic Explanation, and the Promise of “Cognitive Computational Neuroscience”

J. Brendan Ritchie (KU Leuven, Belgium)

5 June 2020
14:00-16:00 Greenwich Mean Time
(Convert to your local time here)

Extended Abstract

In the last few years, the nascent field of "Cognitive Computational Neuroscience" (CCN) has emerged, bringing together researchers from cognitive science, artificial intelligence, and neuroscience with the shared aim of identifying the neurally implemented "computational principles" that underlie perception, action, and cognition (Naselaris et al., 2018). Realizing this aim requires developing and evaluating "biologically plausible" computational models that can account for both neural and behavioral datasets (Kriegeskorte and Douglas, 2018). However, what exactly is meant by claims of biological plausibility is often left opaque. I try to shed some light on such claims using the conceptual resources of the mechanistic approach to explanation from the philosophy of science. In particular, I propose that claims of biological plausibility are best understood as pertaining to the relative mechanistic plausibility of a model. I go on to apply this account to the evaluation of two methodological case studies: (i) so-called "model-based" approaches that relate formal models of behavior from mathematical psychology to neural signals; and (ii) deep neural networks that have been increasingly compared to both neural and behavioral datasets.
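For readers unfamiliar with the first case study: "model-based" analyses typically fit a formal learning model to choice behavior and then relate its latent trial-by-trial quantities (such as prediction errors) to a neural signal. The Python sketch below is purely illustrative and uses synthetic data; the Rescorla-Wagner parametrization, the variable names, and the single-regressor final step are assumptions for the example, not the procedure of any particular study discussed in the talk.

```python
# Illustrative "model-based" analysis on synthetic data:
# 1) fit a simple Rescorla-Wagner model to choices,
# 2) derive trial-wise prediction errors,
# 3) regress a (synthetic) neural signal on those prediction errors.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_trials = 200
rewards = rng.binomial(1, 0.7, n_trials)            # outcomes of a single option

# Simulate choices from a Rescorla-Wagner learner (stand-in behavioral data)
true_alpha, true_beta = 0.4, 5.0
v = 0.5
choices = np.zeros(n_trials, dtype=int)
for t in range(n_trials):
    p = 1.0 / (1.0 + np.exp(-true_beta * (v - 0.5)))  # probability of engaging the option
    choices[t] = rng.random() < p
    if choices[t]:
        v += true_alpha * (rewards[t] - v)             # prediction-error update

def neg_log_lik(params):
    """Negative log-likelihood of the observed choices under the model."""
    alpha, beta = params
    v, ll = 0.5, 0.0
    for c, r in zip(choices, rewards):
        p = 1.0 / (1.0 + np.exp(-beta * (v - 0.5)))
        ll += np.log(p if c else 1.0 - p)
        if c:
            v += alpha * (r - v)
    return -ll

fit = minimize(neg_log_lik, x0=[0.3, 3.0],
               bounds=[(1e-3, 1.0), (1e-2, 20.0)])
alpha_hat, beta_hat = fit.x

# Recover trial-wise prediction errors under the fitted parameters ...
v, pes = 0.5, []
for c, r in zip(choices, rewards):
    pes.append(r - v if c else 0.0)
    if c:
        v += alpha_hat * (r - v)
pes = np.array(pes)

# ... and regress a synthetic neural signal on them, as a model-based
# analysis would do with a BOLD or spike-rate regressor
neural = 0.6 * pes + rng.normal(0, 1.0, n_trials)
slope = np.polyfit(pes, neural, 1)[0]
print(f"fitted alpha={alpha_hat:.2f}, beta={beta_hat:.2f}, PE-neural slope={slope:.2f}")
```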

Neural networks now show impressive performance at increasingly interesting tasks, including perceptual tasks that are easy for humans but historically difficult for artificial systems. It has been widely claimed, however, that these neural network models are in no way explanatory for neuroscience, either because they are too unlike the brain or because they are themselves unintelligible. Less widely appreciated is a series of recent results showing that neural network models trained to classify images are also surprisingly good at predicting neural activity. In fact, these models now provide some of our best predictions of visual neural response properties, despite never being trained on any brain data at all (Yamins et al., 2014; Cadena et al., 2019).
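The core of such model-to-brain comparisons is usually a cross-validated linear mapping from network activations to measured responses, scored on held-out stimuli. The sketch below is a minimal illustration with synthetic stand-ins for both the network features and the neural data; the ridge-regression setup and per-neuron correlation score are common choices for this kind of analysis, not the specific pipelines of Yamins et al. (2014) or Cadena et al. (2019).

```python
# Illustrative encoding-model analysis on synthetic data:
# map "model features" to "neural responses" with cross-validated ridge
# regression and score predictions on held-out stimuli.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_images, n_features, n_neurons = 500, 256, 50

features = rng.normal(size=(n_images, n_features))         # placeholder network activations
true_map = 0.1 * rng.normal(size=(n_features, n_neurons))  # unknown feature-to-neuron weights
responses = features @ true_map + rng.normal(0, 1.0, (n_images, n_neurons))  # noisy "spike rates"

X_tr, X_te, y_tr, y_te = train_test_split(features, responses,
                                          test_size=0.25, random_state=0)

ridge = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, y_tr)
pred = ridge.predict(X_te)

# Per-neuron correlation between predicted and held-out responses,
# a standard summary of how well model features account for neural activity
r = [np.corrcoef(pred[:, i], y_te[:, i])[0, 1] for i in range(n_neurons)]
print(f"median held-out prediction correlation: {np.median(r):.2f}")
```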

This paper addresses two questions: Why are these models so successful at predicting brain activity? And can their predictive success tell us anything about how the brain itself works? To answer them, we first define an augmented notion of mechanistic mapping, inspired by Kaplan and Craver (2011, 2018), which we call 3M++, and show that spike-rate HCNN models of visual cortex satisfy this extended notion. Second, we draw parallels between implicitly evolutionary explanations in neuroscience and the constraint-based methodology that gives rise to HCNN models. In both cases, facts about the shape of the ethological problems that organisms face help to illuminate why functional systems have the particular features that they do.

Join the online session (up to 10 minutes before it begins) | Read the paper (forthcoming in BJPS)

Related: How to connect to Neural Mechanism Webinars
