Rosa Cao and Dan Yamins livestream “Making sense of mechanism” on May 22

We are excited about the next Neural Mechanisms webinar this Friday. As always, it is free. You can find information about how and when to join the webinar below or at the Neural Mechanisms website, where you can also sign up for the mailing list that notifies people about upcoming webinars, webconferences, and more! You can also see prior Neural Mechanisms sessions on the Neural Mechanisms YouTube channel a few months after each session.

Making sense of mechanism: How neural network models can explain brain function

Authors
Rosa Cao & Daniel Yamins
(Stanford University)

22 May 2020
at h 15-17 Greenwich Mean Time*
*14-16 Greenwich Mean Time

(Convert to your local time here)

Abstract. Neural networks now show impressive performance at increasingly interesting tasks, including perceptual tasks that are easy for humans but historically difficult for artificial systems. It has been widely claimed, however, that these neural network models are in no way explanatory for neuroscience, either because they are too unlike the brain or because they are themselves unintelligible. Less widely appreciated, however, is a series of recent results showing that neural network models trained to classify images are also surprisingly good at predicting neural activity. In fact, these models now provide some of our best predictions of visual neural response properties, despite never being trained on any brain data at all (Yamins et al. 2014; Cadena et al. 2019). This paper addresses two questions: Why are these models so successful at predicting brain activity? And can their predictive success tell us anything about how the brain itself works? To address them, we first define an augmented notion of mechanistic mapping, inspired by Kaplan and Craver (2011, 2018), which we will call 3M++, and show that spike-rate hierarchical convolutional neural network (HCNN) models of visual cortex satisfy this extended notion. Second, we draw parallels between implicitly evolutionary explanations in neuroscience and the constraint-based methodology that gives rise to HCNN models. In both cases, facts about the shape of the ethological problems that functional systems face help to illuminate why those systems have the particular features that they do.
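For readers wondering what "predicting neural activity" looks like in this literature, the sketch below illustrates the common linear-readout approach used in work like Yamins et al. 2014 and Cadena et al. 2019: fit a regularized linear map from a model layer's features to recorded spike rates, then score predictions on held-out stimuli. This is only an illustrative toy under stated assumptions, not the authors' code; the feature and response arrays here are synthetic stand-ins for real model activations and recordings.

```python
# Illustrative sketch (not the authors' code): a cross-validated "linear readout"
# from a model layer's features to recorded neural responses, scored on held-out stimuli.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 500 stimuli, 1024-dim features from some HCNN layer,
# and spike rates for 50 recorded neurons (both arrays are synthetic here).
features = rng.normal(size=(500, 1024))
responses = features @ rng.normal(size=(1024, 50)) * 0.01 + rng.normal(size=(500, 50))

X_train, X_test, y_train, y_test = train_test_split(
    features, responses, test_size=0.2, random_state=0
)

# Fit one ridge-regression readout per neuron (Ridge handles multi-output targets).
readout = Ridge(alpha=1.0)
readout.fit(X_train, y_train)
y_pred = readout.predict(X_test)

# Score: Pearson correlation between predicted and held-out responses, per neuron.
per_neuron_r = [
    np.corrcoef(y_pred[:, i], y_test[:, i])[0, 1] for i in range(y_test.shape[1])
]
print(f"median held-out correlation: {np.median(per_neuron_r):.3f}")
```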

Join the online session (up to 10 minutes before it begins) | Read the paper

Related: How to connect to Neural Mechanism Webinars

Back to Top