The Brain Abstracted – Overview and Précis

This week the Brains Blog is hosting a symposium on Mazviita Chirimuuta’s new book The Brain Abstracted: Simplification in the History and Philosophy of Neuroscience (Open Access: MIT Press). Today’s post from Chirimuuta provides a précis and overview of the content of the book. Throughout the week, we will have another four posts from Chirimuuta summarizing central arguments within the book, as well as four commentary posts from Mark Sprevak (University of Edinburgh) on Tuesday, Carrie Figdor (University of Iowa) on Wednesday, Dimitri Coelho Mollo (Umeå University) on Thursday, and Tina Röck (University of Dundee) on Friday.

-Trey Boone, Associate Editor

The Brain Abstracted tackles the question of how we should interpret neuroscience for the purposes of doing philosophy of mind. Neurophilosophy rests on the premise that the findings presented in the theories and models of neuroscience are directly relevant to longstanding philosophical topics such as the nature of perception and agency. Yet insufficient attention has been paid to the challenge of brain complexity and to how it fundamentally shapes neuroscientific practice. Given that all models and theories in neuroscience are highly simplified, relying on numerous abstractions and idealisations, as well as on experimental controls that reduce the complexity of the datasets elicited, it is reasonable to worry that such results may not be informative about the inherent natures of the neural processes associated with the cognitive kinds that interest neurophilosophers. To make a frivolous comparison, if science had only delivered models of free fall under conditions of zero air resistance, such findings would be of no relevance to people contemplating the floating and gliding phenomena observable in the skies around them.

That is the worst-case scenario for neurophilosophy: that neuroscience has not even begun to investigate the aspects of the brain underlying the mental capacities of interest to philosophers. However, that’s not the picture I present in the book; I don’t think the situation is quite so drastic. Instead, the central portion of the book offers case studies on the various simplifying strategies that neuroscientists, past and present, have employed in order to address the challenge of brain complexity; the first two chapters situate the topic historically, and in the final three chapters I draw out the philosophical implications of the case studies.

Here is a summary of the chapters:

Chapter 1 (Introduction) describes the challenge of brain complexity. The human brain is perhaps the most complex object that scientists have attempted to investigate. Among the features that make it so complex are the number and heterogeneity of its parts (both neurons and the glial cells that we often forget), its tendency to change over time due to plasticity operating at many scales, and the fact that order and pattern can be found at many scales in neural systems, meaning that there is no one privileged level of investigation. Three broad simplifying strategies are introduced in relation to their development in the history of the physical and biological sciences: mathematization, reduction, and the forming of analogies.

Chapter 2 (Footholds) presents the philosophy of science framework to be employed in the rest of the book. It is argued that traditional scientific realism neglects the efforts that life scientists put into preparing their objects of investigation so that the results obtained are as intelligible and consistent as possible. Scientific realists assume that the simple, stable relationships presented in theories and models are just discoveries made through careful experimentation. Instead, for complex biological systems, we should understand these relationships as being to some extent generated by the process of investigation, which employs simplifying strategies such as the use of controlled experimental conditions and idealisation in modelling. The alternative to traditional scientific realism is called haptic realism because it emphasises that scientific knowledge is the result of researchers’ active meddling with their objects of investigation. Like our hands, scientific models are both channels for knowing about things in the external world and the means by which we manipulate those things.

Chapter 3 (The Reflex Theory) begins the series of case studies. This theory, dominant in the late nineteenth and early twentieth centuries, supposed that brain processes could be fully explained in terms of sensory-motor arcs like those recently shown to underlie spinal reflexes such as the knee jerk. This was a highly reductionist approach, which appealed to many researchers because it was thought comparable to the successful analytical approaches of classical physics. However, many contemporary critics argued that it grossly underestimated the complexity of the brain and nervous system and was for that reason inadequate.

The reflex theory was superseded by the computational theory of the brain in the mid-twentieth century. Chapter 4 (Your Brain Is Like a Computer) gives an account of the rise of computationalism, attributing it to the new simplifying strategy made available by the analogy between brains and relatively simple computing machines. I go on to argue against standard interpretations of these models as literally representing computations that occur in the brain. (This will be the subject of tomorrow’s post.)

Chapter 5 (Ideal Patterns and “Simple” Cells) is a detailed case study of the interaction between simplifying strategies within modelling and experimental design. I examine classic models of primary visual cortex and discuss how more recent approaches have attempted to encompass more of the actual complexity of this brain region by using ethological experimental methods.

Chapter 6 (Why “Neural Representations”?) argues that talk of basic sensory responses being representations of external objects is justified when considered as a simplifying strategy. (This will be the subject of Thursday’s post.)

Chapter 7 (The Heraclitean Brain) focusses on the complex changeability of the brain with a case study of research on motor cortex. It is argued that recent dynamical systems models deal with the dynamism of cortical physiology through a reduction of change to a set of fixed laws and parameters held to govern the system.

The final three chapters consider the wider philosophical implications of the centrality of simplification within neuroscience. Chapter 8 (Prediction, Comprehension, and the Limits of Science) makes the case that there are limits to what neuroscientists can be said to know about the brain because, strictly speaking, most of the results concerning the neural basis of cognition hold only under the controlled conditions of the laboratory. (This will be the subject of Friday’s post.)

Chapter 9 (Revisiting the Fallacy of Misplaced Concreteness) shows that the computational theory of mind commits the error of neglecting the difference between highly abstract models and the complicated, concrete neural systems they target. (The argument against Strong AI that follows from this observation is the subject of Wednesday’s post.)

Finally, Chapter 10 (Cartesian Idealization) argues that the mind-body dualism that still bedevils the computational theory of mind has its roots in the fact that positing a clean separation between brain, body and external environment is itself a simplifying strategy. The final statement of the book is that the norms of rigour in quantitative modelling may force scientists towards this idealising assumption, but in such cases philosophers would do well to work autonomously from the science in order to avoid the dualistic consequences.
