Mazviita Chirimuuta, Your Brain Is Like a Computer: Function, Analogy, Simplification

Mazviita Chirimuuta (Edinburgh) is the author of this second post in the book symposium on the edited volume Neural Mechanisms: New Challenges in Philosophy of Neuroscience (Springer 2021).


Science is a project of domestication in which the wild forces of nature are tamed and set to work for human advantage. We need not dwell on the Baconian ideology expressed in the metaphors of “taming” and “setting to work”, and in the very opposition of the human and the natural that is here presupposed. Let us just attend to the word domestication – making something homely, the home (domicile) being the residence of a family, and all that is most familiar. Mary Hesse wrote that analogies in science serve, “to enable the new and unfamiliar to be thought about and described in terms of the familiar” (1955, 353). Scientific analogies domesticate what is wild and incomprehensible in nature by fixating on its similarities with what is to us – or rather, to the scientist – more familiar and better understood.

Kevin Lande argues in this Aeon article [https://aeon.co/essays/your-brain-probably-is-a-computer-whatever-that-means] that the claim that the brain is a computer is either literally true, or a mere metaphor and therefore false. In my chapter, “Your Brain is Like a Computer: Function, Analogy, Simplification,” I argue for a third kind of construal, presenting an analogical interpretation of computational models in neuroscience. This is depicted in the figure below. To say that a model should be interpreted analogically is to say that the target (i.e. the neural system) is like the model in some way, in a manner partly dependent on the interests of the scientists and the techniques they employ. I argue that the central epistemic benefit of the brain–computer analogy is that it permits scientists to draw a distinction between the aspects of neuro-anatomy and physiology that are “for information processing”, as opposed to “mere metabolic support”, thus licensing a radical abstraction away from most of the complexity of neural systems.

My view borrows liberally from the work of Mary Hesse, Georges Canguilhem, and Immanuel Kant, all of whom deserve more attention from philosophers of neuroscience! I make my case in opposition to a literal interpretation in which a neural system (e.g. the primate ventral stream) is said to have certain computational properties in common with a man-made computing system (e.g. a deep convolutional neural network trained to classify images), though the two differ in their material properties (substrates). The ventral stream is thus said, literally, to implement some computation – probably not identical to the one running in the deep network, though it is taken as a research goal for computational neuroscience to discover the actual algorithms of the brain. As depicted above, my account of analogy does not presuppose there to be properties that brains and computers have in common, just by themselves. Instead, the similarities are perspectival and dependent on the aims of the researcher. In the process of experiment and modelling, the neuroscientist arrives at an ideal (as opposed to real) pattern, applicable both to the neural and digital system.

This is a departure from Hesse’s account of analogy which posited a basis of properties shared between an analogy source or model, and the target system, the “positive analogy.” My departure goes in the direction of a Kantian humility which Hesse herself related to her work on scientific models. “It is plausible to interpret [Kant’s] attitude to theoretical science,” she wrote, “in terms of possible models which are ways in which we think about the world, but which may not and sometimes cannot be literal descriptions of the world, and in any case can never be known to be such descriptions” (Hesse 1961, 172).

A selling point of the analogical interpretation is that it frees computational neuroscience from having to commit to any of the philosophically contested theories of implementation (criteria for a physical system implementing a given computation), and from the need to turn the claim that the brain is a computer into a well-defined scientific hypothesis (which, as Lande emphasises, is a very hard task). A recommendation of the interpretation is that philosophers and scientists should pay more attention to what is disanalogous between computers and brains. This is one of many points expressed in Canguilhem’s multi-faceted writing on the machine–organism analogy (1965/2008). If one considers the cognitive capacities that the computational framework has most trouble explaining, such as consciousness, and those that the most advanced computer programmes have most trouble replicating, e.g. one-shot learning (Lake et al. 2017), it may well be that the (proverbial) devil is in the (actual) detail – those undomesticated complexities left out in the cold by the brain–computer analogy.

Canguilhem, Georges. 1965/2008. “Machine and Organism.” In Knowledge of Life, edited by Paola Marrati and Todd Meyers. New York: Fordham University Press.

Hesse, Mary. 1955. “Action at a Distance in Classical Physics.”  Isis 46 (4):337-353.

Hesse, Mary. 1961. Forces and Fields: A Study of Action at a Distance in the History of Physics. New York: Philosophical Library.

Lake, Brenden M., Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. 2017. “Building machines that learn and think like people.”  Behavioral and Brain Sciences 40.

Check the original chapter on the publisher’s website: https://link.springer.com/chapter/10.1007/978-3-030-54092-0_11

One comment

  1. Tommy Cleary

    Hi Mazviita,

    I like this approach and wonder whether you see this type of question as confounding or liberating for a philosophy of neuroscience?
    “The brain is like a computer” is an interesting problem to suppose, while it is limited by the current phenomenological encounter of brains in vats interacting with each other, and even with one’s own self, through an embodied extension of personal computers and vast social networks of computers & epistemic algorithms shaping presences and absences, both sensory and intellectual.

    Just as text/logos itself has long become a part of the organic cyborg of the lived encounter of being a brain, so too, living in an everyday where computing is embedded in the lives of brains, the context of the question has changed since it was first proposed, such that the hermeneutical process itself that helps separate the question of literal and analogical interpretations is confounded and at the same time set free.

    In gesturing in this direction I would like to propose that the wild is finding its way back in…and we are living it.

    Just as at one time to ask what it is like to be a bat highlighted the shared and unique aspects of mind & brain…literal or analogical our embodied saturation with the lived encounter of computers and computing extends this philosophical encounter beyond its original premise. Something has changed, and something has stayed the same.
    What is transformative is that (literal or analogical) even small but widespread changes in what it is meant by a computer have an enormous impact on what it means to be a brain even as brains are constantly pushing the limits of what it means to be a computer, as both processes are entwined with the telos of seeking other and seeking encounters of phenomenological significance.

    Both logos and computers as a means for this wild telos I see as a continuum more than a transcendent rupture.

    How do you see this mind/brain/computer predicament affecting the context of your question?

    Kind regards
    Tommy

