The Details Might Just Matter

Post 3 of 5 from Mazviita Chirimuuta on The Brain Abstracted (Open Access: MIT Press).

In the previous post I described how the literal interpretation of neurocomputational models encourages us to attend only to the perceived commonalities between brains and computers and to relegate the differences to the periphery of our attention. That is how the computational framework provides a powerful simplifying strategy for neuroscience: it licenses researchers to abstract away from the biological details of neural systems and to focus on quantitative relationships that help explain cognition by showing how those relationships, when instantiated in an artificial device, would enable the machine to perform certain tasks that resemble biological cognition. For example, in recent years deep convolutional networks have been said to offer explanations of how activity in the ventral stream of the primate visual cortex serves object recognition. The artificial neural network (ANN) is a highly abstract representation of a biological neural network (BNN). If the ANN can be demonstrated to perform a task, such as learning to associate image inputs with the correct words for the objects in them, it may be asserted that the ANN is an adequate model of biological object recognition and that we can explain this cognitive capacity by considering only the subset of neural properties that are shared with the ANN. Much research in this area has gone into isolating precisely those similarities, for example with respect to the receptive fields of visual neurons and artificial network nodes.
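To fix ideas, here is a minimal sketch of the kind of convolutional network at issue, written in PyTorch. The architecture and the name TinyVentralNet are my own illustrations, not any specific model from the literature; the networks actually compared with the ventral stream are far larger. The point is that everything such a model shares with the brain is visible here – layered nodes with local receptive fields and a learned mapping from images to labels – while everything else about living neural tissue is absent.

```python
# Illustrative sketch only: a toy convolutional network of the kind used
# as an abstract model of ventral-stream object recognition. The name and
# architecture are hypothetical; real models are far deeper.
import torch
import torch.nn as nn

class TinyVentralNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # A cascade of convolutional layers: each node has a local
        # "receptive field", the property most often compared with
        # the receptive fields of biological visual neurons.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # A linear readout maps the final features to object labels,
        # standing in for the association of images with words.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)             # hierarchical feature extraction
        return self.classifier(h.flatten(1))

# Usage: classify a batch of 32x32 RGB images into 10 object categories.
model = TinyVentralNet(num_classes=10)
images = torch.randn(4, 3, 32, 32)       # stand-in for image inputs
logits = model(images)                    # scores over object labels
print(logits.shape)                       # torch.Size([4, 10])
```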

This explanatory strategy has been possible for the cognitive capacities that have been adequately duplicated in artificial systems. With advances in deep learning, the list of these has been growing impressively, and now includes language production and speech recognition – two paradigmatically human cognitive tasks. Could ANNs also contribute to the explanation of consciousness? There has been speculation from certain quarters that ANNs such as large language models (LLMs) may be conscious. I do not think that such claims are well founded. In chapter 9 (Revisiting the Fallacy of Misplaced Concreteness) I argue instead that we should not expect ANNs or other computational approaches to provide an explanation of consciousness, because the computational framework has an in-built explanatory gap.

The “fallacy of misplaced concreteness” is the somewhat inelegant term, coined by A. N. Whitehead, for the error of mistaking an abstraction for the original, concrete item that it abstracts away from. Because abstractions, being relatively simple, are cognitively easier to deal with, scientists (and others) can become accustomed to considering only the abstraction, forgetting that there are details left behind in the concrete object that may well be important. I argue that this is the temptation of the literal interpretation of neurocomputational models: to focus only on the perceived commonalities between brains and computing machines and to neglect all the features that brains (and animals with brains) have and computers don’t.

The explanatory resources of the computational framework are in effect limited to this set of commonalities. It is to be noted that consciousness, being a feature that animals but not computers possess, does not fall within the scope of the commonalities. The strategy described above, for the computational explanation of object recognition, depends, first, on the ANN performing a task similar to the one performed by the animal and, second, on relevant information being available about how the computer is able to perform that task. Neither of these conditions applies to the case of consciousness: current computers are not conscious, nor is computer science (in the broadest sense) able to tell us how a computational device could come to have this feature. This is why I say there is an in-built explanatory gap within the computational framework.

The suggestion that then arises is that we should look for explanations of consciousness in terms of the features possessed only by animals and not by computing machines. The computational framework has encouraged many to think of these features – in effect, all the biological properties of brains that computers, being non-living artifacts, must lack – as background support structures that are separable from the truly cognitive (i.e. computational) processes of the brain. Various lines of research in neurobiology, however, tell us that a clean division between cognitive and non-cognitive properties of brain tissue cannot be drawn. If that is so, the fact that the brain is made of living, metabolising cells could well be crucial to how the brain serves consciousness. Consciousness may be medium dependent.

The common view, fostered by the computational framework, is that consciousness and other aspects of cognition are medium independent: the same algorithm can in principle run on a range of different material substrates, so when considering the computational features of a system it is appropriate to ignore the details of the substrate. This works for digital computers, where the division between hardware and software is there by design. With the neurocomputational framework, however, the analogical transfer of principles of computer engineering to biology gives us only a partial and highly simplified vision of how the brain operates. It is therefore unwise to ignore the details of material realisation in the living body, however convenient it is to abstract away from them.
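As an illustration of what medium independence amounts to in the computational case, consider the following sketch (my own, not from the book): the XOR algorithm is specified once, in terms of an abstract NAND operation, and can then be realised in two quite different “substrates”. Both substrate functions are hypothetical stand-ins for physically different devices.

```python
# Illustrative sketch of medium independence: one algorithm, two media.

def xor(a, b, nand):
    """XOR defined abstractly as four NAND gates; substrate unspecified."""
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# Substrate 1: Python booleans (a "digital" medium).
def nand_bool(a: bool, b: bool) -> bool:
    return not (a and b)

# Substrate 2: analogue voltages with a threshold (a different medium,
# where "high" is 5.0 volts and "low" is 0.0 volts).
def nand_volts(a: float, b: float) -> float:
    return 0.0 if (a > 2.5 and b > 2.5) else 5.0

# The same algorithm yields the same input-output behaviour in both media.
print(xor(True, False, nand_bool))   # True
print(xor(5.0, 0.0, nand_volts))     # 5.0, i.e. "high" / True
```

On the computational picture, the substrate-level details drop out of the explanation of what the system computes; the argument of chapter 9 is that, for consciousness, the details of the brain’s living medium may not drop out in this way.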
