The Physical Signature of Computation (PSC) builds on previous publications by Gualtiero Piccinini that have shaped the debate over the nature of information processing in the brain. I will focus on this topic, the subject of Chapter 9, and discuss how PSC inherits some of the flaws embedded in the earlier work, especially the assumption that the majority of signalling in the brain is medium independent. In a comment on Piccinini’s previous book, Neurocognitive Mechanisms, I made the empirical case that signalling in the brain,[1] including spiking, is dominantly medium dependent (Chirimuuta 2022). PSC does not, unfortunately, take the weight of this empirical evidence on board. But rather than repeating myself, I would like here to state an in-principle case for why neural signalling is not medium independent.
Digital computation is the paradigmatic medium-independent form of computation:
“The rules defining digital computations are defined in terms of strings of digits and internal states of the system, which are simply states that the physical system can distinguish from one another. No further physical properties of a physical medium are relevant to whether they implement digital computations. Thus, digital computations can be implemented by any physical medium with the right degrees of freedom” (Piccinini 2015, 123).
In order for a physical computational or signalling system to be medium independent, there must be a categorical demarcation between the set of its properties that map onto the specified code and those that are irrelevant to the code. In an electronic computer implementing a binary code, the relevant properties are the two specific voltage ranges that map onto 0 and 1; all else is irrelevant to the actual computation.
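To make this demarcation concrete, here is a minimal illustrative sketch in Python; the voltage thresholds are hypothetical, loosely modelled on conventional logic levels, and the function name is mine rather than anything drawn from PSC. The only code-relevant question the mapping asks of a physical state is which of two designated voltage ranges it falls into; temperature, component age, and every other physical property of the medium are simply absent from it.

```python
# Illustrative sketch only: the thresholds are hypothetical stand-ins
# for the two voltage ranges a binary electronic code designates.
LOW_RANGE = (0.0, 0.8)    # volts read as the digit 0
HIGH_RANGE = (2.0, 5.0)   # volts read as the digit 1

def digit_of(voltage: float) -> int | None:
    """Map a measured voltage onto a binary digit.

    Only membership in one of the two designated ranges is
    code-relevant; temperature, humidity, the age of the component,
    etc. never enter the mapping, provided they do not push the
    voltage out of its range.
    """
    if LOW_RANGE[0] <= voltage <= LOW_RANGE[1]:
        return 0
    if HIGH_RANGE[0] <= voltage <= HIGH_RANGE[1]:
        return 1
    return None  # out of range: the state realises no digit

# Any medium with two reliably distinguishable states could be
# substituted without altering the code this mapping defines.
print([digit_of(v) for v in (0.3, 4.7, 1.4)])  # -> [0, 1, None]
```

On this picture, swapping the electronic medium for any other that offers two reliably distinguishable states leaves the code untouched, and it is exactly this substitutability that, as the next paragraphs argue, comes at a price.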
In an artefact such as a digital electronic computer, this demarcation is built in by design. For example, within the range of conditions that a user of the machine can reasonably expect, fluctuations in the temperature of the components, changes in atmospheric pressure, and the like will not affect the voltages of the transistors in such a way that these code-relevant physical states become indistinguishable from one another. However, enforcing this demarcation is costly in terms of the resources taken to build and run the machine. It entails that most of the degrees of freedom of the physical system are irrelevant to the computation and must be held steady or kept from affecting the processes that do map onto the abstract computation. This means that most of the physical properties of the system are ‘wasted’ (i.e. not utilised for information processing), and resources have to be spent on keeping those irrelevant properties of the machine from affecting the computationally relevant ones, for example by building and running fans to prevent electronic computers from overheating.
Medium independence is in principle never the thrifty solution to information-processing needs precisely because, for any given physical system, it does not maximise the information-processing potential of that system. This principle is nicely explained in Sarpeshkar’s (1998) comparison of the resource costs of digital vs. analogue computation (see discussion in Chirimuuta 2018). I will bracket here Piccinini’s contestable position that analogue computation is medium independent (Maley forthcoming).
No-one can dispute that resource (especially energy) consumption is a strict constraint on the ‘design’ of neural systems. It is thus, in principle, to be expected that evolution would have arrived at medium-dependent rather than medium-independent solutions. Currently, the starkest difference between biological cognition and AI technology (i.e. artificial neural networks, ANNs) lies in the scale of the energy demands needed to train and operate ANNs, which to date run on parallelised digital processors. This gap is a vivid illustration of the difference between medium-dependent and medium-independent information processing.
An analogy will help convey how this difference comes about. Imagine that the positions of members of a flock of sheep on a hillside are to be used to convey information about something or other. Strategy 1 (analogous to medium independence) is to build a network of paths bounded by electric fences such that the positions of the sheep are constrained by the pre-stated demands of the code. The degrees of freedom of the sheep are reduced through an input of resources – the material and energy needed to build and power the electric fences. At the same time, the code is medium independent because the sheep could be replaced by any kind of object that could take up a position on the path, thereby maintaining the same structural features required for the code. The strategy does not attempt to leverage the specific features of the sheep or the hillside when designing and implementing the code.
Strategy 2 (analogous to medium dependence) lets the sheep roam freely but slowly adapts a sender-receiver convention such that the unrestrained movement of the sheep conveys the messages appropriately. This strategy does not limit the degrees of freedom of the sheep and, importantly, it does not entail a separation between code-relevant and code-irrelevant properties of the sheep and hillside, since the flock positions will be causally influenced by an innumerable range of factors such as terrain, weather, and the hunger of the sheep. There is therefore no demarcation between code-relevant and code-irrelevant properties of this whole system. The point is that, by exploiting the unrestricted degrees of freedom of the sheep, the code conveys more information, more cheaply, than is possible with the other strategy. This is the slowly adaptive, improvisational, and thrifty way that evolution works. But you cannot swap ‘vehicles’ (e.g. replace sheep with goats) without changing what and how the information is conveyed, given that the unconstrained behaviour of the different kinds of animals will be quite different.
What this analogy seeks to make clear is that evolution has ensured the energy efficiency of information processing in the brain by working with, not against, an unrestricted number of the physical properties of the material that the brain is made of. This is not compatible with the principles of medium independence. For example, Anderson and Piccinini write that,
“medium-independent properties can occur at any spatiotemporal scale. … [M]edium-independent properties can involve vehicles and components of any size at any temporal scales compatible with the right degrees of freedom.” (PSC, 246)
It is completely implausible that biological cognition operates in this way. Resource efficiency is only ever achieved by leveraging the nuanced, scale-dependent properties of a physical system. If a code maps onto physical properties in such a way that it can be implemented at any scale, it will not be an energy-efficient solution at any one of those scales.
In order to better understand the brain, we should move beyond notions inherited from digital computation, such as medium independence, and come instead to conceive of information processing in the brain as the result of a slow adaptive process in which the pre-existing physical properties of biological cells were leveraged to convey specialised codes. To draw again on the above analogy, this will have involved a certain amount of resource expenditure to influence the paths of the sheep (like altering the energy landscape of the hillside by putting in bumps and ditches here and there), and to increase the travel speed of the sheep themselves; note that spiking along neuronal axons is believed to be an adaptation to the demand for fast signalling.
Finally, I want to return to the point that with medium-dependent signalling we may lack a clear demarcation between the code-relevant and code-irrelevant properties of the physical system. What this entails is that there will not be one level of abstraction at which we can represent all the code-relevant properties: any abstraction to a certain set of properties will exclude some of the code-relevant ones. This is the opposite of how things stand with a medium-independent system. As Anderson and Piccinini write,
“Physical computation is a process that becomes evident once the appropriate abstraction (omission of detail) is performed …. Abstraction is a feature of all physical descriptions…. The relevant question is which abstraction is needed to capture the physical signature of computation. The needed abstraction is the selection of a system’s relevant physical states and state transitions to be mapped onto computational states and state transitions. With the right mapping in place, the property that manifests in genuine physical computing systems is the dynamical structure that enables physical-computational equivalence, which is a medium-independent property” (PSC, 244).
An abstract model of a medium-independent computing system has the unique privilege of potentially capturing all the relevant properties, despite its omission of details.
In my comment on Neurocognitive Mechanisms I described how a medium-dependent signalling system could usefully be considered in terms of Haugeland’s (1981/1998) second kind of analog system, one in which no simulation allows complete and accurate prediction of the system’s behaviour (Chirimuuta 2022). Piccinini’s (2022) reply missed the point. The argument is about the limitations of abstraction when applied to biological systems, not about whether scientists can build simulations of them. Of course, any material object can be modelled in a computer simulation, and the simulation will be an abstraction. Haugeland’s point is that an abstract simulation of a physiological network will have to leave out some details, and we cannot safely assume that those details will be irrelevant to the prediction we need to make (e.g. on the toxicity of a novel drug).[2] I am transferring this point to the case of medium-dependent signalling in the brain. The safe assumption is that there are no code-irrelevant properties, strictly speaking, which means that the abstraction is at best a partial representation. As Anderson and Piccinini acknowledge, partiality of abstraction is the norm in science. The brain is not the exception.
I would like to thank Corey Maley and Mark Sprevak for their insightful comments on these ideas.
REFERENCES
Chirimuuta, M. 2018. “Explanation in Computational Neuroscience: Causal and Non-causal.” British Journal for the Philosophy of Science 69 (3): 849–880.
Chirimuuta, M. 2022. “Comment on Neurocognitive Mechanisms by Gualtiero Piccinini.” Journal of Consciousness Studies 29 (7–8): 185–194.
Haugeland, John. 1981/1998. “Analog and Analog.” In Having Thought: Essays in the Metaphysics of Mind, 75–88. Cambridge, MA: Harvard University Press.
Maley, Corey J. Forthcoming. “Declaring Independence from Medium Independence.” Mind and Language.
Pappalardo, Francesco, Giulia Russo, Flora Musuamba Tshinanu, and Marco Viceconti. 2019. “In Silico Clinical Trials: Concepts and Early Adoptions.” Briefings in Bioinformatics 20 (5): 1699–1708.
Piccinini, Gualtiero. 2015. Physical Computation: A Mechanistic Account. Oxford: Oxford University Press.
Piccinini, Gualtiero. 2022. “Neurocognitive Mechanisms: Some Clarifications.” Journal of Consciousness Studies 29 (7–8): 226–250.
Sarpeshkar, R. 1998. “Analog versus Digital: Extrapolating from Electronics to Neurobiology.” Neural Computation 10: 1601–1638.
[1] In what follows I talk of “signalling” and “information processing” in the brain in order to leave open the question of whether or not the brain is performing computations.
[2] In his response to me, Piccinini (2022) misleadingly suggests that simulated drug trials are now being proposed to fully replace in vivo trials. This is not what is claimed in the references he cites (e.g. Pappalardo et al. 2019). Haugeland’s (and my) argument is not that simulations of physiology have no use, but that those abstractions could never be made with enough confidence to remove the need for actual drug trials.
Thanks to Mazviita Chirimuuta for her commentary. We will respond in a separate post, to be published on the Brains blog as soon as it’s ready.
— Neal Anderson and Gualtiero Piccinini