The Physical Signature of Computation is the most “robust” mapping view that’s ever hit the market. It is impressive in its detail and in the careful attention paid to its characterization of both the physical system and the formal computational description—a true service to the philosophical literature. The book promises a “unified account of artifact and biological computation,” but here’s where things take a turn: after handling artifacts, the Robust Mapping Account fades from view and the Mechanistic Account of Physical Computation (in its neurocognitive form) emerges as the way to capture biological computation. Having a unified account, then, requires two distinct approaches to understanding the nature of physical computation. This raises questions about whether the Robust Mapping Account can be extended to neural computation and, more importantly, about whether the two views are even compatible given the history of the Mechanistic Account and its underpinning framework.
Importantly, both accounts (the Robust Mapping Account and the Mechanistic Account) have been sold to us as independent answers to the same questions. In other words, they are competitors on the market, each offering a distinct answer to what has been called the “individuation question”:
Individuation: What is the difference between physical systems that compute and those that don’t?
This question has been significantly bound up with what Chalmers (1994) calls the “implementation question.”
Implementation: When does a physical system implement a computational structure?
This feature of the literature—the blending of these questions—has been a source of deep frustration for me over the years. Not only are these questions bound up with one another, but the resulting picture is that a view like the Mechanistic Account has the resources to answer the implementation question, when it does not seem to. The debate has carried on this way for quite some time. With this new book, however, the intermixing of these questions has taken a turn for the worse: two answers to the same questions have been given. While there are some differences between the Mechanistic Account proper and the Neurocognitive Mechanistic Account, the mechanistic picture that underpins both views remains the same; the Neurocognitive Mechanistic Account is an application of the Mechanistic Account within a specific context, and the two views cannot be cleanly pulled apart.
Because the Robust Mapping Account is framed as answering the implementation question (bound up with the individuation question) and the Mechanistic Account is framed as answering the individuation question (bound up with the implementation question), the authors might consider rethinking the questions they intend to answer. Otherwise it is difficult to understand how these views can be brought together, what questions we are supposed to take them as answering, and whether mappings are cut out to handle neural computations at all.
The Mechanistic Account does something quite radical when it comes to physical computation. It proposes that we can understand the nature of physical computation by looking to mechanisms rather than by providing a mapping view or a semantic account. I describe this view as the “strong mechanistic account” (Williams, 2023). It is “strong” because it is wholly mechanistic, and it is this wholly mechanistic picture that makes it so radical.
One of the features that makes the Mechanistic Account wholly mechanistic is its reliance on the concept of medium independence understood as a mechanism sketch. Medium independence is the idea that computational processes “can be defined independently of the physical media that implements them… That is, computational descriptions of concrete physical systems are sufficiently abstract as to be medium independent” (Piccinini, 2015, p. 122). This way of thinking about abstract computational descriptions takes them to be abstract in the sense of abstract-as-omission: “I will call a description more or less abstract depending on how many details it omits in order to express that a system possesses a certain property” (Piccinini, 2015, p. 9). What is very important to notice here (and what makes it radical within the physical computation literature) is that an abstract description—a medium-independent description—is not a formal description, not an abstract-as-abstracta description; instead, it is a stripped-down description of one and the same thing: the behaving system in all its complexity. Not much changes, as we will see, when this account is developed further to capture neural computation.
Now that we’ve established that the Mechanistic Account targets physical mechanisms exclusively by giving an abstract-as-omission account of what it is to be a computational mechanism, I want to be clear as to why it does not answer the implementation question. Something important to recognize is that medium independence does not allow us to “abstract away” to the formal computational description. Put differently, you cannot go from an abstract-as-omission description to an abstract-as-abstracta description. Put differently yet again: there is no stepwise ladder from the mechanism to the formal computation. This should come as no surprise. The very nature of the implementation question is that it asks for a theory that draws the relation between these two types of descriptions—if there were such a ladder, a mapping view would be unnecessary.
Unlike the Mechanistic Account, the Robust Mapping Account does give us a bona fide answer to the implementation question. In Chapter 2, the authors describe—in great detail—how we should understand physical descriptions. “Physical descriptions are descriptions of the configuration and dynamics of physical systems, defined as the carriers of physical properties… physical descriptions are articulated within fundamental physical theories” (p. 44). Later in the chapter (Section 2.4), the authors describe the formal definition of computing systems. They say, “computational descriptions are articulated within well-defined computational formalism… [a]n example of a computational formalism is that of finite state automata…” (p. 54). FSAs are a paradigmatic example of an abstract-as-abstracta formal computational description. These two descriptions—the physical system and the formal computation—each make up one side of the implementation relation. Most of the book is devoted to describing how this relation should obtain. The book makes good on its promise, and I think it does a very nice job.
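To make the contrast concrete, here is a minimal sketch of what an abstract-as-abstracta description looks like: a finite state automaton written down purely formally, with no mention of any physical medium. The particular states, alphabet, and transition table are invented for illustration and are not drawn from the book.

```python
# A purely formal definition of a finite state automaton (FSA): nothing here
# refers to voltages, neurons, or any other physical medium. The states,
# alphabet, and transitions are invented for illustration.
fsa = {
    "states": {"S0", "S1"},
    "alphabet": {"0", "1"},
    "start": "S0",
    "accepting": {"S1"},
    # transition function: (current state, input symbol) -> next state
    "delta": {
        ("S0", "0"): "S0", ("S0", "1"): "S1",
        ("S1", "0"): "S1", ("S1", "1"): "S0",
    },
}

def accepts(fsa, string):
    """Run the FSA on an input string and report whether it ends in an accepting state."""
    state = fsa["start"]
    for symbol in string:
        state = fsa["delta"][(state, symbol)]
    return state in fsa["accepting"]

print(accepts(fsa, "1011"))  # True: this automaton accepts strings with an odd number of 1s
```

The implementation question asks what it takes for a physical system to stand in the right relation to a definition like this one; the physical description makes up the other side of that relation.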
In Chapter 9 the authors ask whether neurocognitive systems perform physical computations. In this chapter, however, they bring in the Mechanistic Account along with work done in Neurocognitive Mechanisms (Piccinini, 2020). After some discussion, the authors arrive at the conclusion that the narrow teleofunctions of neurocognitive systems are to yield outputs that stand in certain mathematical relations to inputs and internal states… and that the aspects of spike trains that make the most functional difference are rate of transmission and timing. They argue that rate of transmission and timing are multiply realizable properties, that therefore the vehicles of neural processes are multiply realizable in the right way, and that since the vehicles are multiply realizable, they are medium-independent. This leads to the claim that “biological systems whose processes are medium-independent and manipulate vehicles in accordance with a rule are biological computing systems” (Anderson & Piccinini, 2024, p. 250). Let’s unpack this.
On this account, multiple realizability is a relation between higher-level and lower-level aspects, where higher-level properties are an aspect of their lower-level realizers. In his 2020 book, Piccinini argues “that MR occurs when the same type of higher-level property is an aspect of different types of lower-level properties that constitute different mechanisms” (Piccinini, 2020, p. 39). Contra Fodor (1965; 1968) and Putnam (1960), multiple realization, in this context, is understood in terms of abstract-as-omission descriptions. This is because an “aspect” for Piccinini is “just a part of a property.” Thus, the relation between properties is construed in terms of “parthood,” i.e., part-whole relations (Piccinini, 2020, p. 26). To connect this to what is said above: when the authors state that rate of transmission and timing are “multiply realizable properties,” they mean this in the abstract-as-omission sense, which is why they can also say that these features are medium-independent.
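To make the talk of multiply realizable vehicles concrete, consider a toy illustration (the numbers below are invented, and this is my gloss rather than anything from the book): firing rate and spike timing can be read off a list of spike timestamps alone, with every constitutive detail of the tissue that produces them omitted.

```python
# Toy illustration (invented numbers): rate and timing are specified from
# spike timestamps alone; nothing about ion channels, membranes, or any other
# constitutive detail of the realizing tissue enters the description.
spike_times_ms = [12, 37, 61, 88, 112]        # spike timestamps in milliseconds

window_ms = 120
rate_hz = len(spike_times_ms) / (window_ms / 1000.0)   # firing rate over the window
intervals_ms = [t2 - t1 for t1, t2 in zip(spike_times_ms, spike_times_ms[1:])]

print(f"rate: {rate_hz:.1f} Hz")                      # 41.7 Hz
print(f"inter-spike intervals (ms): {intervals_ms}")  # [25, 24, 27, 24]
```

Nothing in the calculation depends on what physically realizes the spikes, which is the sense in which rate and timing are supposed to be multiply realizable and, on this view, medium-independent.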
What does all this mean? Well, it means that when it comes to neural computation, the authors do not seem to “integrate the framework introduced in earlier chapters with recent work in this area to push the debate forward” (Anderson & Piccinini, 2024, p. 233). Instead, the authors transition from answering the implementation question (which is also claimed to answer the individuation question) to giving an account of what it is to be a (neural) computational mechanism in terms of an abstract-as-omission description (which is also claimed to answer the implementation question). This means that we get two full accounts in the book: one on mappings and one on mechanisms, where the former addresses computational artifacts and the latter addresses biological computation.
REFERENCES:
Chalmers, D. (1994). On Implementing a Computation. Minds and Machines.
Fodor, J. (1965). Explanation in Psychology in Philosophy in America. Routledge.
Fodor, J. (1968). Psychological Explanation. Random House.
Piccinini, G. (2015). Physical Computation: A Mechanistic Account. Oxford University Press.
Piccinini, G. (2020). Neurocognitive Mechanisms. Oxford University Press.
Piccinini, G. & Maley, C. (2021). Physical Computation in The Stanford Encyclopedia of Philosophy.
Putnam, H. (1960). Minds and Machines in Dimensions of Mind. New York University Press.
Williams, D. (forthcoming). Two senses of medium independence. Mind & Language.
Williams, D. (2023). Implementation and interpretation: A unified account of physical computation, ProQuest Dissertations & Theses Global. (2865991732).
Thanks to Danielle Williams for her commentary. We will respond to her comments on medium independence in a separate post, to be published on the Brains blog after all the commentaries are up. Here, we address her other comments.
Williams worries that at least when it comes to neural computation, we offer two “distinct” accounts, the robust mapping account and the mechanistic account, which allegedly provide “independent answers to the same questions”. We did indeed bring the mechanistic account into the picture in Ch. 9 to account for computation in neurocognitive systems, which we needed in order to assess how much of the mind is computational, which in turn we needed in order to assess whether the universe is fundamentally computational (Ch. 8).
As we state in the book and reiterate in our response to Milkowski’s comments, these two accounts are not independent—they are complementary. Some aspects of the robust mapping account are mechanistic, and we advocate incorporating the robust mapping account within mechanistic accounts. The robust mapping account tells us whether a physical system PS exhibits the physical signature of a computation C, whether or not PS has the teleofunction of performing C. In contrast, the mechanistic account applies to functional mechanisms—systems that have mechanistic organization and fulfill teleofunctions. Combining the two accounts provides an account of physical computation in systems, such as computing artifacts and neurocomputational systems, that both exhibit the physical signature of C and have the teleofunction of performing C. To fulfill this teleofunction, a mechanism must bear a genuine physical signature of C.
In her commentary, Williams finds it unclear whether these two accounts answer the individuation question (to tell the difference between physical systems that compute and those that don’t) or the implementation question (to tell when a physical system implements a computational system), and she thinks this makes it difficult to see how the two accounts “can be brought together.” Here is what our book’s introduction says about the implementation and individuation questions:
<<[W]e present our project as an account of computational implementation—what it takes for a physical system to implement, or realize, a computing system and the computations it can perform. Philosophers sometimes prefer to talk about computational individuation—i.e., what it takes to distinguish one physical computing system from another. We take implementation and individuation to be two sides of the same coin. If physical computations are individuated by properties of types P1, P2, . . . Pn, then physical systems must have P1, P2, . . . Pn in order to implement computations. Conversely, if it takes properties of types P1, P2, . . . Pn to implement computations, then physical computing systems are individuated by P1, P2, . . . Pn. For instance, if computations were individuated in part by certain semantic properties, then physical systems would have to have such semantic properties in order to implement computations, and vice versa. Thus, while we will focus on implementation, our account is also an account of computational individuation. We will say that a physical system bears the physical signature of computation C if and only if it possesses all of the requisite properties Pi for implementation of C—and thus for individuation of C as the implemented computation—and we devote much of this book to identifying the property types that are appropriate and adequate to this task (p. 7).>>
Thus, our robust mapping account answers both the implementation and the individuation questions, because we take them to be two sides of the same coin. Ditto for the mechanistic account.
Strictly speaking, Williams defines the individuation question slightly differently: “What is the difference between physical systems that compute and those that don’t?” Given Williams’s formulation, according to the robust mapping account, the individuation question is answered by whether a physical system robustly implements any computation, and the implementation question is answered by which computation or computations a physical system implements robustly. Analogously, according to the mechanistic account, the individuation question is answered by whether a system is a mechanism with a teleofunction of performing any computation, and the implementation question is answered by which computation or computations it has the teleofunction to perform.
Williams seems to find something problematic with viewing the implementation and individuation questions as two sides of the same coin. Specifically, she argues that the mechanistic account does not answer the implementation question because it involves “abstract-as-omission descriptions” while the implementation question requires “formal descriptions”, i.e., “abstract-as-abstracta descriptions”. Williams objects that “you cannot go from an abstract-as-omission description to an abstract-as-abstracta description”.
Williams’s concern may be interpreted in two ways. If the concern is that abstracting by omitting details about a physical system is not going to get us to abstract mathematical objects that may exist eternally outside of space and time, as they do according to platonism about mathematics, we agree. Yet this is not a problem we are concerned with. To be fair, sometimes others define implementation as a relation between physical systems and abstract objects in the platonistic sense. That begs the question of whether there are such abstract objects and whether platonism is the correct ontology of mathematics. To avoid begging the question of platonism (which our project does not require an answer to), we took pains to formulate the implementation question without presupposing platonism.
We are officially neutral about whether there are any abstract objects and how we may or may not refer to them. As we write in the introduction,
<<[W]e will frame our project in ways that avoid commitment to abstract objects—specifically, we will talk about mathematical definitions and descriptions rather than mathematical objects, leaving it open whether these definitions and descriptions refer to abstract objects, (possibly hypothetical) physical objects, or nothing at all (p. 15).>>
Thus, we set aside whether there are any abstract objects in the platonistic sense and whether any mathematical descriptions refer to them. We are concerned exclusively with implementation as a relation between physical systems and computational definitions. Our question is whether mathematical definitions of computing systems, such as definitions of finite state automata, apply to physical systems in a way that warrants attributions of computations to such systems.
We are now ready to address the second interpretation of Williams’s concern. If Williams’s concern is that abstracting by omitting details about a physical system cannot get us to abstract computational definitions, we disagree. Abstract computational definitions apply to physical systems, and warrant attributions of computations to those systems, insofar as physical systems exhibit physical signatures of the corresponding computations. When that happens, the computational descriptions do capture important aspects of the physical systems (cf. Sect. 3.3.4 of our book). These aspects can be found by abstracting away from other aspects—that is, the constitutive aspects (materials, components, specific forces)—of the physical systems in question, and finding that what remains either fits an existing kind of abstract computational description or deserves to be regarded as a new one.
To summarize, if a physical system exhibits the physical signature of a computation, omitting its constitutive aspects can lead to its computational aspects. This point applies within our robust mapping account as well as within mechanistic accounts: you can go from a constitutive physical description to an abstract mathematical description by omitting the constitutive details, and if that mathematical description defines a computation, then you have gone from a constitutive physical description to a mathematically defined computation after all. If that happens, in Williams’s terms (and with mathematical platonism set aside), an “abstract-as-omission description” of a physical system coincides with an “abstract-as-abstracta description” of a computing system. If it does so robustly, in accordance with our robust mapping account, then the constitutively defined physical system implements—and bears the physical signature of—the abstractly defined computing system.
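As a toy illustration of this point (the circuit, voltage values, and threshold below are invented for this post and are not part of our formal framework), one can start from a constitutive description of a simple device given in terms of voltages, omit the constitutive detail by grouping voltages into two abstract states, and then check that the induced transition structure agrees with a formally defined finite state automaton:

```python
# Toy illustration only: the circuit, voltages, and threshold are invented,
# and this check is far weaker than the robust mapping account's conditions.

# Constitutive-level dynamics: next stored voltage as a function of the
# current stored voltage and an input voltage (a made-up toggle circuit).
def physical_step(v_out, v_in):
    if v_in > 2.5:                          # a "high" input flips the stored voltage
        return 0.1 if v_out > 2.5 else 4.9
    return v_out                            # a "low" input leaves it unchanged

# Omission step: ignore exact voltages and keep only which side of the
# threshold they fall on; this groups physical microstates into states.
def abstract_state(v):
    return "S1" if v > 2.5 else "S0"

def abstract_symbol(v):
    return "1" if v > 2.5 else "0"

# A formally defined FSA (a simple toggle automaton): no voltages anywhere.
delta = {("S0", "0"): "S0", ("S0", "1"): "S1",
         ("S1", "0"): "S1", ("S1", "1"): "S0"}

# Mapping check on sampled voltages: abstracting and then evolving formally
# agrees with evolving physically and then abstracting.
for v_out in [0.1, 0.4, 4.7, 4.9]:
    for v_in in [0.2, 4.8]:
        formal = delta[(abstract_state(v_out), abstract_symbol(v_in))]
        physical = abstract_state(physical_step(v_out, v_in))
        assert formal == physical
print("On the sampled states, the abstracted dynamics matches the FSA.")
```

The agreement checked at the end is, of course, only the simplest mapping condition; the robust mapping account imposes much stronger constraints than this toy check. But it shows how omitting constitutive detail can land one on a mathematically defined computation.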
— Neal Anderson and Gualtiero Piccinini