Modeling has come to occupy a central place in philosophy of science. In recent decades, an enormous amount has been written on the practices of model construction, how models represent their targets, how models relate to simulations and theories, and how models are validated and verified.
Much of this work has focused on biology, climate science, and physics. Philosophy of psychology, however, has been slow to pick up on the trend. In part this reflects its estrangement from many topics within the broader literature of philosophy of science. Even so, this neglect might be justified if psychology simply didn’t make much use of models—after all, not every science is equally “model-heavy.”
A glance at the psychological literature is enough to show that this isn't so. Psychologists routinely construct and test models of the processes, systems, architectures, and other cognitive elements that they study. These models come in many forms, including verbal descriptions, systems of equations, "boxological" diagrams, and computer programs, and they are used to explore and test hypotheses, to fit patterns in experimental data, and to explain psychological phenomena. This makes it all the more puzzling that these diverse modeling practices have been largely ignored by philosophers.
To the extent that modeling has entered debates over psychological explanation, it has been dominated by the mechanistic paradigm. Many models in biology and neuroscience are mechanistic, so it would be extremely convenient if this already well-understood framework could be straightforwardly applied to psychology as well. It would also provide a way of unifying these sciences if their models could be integrated into a single multifield mechanistic hierarchy.
I don’t think this sort of integration is in the cards, however, for a number of reasons.
First, psychological models aren't mechanistic. That claim may seem surprising, since diagrammatic models, at least, strongly resemble the standard representations of mechanistic models: they contain hierarchically nested components and the connections among them. They therefore seem to be sketches or proto-mechanistic proposals, in need of further fleshing out, perhaps, but nevertheless instances of the same mechanistic explanatory pattern that shows up in neuroscience and elsewhere.
This thought reflects a confusion between two related kinds of models: mechanistic models and the broader class of componential causal models. Psychological models are the latter, but not the former. The difference can be unpacked by recalling Herbert Simon's (1996) original account of hierarchically organized complex systems, which can be interpreted in two quite different ways. On one view, complex systems are organized according to mereological relations: subsystems are, physically, parts of their higher-level containing systems. Pursuing this spatial conception of hierarchy leads us eventually to the mechanistic conception of explanation, on which descending a level involves shifting focus to the smaller entities and activities that are contained within the larger system.
The spatial arrangement of components and processes with respect to each other is crucial to mechanistic explanation. If the parts were rearranged, or in some cases even displaced a tiny bit, the mechanism as a whole would fail to function. Mechanistic modeling needs to be sensitive to these spatial (and temporal) details.
Psychological models, however, are spatially indifferent. The placement of the components in a diagram of the system’s structure need bear no relationship at all to their locations with respect to each other in space. The rationale for this indifference is that these models describe their components in terms of their functions, rather than their physical characteristics. In Alan Baddeley’s working memory model, for instance, the phonological loop, the visuospatial sketchpad, and the episodic buffer are all separate components defined by their causal roles, but these roles are silent on realization-level issues (Baddeley, 2012). Psychological models capture causal structure in abstraction from the details of layout.
This comports with the second interpretation of complex systems in Simon: a system is defined by the degree of interactivity among its components. In Simon's examples, social and economic systems (political organizations, families, banks, government institutions, etc.) are made up of integrated parts that interact with one another to form larger wholes. But it is the strength of their interactions, not their spatial proximity, that determines their overall boundaries. A functional subsystem is a maximally coupled or integrated information processor, even if it is spatially distributed.
It’s sometimes objected that these psychological models are essentially incomplete, no more than sketches of mechanisms that will be filled in more completely once they are fully integrated with models of neuroscientific mechanisms (Piccinini & Craver, 2011). But no matter how illuminating it would be to understand such mappings, a psychological model can nevertheless be complete without being mapped onto a neural model.
A model's completeness is determined by its descriptive accuracy vis-à-vis its target domain and by its ability to explain the relevant phenomena within that domain. A model can be an accurate representation of the functional organization of a psychological system without commenting on how that system is neurally realized. And a sufficiently accurate model can also be used to generate explanations of the behavior of that system even without such information. A psychological model's accuracy and explanatory power depend only on whether it gives us a good picture of the mind's causal levers and switches. If the psychological components that the model posits are available for interventions and manipulations, if they can be detected by a range of converging techniques (that is, if they are robust), and if their operations would produce a wide range of the relevant phenomena, then the model can meet all of our ordinary explanatory demands.
This form of explanatory autonomy shouldn’t be confused with methodological autonomy, which states that there is a privileged set of methods and sources of evidence for psychology. That claim was mistaken when behaviorists made it, and it is no less mistaken when advanced by cognitivists. Psychological models may draw on any source of evidence for their confirmation, including neuroscientific evidence (imaging, lesion studies, noninvasive interventions, etc.). What confirms a model is one thing, whether it is explanatorily adequate is another.
So, attention to modeling practices in psychology tends to uphold two core functionalist claims: that psychological models characterize the functional organization of the mind in a way that abstracts from its neural basis, and that these models are explanatorily autonomous from models of the system’s underlying neural structure.
However, this leaves it open just how far apart psychological and neural models can drift in terms of how they represent the causal structure of the mind/brain, and how we should reconcile their disagreements about this structure when they arise. For those who will be at the upcoming PSA meeting, I’ll be discussing this as part of a symposium on Unifying the Mind-Brain Sciences, along with Jackie Sullivan, Gualtiero Piccinini and Trey Boone, and Muhammad Ali Khalidi.
Baddeley, A. D. (2012). Working memory: Theories, models, and controversies. Annual Review of Psychology, 63, 1–29.
Piccinini, G., & Craver, C. F. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183, 283–311.
Simon, H. (1996). The Sciences of the Artificial (3rd ed.). Cambridge: MIT Press.