Modeling and the autonomy of psychology

Modeling has come to occupy a central place in philosophy of science. In recent decades, an enormous amount has been written on the practices of model construction, how models represent their targets, how models relate to simulations and theories, and how models are validated and verified.

Much of this work has focused on biology, climate science, and physics. Philosophy of psychology, however, has been slow to pick up on the trend. In part this reflects its estrangement from many topics within the broader literature of philosophy of science. Even so, this neglect might be justified if psychology simply didn’t make much use of models—after all, not every science is equally “model-heavy.”

A glance is enough to show that this isn’t true. Psychologists routinely construct and test models of processes, systems, architectures, and other cognitive elements that they study. These models come in many forms, including verbal descriptions, systems of equations, “boxological” diagrams, and computer programs, and they are used to explore and test hypotheses, to fit patterns in the experimental data, and to explain the phenomena. This makes it all the more puzzling that these diverse modeling practices have been relatively ignored by philosophers.

To the extent that modeling has entered debates over psychological explanation, it has been dominated by the mechanistic paradigm. A lot of models in biology and neuroscience are mechanistic, so it would be extremely convenient if this already well-understood framework could be straightforwardly applied to psychology as well. It would also provide a way of unifying all of these sciences if their models could be integrated into a multifield mechanistic hierarchy.

I don’t think this sort of integration is in the cards, however, for a number of reasons.

First, psychological models aren’t mechanistic. That claim may seem surprising, since diagrammatic models at least strongly resemble standard representations of mechanistic models, in that they contain hierarchically nested components and their connections. They therefore seem to be sketches or proto-mechanistic proposals, in need of further fleshing out, perhaps, but nevertheless still instances of the same mechanistic explanatory pattern that shows up in neuroscience and elsewhere.

This thought reflects a confusion between two related kinds of models: mechanistic models and the broader class of componential causal models. Psychological models are the latter, but not the former. The difference can be unpacked by recalling Herbert Simon’s (1996) original account of hierarchically organized complex systems, which can be interpreted in two quite different ways. On one view, complex systems are organized according to mereological relations. Subsystems are physically parts of their higher-level containing systems. Pursuing this spatial conception of hierarchy leads us eventually to the mechanistic conception of explanation, on which descending a level involves shifting focus to the smaller entities and activities that are contained within the larger system as a whole.

The spatial arrangement of components and processes with respect to each other is crucial to mechanistic explanation. If the parts were rearranged, or in some cases even displaced a tiny bit, the mechanism as a whole would fail to function. Mechanistic modeling needs to be sensitive to these spatial (and temporal) details.

Psychological models, however, are spatially indifferent. The placement of the components in a diagram of the system’s structure need bear no relationship at all to their locations with respect to each other in space. The rationale for this indifference is that these models describe their components in terms of their functions, rather than their physical characteristics. In Alan Baddeley’s working memory model, for instance, the phonological loop, the visuospatial sketchpad, and the episodic buffer are all separate components defined by their causal roles, but these roles are silent on realization-level issues (Baddeley, 2012). Psychological models capture causal structure in abstraction from the details of layout.

This comports with the second interpretation of a complex system in Simon: systems are defined by the degree of interactivity of their components. In Simon’s examples, social and economic systems (political organizations, families, banks, government institutions, etc.) are made of integrated parts that interact with one another to form larger wholes. But it is the strength of their interactions, not spatial proximity, that determines their overall boundaries. A functional subsystem is a maximally coupled or integrated information processor, even if it is spatially distributed.

It’s sometimes objected that these psychological models are essentially incomplete, no more than sketches of mechanisms that will be filled in more completely once they are fully integrated with models of neuroscientific mechanisms (Piccinini & Craver, 2011). But no matter how illuminating it would be to understand such mappings, a psychological model can nevertheless be complete without being mapped onto a neural model.

What determines the completeness of a model is its descriptive accuracy vis-à-vis its target domain, together with its ability to explain the relevant phenomena within that domain. A model can be an accurate representation of the functional organization of a psychological system without commenting on how that system is neurally realized. And a sufficiently accurate model can also be used to generate explanations of the behavior of that system even without such information. A psychological model’s accuracy and explanatory power depend only on whether it gives us a good picture of the mind’s causal levers and switches. If the psychological components that the model posits are available for interventions and manipulations, if they can be detected by a range of converging techniques (that is, if they are robust), and if their operations would produce a wide range of the phenomena, then the model can meet all of our ordinary explanatory demands.

This form of explanatory autonomy shouldn’t be confused with methodological autonomy, which states that there is a privileged set of methods and sources of evidence for psychology. That claim was mistaken when behaviorists made it, and it is no less mistaken when advanced by cognitivists. Psychological models may draw on any source of evidence for their confirmation, including neuroscientific evidence (imaging, lesion studies, noninvasive interventions, etc.). What confirms a model is one thing, whether it is explanatorily adequate is another.

So, attention to modeling practices in psychology tends to uphold two core functionalist claims: that psychological models characterize the functional organization of the mind in a way that abstracts from its neural basis, and that these models are explanatorily autonomous from models of the system’s underlying neural structure.

However, this leaves it open just how far apart psychological and neural models can drift in terms of how they represent the causal structure of the mind/brain, and how we should reconcile their disagreements about this structure when they arise. For those who will be at the upcoming PSA meeting, I’ll be discussing this as part of a symposium on Unifying the Mind-Brain Sciences, along with Jackie Sullivan, Gualtiero Piccinini and Trey Boone, and Muhammad Ali Khalidi.



Baddeley, A. D. (2012). Working memory: Theories, models, and controversies. Annual Review of Psychology, 63, 1-29.

Morrison, M. (2011). One phenomenon, many models: Inconsistency and complementarity. Studies in History and Philosophy of Science, 42(2), 342-351.

Piccinini, G., & Craver, C. F. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183, 283-311.

Simon, H. (1996). The Sciences of the Artificial (3rd ed.). Cambridge: MIT Press.

Teller, P. (2001). Twilight of the perfect model model. Erkenntnis, 55(3), 393-415.


  1. This is a very interesting and helpful post. Thanks Dan! Now I wish I could make the PSA meeting!

    Many psychologists think of descriptive statistics in terms of models. That is, determining correlation, significance, effect size, confidence intervals, etc. is, ultimately, the result of comparing two algebraic models: the null hypothesis and some other set of hypotheses. In many cases these models are VERY simple — nothing like the thorough-going functional or mechanistic models I think you have in mind. They are just algebraic expressions, e.g.,

    Hypothesis: Concern about disease X = B0 + B1 × [whether media covers disease X] + error
    Null hypothesis: Concern about disease X = [average concern] + error
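    To make the comparison concrete, here is a toy sketch in Python (the data, coefficients, and variable names are all made up for illustration): the null model predicts everyone at the average, the alternative adds the media-coverage predictor, and the comparison comes down to how much residual variation the extra term removes.

```python
# Hypothetical illustration of comparing a null model to an alternative
# regression model. All data here are simulated; nothing is from a real study.
import random

random.seed(0)

# Simulated data: whether media covers disease X (0/1) and resulting concern.
coverage = [random.randint(0, 1) for _ in range(200)]
concern = [2.0 + 1.5 * c + random.gauss(0, 1) for c in coverage]

def rss(predictions, observed):
    """Residual sum of squares: the variation a model leaves unexplained."""
    return sum((y - p) ** 2 for p, y in zip(predictions, observed))

# Null model: predict every observation at the grand mean.
mean_concern = sum(concern) / len(concern)
rss_null = rss([mean_concern] * len(concern), concern)

# Alternative model: ordinary least squares with coverage as the predictor.
mean_cov = sum(coverage) / len(coverage)
b1 = (sum((c - mean_cov) * (y - mean_concern) for c, y in zip(coverage, concern))
      / sum((c - mean_cov) ** 2 for c in coverage))
b0 = mean_concern - b1 * mean_cov
rss_alt = rss([b0 + b1 * c for c in coverage], concern)

# Fraction of the null model's residual variation removed by the predictor.
r_squared = 1 - rss_alt / rss_null
print(f"b1 = {b1:.2f}, R^2 = {r_squared:.2f}")
```

    By construction the alternative model’s residual sum of squares can be no larger than the null’s; significance testing then asks whether the reduction is bigger than chance alone would produce (e.g., via an F-test).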

    Although these models are not the functional or mechanistic models I take you to be talking about, they are still commonplace in the social sciences. However, these simple models would probably do a poor job of meeting your criteria of robustly capturing the causal features of the mind. So I wonder what you will want to say about the use of these models (and of descriptive statistics in general) in psychology. It sounds like you would want to say that these are impoverished psychological models, if they are psychological models at all. Is that right?

    Thanks again!

    • Dan Weiskopf

      Hi Nick, great question. To clarify, while some models in psychology are cognitive models, not all of them are. A lot of them are data models: attempts to use descriptive statistical devices such as the ones you’ve mentioned to concisely summarize a body of experimental results. Cognitive models, on the other hand, embody attempts to represent the systems, representations, processes, and resources of the mind, or some subpart thereof. Data models have important roles to play in discovering robust phenomena, but they may not play the explanatory role that cognitive models often do.

  2. Gualtiero Piccinini

    The point that many psychological models abstract away from spatial relations between components is interesting but doesn’t show that they are not mechanistic. Mechanistic models may contain lots of abstractions, and one possible abstraction is abstraction from spatial relations. This kind of abstraction occurs in many domains other than psychology. Of course a mechanism that ignores spatial relations is more sketchy than one that doesn’t.
    (As to purely statistical models per comment by Byrd, those are not mechanistic.)
    Dan, I look forward to seeing you at our PSA session.

    • The question of ‘mechanism sketches’ has to be the question of what gets abstracted how, and as you point out, the differences Dan alludes to don’t seem to preclude the possibility that it’s simply a different kind of abstraction at work. In other words, merely pointing out differences in the information rendered available in functional analyses vs. mechanism sketches can’t do the trick.

      But this is where Dan’s introduction of Jenkins a couple posts back becomes decisive – at least in terms of what I think it means! A ‘mechanism sketch’ is a quite specific kind of heuristic device, one that allows us to engage what is actually going on in the brain in a synoptic manner. So to riff on Gigerenzer’s terminology again: mechanism sketches belong to what could be called an ‘open problem-ecology,’ a heuristic that becomes capable of solving more and more problems as the details are filled in.

      But the kind of heuristic model that Jenkins is (almost) describing seems to be more like a *key* than a sketch. Rather than an approximation of mechanisms, it is actually itself a mechanism, a device that allows researchers to ‘lock into’ the (ultimately mechanical) systematicity of the brain in ways that solve for specific kinds of problems. In this case, the differences Dan is highlighting are entirely a matter of heuristic function, not information.

      So on my admittedly extreme brand of eliminativism, the way to solve the impasse between the psychological functions and the neural mechanisms is simply to conceive the former in mechanistic terms, as a type of cognitive device that ‘hacks’ or ‘unlocks’ certain systematicities in a manner specific to the solution of specific problems (or ‘closed problem ecologies’). This, then, is the ‘mechanism sketch’ that would most pertain to functional analyses, the one that outlines a certain heuristic interaction *between brains,* the one solving, the other solved.

      Someone say hi to Jackie for me!

  3. Gualtiero Piccinini

    Sorry. In the comment above I meant to be typing “a mechanistic model that ignores spatial relations…” not “a mechanism that ignores spatial relations…”

  4. What Gualtiero said. If I read them (Piccinini & Craver 2011) correctly, a componential causal model just is a kind of mechanistic model because its accuracy is answerable to the mechanistic (neural) facts. For example, if it turned out there was no neural component corresponding to a particular box in a boxological model, it would undermine that boxological explanation. If they’re right, that seems to defeat explanatory autonomy: the “complete” functional explanations you mention wouldn’t be good explanations if they failed to describe interactions among actual mechanistic (neural) components.

    I think they’re right that a componential causal model is answerable to the neural facts. As P&C put it, “if functional analysis by internal states is watered down to the point that it no longer makes any commitments to the behavior of components, then it is no longer possible to distinguish explanations from merely predictively adequate models and phenomenal descriptions of the system’s behavior.”

    In other words, a functional analysis needs to aim to capture the actual causal structure of the system, and that’s sufficient for it to be a mechanism sketch.

    Now, there would be a problem for P&C if it’s trivial for a functional analysis to capture the actual causal structure, such that the existence of an actual mechanistic “component” is trivially satisfied. The stringent spatial requirements you proposed would avoid triviality, but Gualtiero doesn’t want to accept those. (From the paper: “components need not be neatly localizable, visible, or spatially contained within a well defined area.”)

    So what do they think it would take for e.g. the phonological loop to fail to correspond to a mechanistic component? I think they cite robustness here: “Any robustly detectable configuration of structural properties might count as a component.” Now you identify robustness as important too, but for identifying a genuine functional component rather than a mechanistic component. So now I’m wondering whether you’re all zeroing in on the same thing, but P&C are calling it a mechanistic component, while you’re not. A mere terminological dispute, in part?

    • Dan Weiskopf

      Hi Dan, I’m trying to roll together a response to you and Gualtiero in one comment, since you seem to be advancing basically the same objections.

      First, take Gualtiero’s claim that a componential causal model (CCM) that is spatially agnostic is invariably a sketchy mechanistic model. To call it a sketch is to say that it’s missing something it should have, namely a mapping onto local spatial structures that correspond (in some way) with the components of the abstract model.

      However, my claim is that psychological models, by their very design, don’t care about spatial localization issues. They care about the functional design plan of the cognitive system. Where and when and (in the neural sense) how it happens might be nice to know, for all sorts of reasons, but it isn’t essential to the purpose of such modeling. So to call them “sketches” is to impose a standard of informational completeness that they aren’t even trying to meet. That’s one reason that I object to the sketch talk here.

      Let me elaborate on this point a bit to address the worry that a cognitive model that lacks a corresponding neural model would in some sense lack explanatory power. Suppose that we had a model of working memory that was far more detailed than Baddeley’s about the precise representations, processes, and control structures that go into encoding information in various buffers, maintaining it under different conditions, integrating it with long term memory, and so on. A model like this might give a perfectly accurate picture of the psychological structure of the working memory system–that is, every relevant functionally defined component would be present and their interactions could be traced to show how the relevant phenomena are produced.

      Now, if this model can display the causal structures that produce the phenomena–if it is an accurate representation of this part of the mind’s structure–then it can generate perfectly good explanations of the phenomena. An accurate model of the causal structure within a target domain is ipso facto capable of producing explanations of phenomena in that domain. That’s the sense in which cognitive models are explanatorily autonomous: you can use them to generate normatively adequate explanations even if you don’t know anything about how the target system might be described in neural terms.

      Putting it in general terms: you can explain a phenomenon by appeal to a certain structure S without knowing the details of the structures that underlie S.

      It’s for this reason that I want to resist your parenthetical assimilation of “actual mechanistic” components to “neural” components. There can be actual components of a psychological model that are not actual neural components, by which I mean they don’t correspond to anything clean or very systematic at the neural level. At least they don’t correspond to anything that would look much like the traditional mechanistic “parts”.

      I also think that the quote of Gualtiero’s that you cite in which he allows that the notion of a “part” of a mechanism can be distributed, untidy, invisible, etc., is a crucial one. I think it marks a tacit, but deeply significant, shift in the nature of mechanistic explanation. From the beginning, mechanistic explanation has been deeply intertwined with localizable (“modular” in the broad sense) components with specific functions. This makes the emphasis on spatial delimitation of parts something that’s pretty important to the program.

      If mechanists now want to loosen up a bit and claim that a part can be something smeared out and irregular, possibly even gerrymandered, that really should be noted explicitly, since it runs the very significant risk of weakening the notion of a mechanistic explanation to the point of triviality. That a system is mechanistically explicable is a substantial thesis about its internal organization, but if we start to abandon the rules about what counts as a mechanistic part, then we also abandon the substance of the mechanistic program. So I think mechanists should stay a bit more conservative with the notion of parthood, and simply allow that there are some CCMs that are non-mechanistic.

      Stepping back for a second, I think that this illustrates a general point about modeling. A famous example by Paul Teller (2001) may help here. He points out that in explaining the behavior of water, we can look at it from a number of different points of view. For some tasks (flow and wave propagation), we may model it as a continuous incompressible medium; for others (diffusion), we may model it as made of discrete particles in thermal interaction. A quantum mechanical model of a body of water might be able to explain why and where these other models fall short of being perfectly accurate, but we cannot use quantum mechanics to give explanations of those phenomena themselves; they are far too complex for us to construct a good QM model of them. Margaret Morrison (2011) has made a similar point with respect to models of the atomic nucleus.

      So we have a situation in which we have several models of the system, each of which has legitimate explanatory virtues, but none of which can be reduced to the others, or even integrated within a single whole. I suspect that understanding the mind/brain will turn out to be more or less like understanding water.

      • Hi Dan (and others),
        Thank you for all these excellent and thought-provoking posts. While I think your model-based defense of psychology’s autonomy is much more plausible and sophisticated than the traditional accounts, it still shares the core feature that has always bothered me most deeply: the assumption that the modeler knows ahead of time and can strictly delimit which features of the phenomena are explanatorily relevant. A crucial premise in your response to Gualtiero and Dan is that while more spatial information might be nice to have (I would prefer to say would enhance the power of these models by allowing us to capture more counterfactuals), this information (and the additional counterfactuals) is not “essential” to the explanatory success of the model.

        But why should we believe this premise (or accept any such essentialism about explanation)? Modelers will sometimes say things that could be interpreted as endorsing this premise whenever another researcher complains about, e.g., the biological plausibility of their model or the fact that another model explains more data, but rarely does anyone but the embattled modeler accept this move as a trump card in empirical debates. And there are good general reasons for supposing that we shouldn’t be totally agnostic about spatial relations for even a functional cognitive model that comes with no obvious spatial interpretation, because it just isn’t the case that functional properties place no spatial constraints on systems that implement them. (If the realizers of functional components are to interact with one another in the specified ways, then they need to actually be spatially relatable in a way that allows for those interactions; the richer the functional profiles, the more implicit spatial and structural constraints may be relevant. The nature of these constraints may be highly complex, but it just isn’t the case that they are irrelevant, or that we could know before doing the science that no such relevant constraints will be forthcoming.)

        Really, researchers investigating a phenomenon are rarely (never?) in a position to fully articulate the explanatorily relevant aspects of the phenomena before the science has been done. And surely the science regarding any remotely interesting cognitive capacity is far from done. We could just allow a modeler to specify by fiat the aspects they intend their model to explain, but it’s not clear that this should be up to them. There are good reasons for thinking that this kind of parochialism by fiat is unhealthy for empirical debate, because any modeler could just declare that the only relevant aspects of the phenomena are the ones that their model happens to capture, insulating every model from competition with other models in a relatively unconstrained way. Independently legitimate cognitive models would proliferate in psychology without ever saying anything mutually relevant to one another’s evaluation. On this picture of cognitive psychology, what sense is to be made of the fact that empirical researchers so frequently productively disagree? Surely sometimes these disagreements are otiose (your work on the “uniformity assumption” in the concepts debate being an excellent case in point), but I take it that in the typical case these disagreements are productive and can lead to improvement of models. (Perhaps you planned to address all this when you talk about the “many models” problem…)

        More broadly, I think we need to articulate a set of rules as to when this kind of explanatory parochialism is acceptable. If mechanists and functionalists are to avoid begging the question against one another, it would be nice if the rules could be cashed out in terms of something ecumenical like counterfactual power. If some counterfactuals could be captured by a functional model but not by a spatial elaboration of that functional model, then this would be a good argument against that spatially-elaborated interpretation. But if this isn’t the route that the autonomy defense takes, we need to be given a better reason why we shouldn’t care about the additional counterfactual power that can be gained by spatial elaboration, or how we are to know that those spatial relations are explanatorily inessential to the phenomena under consideration before the science is actually done.

        • Dan Weiskopf

          Hi Cameron — I don’t think of this as any form of essentialism about explanations, nor as an attempt to establish by fiat what sorts of evidence will be relevant to determining whether a model is an adequate one or not. It may well be that facts about the spatial and structural organization of neural systems will be important to know in order to decide whether a psychological model is correct or not, though I’d add that precisely how these constraints operate is much harder to know than most people grant.

          There are two things that I want to maintain. First, that modelers determine the intended domain of application of their models. Michael Weisberg has made much of this point, and I’m largely in agreement with him. So psychological models have cognitive structures as their intended target domain, and cognitive structures are things that can be described qua functionally organized sets of properties independently of the neural structures that accompany them.

          Second, though, adequate models of a system are generally constrained by everything else that is true about that system. In some cases that means that two models of a single system can’t straightforwardly contradict one another in what they say about the system. (Even this I would hedge on, though — the models may be idealized ones, in which case contradictions are less problematic.) This is just about the strongest claim about model integration that I’m prepared to endorse, and it is true of neural models as well as cognitive models. It would be nice to see that last fact in particular acknowledged once in a while — why are people so confident that it’s the psychological models that have it wrong when they conflict with a particular neural model, again?

          But again, I think it is far from obvious how to decide in the interfield case when two models are contradicting each other. These models are not sufficiently well-formalized to allow us to make these derivations mathematically, after all, so claims that they do come in conflict are usually underwritten by strong and contested assumptions about exactly how mind/brain models are meant to be integrated. I think it would be better to re-examine those assumptions themselves, particularly when they involve arguably false constraints such as mechanistic localization.

          • Hi Dan,

            Thank you for the clarifications. I agree with everything in your reply, except for a strong interpretation of your first point, that modelers should get to determine the intended domain of application of their models. Granted, it would be silly to think that the modeler’s intentions are wholly irrelevant to a model’s interpretation; but my point is that they rarely (never?) wholly settle a model’s interpretation, because we rarely understand the phenomena we’re trying to explain well enough to articulate their boundary conditions with enough precision to know that spatial and structural constraints will be irrelevant, given unforeseen future discoveries. Sure, modelers can rule out salient alternatives that are already well understood—a macroeconomist can explicitly exclude some range of microeconomic considerations, for example—but psychological modelers are rarely (never?) in the position to articulate the domain of application with enough precision to rule out all possible mechanistic constraints on systems that implement those models.

            While I think this kind of inability to precisely articulate boundary conditions is ubiquitous in psychology, I’ll just give a quick example from a paper I happen to have been reading recently concerning a classic developmental effect observed by Baldwin (1993). (I apologize about the length of the following discussion…) In a series of classic studies, Baldwin found that word learning in even young children seemed sensitive to referential intent. In particular, when novel objects were placed in buckets on separate sides of a table and an experimenter looked into one of the buckets and spoke the novel label “modi,” children later asked to pick the “modi” (with both objects placed in a transparent bucket) significantly more often chose the object that had been indicated by referential intent. A variety of models have since been developed that integrate the idea that even young infants read cues for referential intent.

            However, a more recent study by Samuelson et al. (2011) performed several variants on the original Baldwin task, finding that not only was referential intent not necessary for the word-learning effect observed in the original Baldwin experiment, it wasn’t even sufficient, and that in fact spatial consistency appeared to drive the early word learning effect across all the variants (the “referential cues” in the original experiment serving merely to orient the infant’s attention to the shared spatial dimensions of the training situation). Samuelson et al. in turn offered a Dynamic Neural Field model that could reproduce the full range of experimental data they collected (including the original Baldwin finding), and which was more biologically plausible to boot. Crucially, this model involved associations dynamically learned from information about the spatial location of features represented in spatially-organized neural maps. So space in the training situation not only mattered for infant word learning, but the relative locations of representations for objects in spatial fields realized by the brain (purportedly cortical maps) also matter more than we initially supposed.

            Now, I’m not saying that this is how you or a proponent of your view would respond to these findings, but one could respond to these experiments and the new model by deploying a defense based on intended domain of application of the earlier models, for example by arguing that the domain of application was somehow limited to exclude these variants on the task, or idealized or abstracted in some way that makes the spatial situation in which words are learned irrelevant. Indeed, these models are all still adequate on their own terms in the original Baldwin experiment; the functional patterns they predict are reliably found on that initial experimental design (the effect was reproduced by Samuelson et al. on this version of the task). But I take it that these moves would all miss the point of Samuelson et al.’s critique; these additional tasks were neither explicitly sanctioned nor explicitly disallowed by any intention of the earlier modelers. Assuming that the Samuelson findings regarding the importance of space and unimportance of other cues for referential intent are not merely experimental artifacts, I take it that the right verdict is that Baldwin and the modelers who followed him had an incorrect conception of the phenomena they were attempting to explain and as a result offered models that were not counterfactually adequate to capture their actual contours. As a result, they were unaware of some spatial constraints, namely that infants represent the locations of features using spatially-organized maps that allow feature and label binding.

            This is just one example, and no doubt a crafty functionalist modeler could come up with some way to capture the effects across all experiments using a model that itself was agnostic about spatial relations amongst its components’ realizers. But I take the following lessons from the case:
            1. The previous modelers had a less fecund conception of the phenomenon they were trying to explain—in particular, they operated under a less counterfactually powerful functional profile for the phenomenon being modeled than that favored by Samuelson et al.
            2. The new experimental situations highlighted by Samuelson et al. should not be excluded from relevance by the original intent of the modelers. While we should allow the modelers’ intent to exclude some interpretations of these models, it should not be taken to rule out these additional experimental situations as irrelevant to the models’ evaluation, because they are just as relevant to understanding the psychological processes driving early word learning as the original Baldwin experiment, and,
            3. Samuelson et al.’s revised functional profile, emphasizing spatial associations between features and labels, is more naturally interpreted as placing spatial constraints on systems that could realize models of the capacity (given that the most obvious way to track such spatial associations in a system is to use spatial maps where spatial relations between nodes in a network are isomorphic to spatial relationships in the environment).
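            The kind of spatially organized map invoked in (3) can be illustrated with a minimal, one-dimensional Amari-style dynamic neural field, in which nearby field sites excite one another and distant sites inhibit one another, so that a localized input produces a self-stabilized peak of activation at the corresponding location. To be clear, this is only a toy sketch under assumed parameters (the kernel widths, gains, resting level, and grid size are all made up for illustration), not Samuelson et al.’s actual model:

```python
import numpy as np

def dnf_step(u, stim, kernel, h=-2.0, tau=10.0, dt=1.0):
    """One Euler step of a 1-D Amari-style dynamic neural field.

    u      : field activation over discretized space (1-D array)
    stim   : external input localized at the stimulated position
    kernel : lateral-interaction profile (local excitation, broader inhibition)
    """
    f = 1.0 / (1.0 + np.exp(-u))  # sigmoidal firing rate
    # Circular convolution implements distance-dependent interaction:
    # nearby field sites excite each other, distant ones inhibit.
    lateral = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(kernel)))
    return u + (dt / tau) * (-u + h + stim + lateral)

# Field over 100 sites standing in for positions in the task space.
n = 100
x = np.arange(n)
# Mexican-hat interaction kernel centered at index 0 (wraps around).
d = np.minimum(x, n - x)
kernel = 3.0 * np.exp(-d**2 / 18.0) - 1.0 * np.exp(-d**2 / 200.0)

u = np.full(n, -2.0)                      # resting (subthreshold) state
stim = 6.0 * np.exp(-(x - 30)**2 / 8.0)   # object presented at site 30
for _ in range(200):
    u = dnf_step(u, stim, kernel)

print(int(np.argmax(u)))  # peak should sit at the stimulated location (~30)
```

            Because the field’s sites stand in for locations in the task space, the spatial relations among activation peaks mirror the spatial relations among the stimuli, which is the isomorphism the revised functional profile trades on.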

            My worry is that this kind of mismatch between an expected functional profile of a phenomenon and its actual contours—what Boyd would call a “modest embarrassment” to accommodation—is so common as to render premature any general argument for the autonomy of cognitive models working from premises about modeler’s intended domain of application. Spatial constraints might matter (and have routinely been found to matter) even across a host of cases where there didn’t initially seem to be any obvious relevance.

            (Just to put all my cards on the table, I agree with you that these spatial constraints will be more complex than one might initially suppose. Even in DNF models of V1, once one starts dealing with visual processing of composite stimuli, one has to go beyond simple one-to-one mappings between spatial relations in visual space and spatial locations in the retinal maps (see the Jancke et al. paper cited below). But this doesn’t show that spatial constraints are irrelevant, only that they are more complex than we might have supposed. My preference for the way forward is to understand these complex spatial constraints, rather than to screen off their relevance through modeler’s intent. If we don’t, I worry, they’ll just keep coming back to haunt us through more modest embarrassments, even at the functional level of explanation.)

            At any rate, I look forward to seeing your work on the many models problem, though unfortunately I can’t make it to PSA…

            Baldwin, D. A. (1993). Early referential understanding: Infants’ ability to recognize referential acts for what they are. Developmental Psychology, 29, 832–843.

            Samuelson, L. K., Smith, L. B., Perry, L. K., & Spencer, J. P. (2011). Grounding word learning in space. PLoS ONE, 6(12), e28095.

            Jancke, D., Erlhagen, W., Dinse, H. R., Akhavan, A. C., Giese, M., Steinhage, A., & Schöner, G. (1999). Parametric population representation of retinal location: Neuronal interaction dynamics in cat primary visual cortex. The Journal of Neuroscience, 19(20), 9016–9028.

      • Once again, you clearly seem to be talking about heuristics, about ways to provide certain kinds of solutions for a system – be it water or the brain – given only certain kinds of resources and information.

        On a heuristic view, both you and Craver are right. You can agree with Gualtiero that it all comes down to neural mechanisms, and yet insist that information regarding neural mechanisms is unnecessary. Neuroscience is in the business of dismantling the lock, whereas psychology is in the business of isolating all the different ways to pick that lock. Neuroscientific findings are relevant but not necessary to psychological lock-picking the way design specs are relevant but not necessary to skilled thieves.

        Since both the neuroscientist and the psychologist are forced to communicate their findings in language, it seems they are both in the same business: that the psychologist must be giving low-dimensional descriptions of the high-dimensional facts of the brain. Thus the appeal of the ‘mechanism sketch’ interpretation.

        But the thing to always remember about heuristics is the way they often *require neglect* to function as effective problem-solvers. Visual illusions provide a wonderful example of this: by gaming the way various visual rules-of-thumb are cued – essentially, providing the wrong information in the wrong context – researchers can generate a wild variety of dissonant effects. Adapted to solve in the absence of certain kinds of information, heuristics often require that information remain absent. This is why we should *expect* certain neuroscientific findings will jam the gears of certain psychological findings.

        It really is quite an elegant way to solve a number of problems (far beyond this particular debate, I think). We can get rid of all the spooky ‘function’ talk, while acknowledging that psychology has a genuinely distinct scientific role to play, that it isn’t simply pin-the-tail-on-the-donkey neuroscience.

        Eliminativism without special science tears.

        • Dan Weiskopf

          I approve of the general tenor of this suggestion — though I don’t think of this as a difference in heuristics. ‘Heuristic’ means too many things for me to feel confident about extending it in this way. Instead, I’d just say that cognitive and neural modeling are different enterprises, in their domains of application, methods, and formal and experimental apparatus. If we want to call these model-building procedures ‘heuristics’, that’s ok, although they aren’t necessarily shortcuts or approximations in the way that heuristics are normally thought to be.

          I also wouldn’t accept that psychological models are simply low-dimensional versions of higher-dimensional neural state-space models. First, I don’t see why the ‘state space’ of psychology should be defined in neural terms at all. Speaking loosely, the ontology of psychology just defines a space with different dimensions. Second, it’s not even obvious that psychological models will in the end be lower-dimensional. Lots of neural models involve dimensional reduction as well, and psychology as a whole might require a great deal of room to describe a creature’s total cognitive state.

          So, sometimes you can unify two different models by showing one to be a lower-dimensional version of the other. But I don’t think that this will be the form that the unification of mind/brain models ultimately takes.
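          To make the ‘lower-dimensional version’ idea concrete, here is a toy sketch of recovering a low-dimensional description from a high-dimensional one. The data are entirely simulated (not actual neural recordings): 50 hypothetical ‘units’ are driven by just two latent variables plus noise, and PCA (via the singular value decomposition) shows that two dimensions capture nearly all the variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "high-dimensional" recordings: 50 units whose activity is
# driven by only 2 latent variables, plus a little observation noise.
latents = rng.normal(size=(500, 2))        # 2 hidden variables, 500 samples
mixing = rng.normal(size=(2, 50))          # how the latents drive 50 units
data = latents @ mixing + 0.1 * rng.normal(size=(500, 50))

# PCA via SVD of the centered data: what fraction of the variance
# do the top 2 principal components account for?
centered = data - data.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
print(f"variance explained by 2 dims: {explained:.3f}")  # close to 1
```

          The converse point in the text still stands: nothing here shows that a psychological description must be obtainable this way, only what ‘lower-dimensional version of a higher-dimensional model’ amounts to when it does apply.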

  5. Gualtiero

    Thanks, Dan Ryder. With respect to constitutive explanation in psychology and neuroscience, the substantive dispute is whether there is a class of psychological models that are distinct and autonomous from models that include information about the concrete components (neural systems) and their functions. My answer is no. There are models that abstract away from various aspects of lower mechanistic levels, but they are all more or less sketchy mechanistic models, and all models in psychology have to fit constraints from lower-level mechanistic explanations. As you (Dan Ryder) say, if a psychological model posits a component but there are no concrete component(s) in the nervous system corresponding to it, then either that model is not explanatory or it’s wrong.

  6. Dyami Hayes

    A thought provoking post! I admit, I have mostly bought into the mechanistic program exemplified by Craver.

    If you wish to limit psychology to mere analysis of (mental) functions, then that can proceed as an autonomous science. But when psychologists err in their models, understanding the mechanisms that realize various functions can help generate novel hypotheses for testing. This interplay with neuroscience isn’t necessary (hence ‘autonomy’) but surely it helps, no?

    I’m not sure if this analogy fits well, but perhaps we could compare this growing convergence between psychology, linguistics, and neuroscience with the New Synthesis (explaining evolution).

    • Dan Weiskopf

      Hi Dyami, the sort of interfield interaction that you mention here is undoubtedly important. That’s why I distinguish methodological autonomy from explanatory autonomy. I don’t have a problem with psychologists using data from anywhere and everywhere to confirm their models. I just want to insist that this doesn’t undermine the explanatory autonomy of a well-confirmed cognitive model, wherever its confirmation comes from.

      On the New Synthesis: I think I’m less optimistic that that is coming together quite as well as had been hoped. But I definitely agree with the idea that heuristics that drive research in psychology can come from anywhere, whether it’s evolutionary theory, neuroscience, etc.

  7. Steve Downes

    Timely piece Dan, I enjoyed it. I am at the early stages of putting together a small book on models, and models in psychology are already part of the planned content. I agree with your observation that philosophers of science are all over models in all kinds of ways, but philosophers of cognitive science barely mention them. I wonder if this is partly because much philosophy of cognitive science is practiced as an offshoot of philosophy of mind rather than of philosophy of science.

    • Dan Weiskopf

      Hi Steve, glad you’re doing the models book. Keep me updated on how it progresses. I have some vague thoughts on why phil psych hasn’t taken up modeling in the way that other branches of phil science have. What you mention may well be a factor. After all, every branch of philosophy of science has this duality: it’s concerned with advancing first-order theses within the broad domain of the science itself, and it’s concerned with methodological and theoretical issues about how the science itself progresses. The parts of phil psych that are closest to phil mind are doing the first thing, but not the second. A puzzling situation, but I hope one that will change gradually.

  8. I have my own original model set out quite simply in my free book on SkyDrive. Read pages 7–10, then proceed if it interests you. I embody the mind entirely to obviate the unknown complexity of neural processing for “human reasoning”. It might actually be quite simple, and modelling is the best approach, even if it falls short of “mechanical reproduction”.
    Marcus Morgan
    Lawyer / Private Research
    Melbourne, Australia

  9. VicP

    Hi Dan, Very interesting topic. I’m an electrical engineer who deals with the world at all levels of description, from component level, to circuit, to block and verbal description. What comes to mind is that the brain itself is a modeler of reality. Consider a person who is blind: he has no problem navigating around the rooms of his own house. The lack of vision simply makes it a lot more difficult to experience a completely different house; his own house model, however, is adequate and easy to communicate to his guests and visitors. Namely, when the internalized model and language are tightly intertwined, we feel it is an adequate truth system.

      • VicP

        Had a “feeling” you would like it Scott. All of us live inside of our own comfort zones and as you point out philosophers have had theirs for a few thousand years which is why they find it hard to communicate outside their group.

        What also comes to mind is that the eye is just the part of the brain which can blindly see OUTSIDE of the body but the rest of the brain just as blindly shares similar properties and actually blindly sees inside the body.

        Now if only I could sell them Supercell Theory!

    • Dan Weiskopf

      Hi Vic, I agree completely that there’s a tail-chasing aspect to this whole enterprise of attempting to turn the mind’s model-building capacities on itself. As my previous posts on concepts may have indicated, I tend to see these models as offering perspectives on a domain or category that have a kind of local usefulness — but this doesn’t guarantee that they will cohere with each other when they are forced into close proximity, or made to occupy the same stage.

      • VicP

        Thanks Dan, What makes a Grand Unifying Theory enticing to many is that we don’t know why the models may cohere, nor is it obvious why they don’t cohere, on the same stage. Analogously, we could build models of all the cardiopulmonary organs and know which aspects directly and indirectly relate to the states and flow of blood, just as we know that goods, services, and monetary valuation get passed around in economic models. What the nervous system traffics in, as multitudinous conscious states resulting in overall consciousness, is very exciting science.

  10. VicP: “What also comes to mind is that the eye is just the part of the brain which can blindly see OUTSIDE of the body but the rest of the brain just as blindly shares similar properties and actually blindly sees inside the body.”

    This is why the human brain needs the egocentric mechanisms of the retinoid system to give us our phenomenal world. See “Where Am I? Redux”.
