Why Does Simplification Rule Out Explanation?

Commentary from Carrie Figdor on today’s post from Mazviita Chirimuuta on The Brain Abstracted (MIT Press).

The animating idea of Chirimuuta’s book is that science, and neuroscience in particular, must engage in simplification in order to explain a complex world. The epistemic dangers of taking the outcomes of simplification strategies literally lead her to adopt a Kantian-inspired metaphysics that falls between “pure constructivism” and “standard scientific realism” but is much closer to the former. Simplification in computational cognitive neuroscience is her main target. Computational models (artificial neural networks) that seek to explain cognition simplify by (inter alia) distinguishing brain components that process information (neurons) from those that only provide metabolic support (e.g. glia). They also rely on the principle of substrate (or medium) independence inherited from classical computationalism. Chirimuuta argues that these and other simplifications tell against the “dominant” view (p 245) that ANNs are literal representations of brains (a literal interpretation) rather than very partial analogies (her analogical interpretation).

While I’m no fan of substrate-independence and am agnostic about what ANNs tell us about biological cognition, I’m not persuaded Chirimuuta has adequately defended either her metaphysics or her rejection of computational models. For example, she claims brain complexity is “an insurmountable obstacle to there ever being one unified and general theory explaining how the brain gives rise to cognition” (p 29). Why “insurmountable”, and why does the mere fact of complexity have this consequence? Even if abstraction is a departure from the truth, in what way must cognition be “very different” from how it is presented in ANNs (p 275)? A typical free tourist map is a departure from the truth, but the layout of famous landmarks isn’t “very different” from how the map represents them. What is the metric for “very different”? Why not just be cautious about drawing epistemological and metaphysical conclusions from current computational cognitive neuroscience?

To the contrary, Chirimuuta defends a number of striking skeptical claims about computational models in Ch. 9, “Revisiting the Fallacy of Misplaced Concreteness”. The fallacy is “the mistake of taking the abstractions of science for concrete reality, confusing the model with the target, the map with the territory” (p 246). Her strongest claim is that it is nomologically impossible for ANNs to explain the psychological phenomena brains subserve, such as consciousness, general intelligence, meaning, and understanding (“the activity of sense making performed by human beings who ceaselessly act in [and?] among things”) (p 269). Chirimuuta counters two opponents, Schneider and Chalmers, who claim that while current ANNs aren’t conscious or capable of general intelligence or understanding, with additional scaling-up of some sort they could be. Chirimuuta agrees that such scaling-up is logically or conceptually possible, but her interest is nomological possibility. So is mine. What’s not clear is why it is nomologically impossible for additional model parameters (e.g. to account for the role of glia in cognitive processing) to capture what’s essential to consciousness or for these features to be realized non-biologically. Duplication of the brain or of neurons may not be required to explain how the brain produces consciousness or understanding.

Chirimuuta also claims the literal interpretation of ANNs faces a puzzle that her analogical interpretation doesn’t. An ANN can achieve human-level performance in core object recognition, but on a literal interpretation it “cannot explain, and has no potential to explain” (p 252), why and how animals have experience when they perform this task. Her analogical view solves the puzzle by saying the missing (non-analogous) bits would explain consciousness. But no one thinks ANNs are in the business of explaining experience (and Chirimuuta acknowledges this). So it’s not an explanatory gap in the model if it doesn’t provide an explanation of something it doesn’t aim to explain, even if one takes the model literally. And her claim that an ANN “has no potential to explain” experience returns us to the issue of nomological impossibility.

Lastly, Chirimuuta affirms biological naturalism roughly à la Searle, whereby consciousness and understanding arise only in biological entities and cannot be realized in nonliving material. It’s disappointing that Chirimuuta, a vision neuroscientist, doesn’t explain how biology matters any more than Searle did. She does suggest that biological naturalism fits well with embodied and embedded cognition (p 274), but these views rely on dynamical systems models that are at least as abstract as ANNs. She also asserts that no artificial entity could be conscious or understand because consciousness and understanding arise from the complexity of the brain. But claiming that those characteristics of organisms that are not shared with computers are responsible for general intelligence and consciousness (p 248) merely identifies what’s missing from current ANNs with what’s needed for consciousness and understanding. One might adopt a weaker view in which ANNs capture something. For example, the Hodgkin-Huxley model of the action potential doesn’t explain everything about neural activity, but it does capture something. (Is what it explains still a “biological feature”?) Which omitted biological features matter for consciousness and understanding, and which don’t, and why? This is the key question for any neuroscientist.
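
To make the Hodgkin-Huxley example concrete, here is a minimal simulation sketch (an illustration with the standard textbook parameters, not anything from the book or the post): four coupled equations reproduce the action potential while omitting nearly everything else about the cell.

```python
# Minimal Euler integration of the Hodgkin-Huxley equations with the
# standard textbook parameters. The model ignores anatomy, metabolism,
# glia, and all biochemistry beyond two ion channels, yet it still
# reproduces the action potential -- it "captures something".
import numpy as np

C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3      # uF/cm^2, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4            # reversal potentials, mV

def gating_rates(V):
    """Voltage-dependent opening/closing rates for the m, h, n gates.
    (Removable singularities at V = -40 and V = -55 are ignored here.)"""
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = 4.0 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = 0.125 * np.exp(-(V + 65) / 80)
    return am, bm, ah, bh, an, bn

dt, T, I_ext = 0.01, 50.0, 10.0                # ms, ms, uA/cm^2
V, m, h, n = -65.0, 0.05, 0.6, 0.32            # resting-state values
spikes, above = 0, False
for _ in range(int(T / dt)):
    am, bm, ah, bh, an, bn = gating_rates(V)
    I_ion = (g_Na * m**3 * h * (V - E_Na)      # sodium current
             + g_K * n**4 * (V - E_K)          # potassium current
             + g_L * (V - E_L))                # leak current
    V += dt * (I_ext - I_ion) / C
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    if V > 0 and not above:                    # upward zero-crossing
        spikes += 1
    above = V > 0

print(f"spikes in {T:.0f} ms at {I_ext} uA/cm^2: {spikes}")
```

Whether what these few equations explain still counts as a “biological feature” is exactly the question posed above.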

3 Comments

  1. Mazviita

    Carrie raises a lot of substantial objections. I’ll answer each of them in turn, aiming to be brief.
    Why does brain complexity present an insurmountable challenge to there being one unified explanatory theory of the brain? The first thing to note is that I explicitly tie explanation to understanding: I am not considering explanatory theories that would not be comprehensible to the best qualified specialist neuroscientists. In Chapter 7 the argument for this comes from consideration of the dimensionality reduction that neuroscientists rely on in order to develop explanatory theories. I take it to be a fixed feature of human cognitive capabilities that we cannot make sense of (visualize or otherwise qualitatively track) relationships that need to be depicted in anything over a 10-dimensional space. In the case study of current motor neuroscience, I show how dimensionality reduction reduces neural data to this manageable set of relationships, but there is reason to think that the dimensionality of task-relevant neuronal processes, especially outside of controlled laboratory conditions, is higher than this. Therefore, the conclusion is that we can have partial, low-dimensional explanatory theories, but not one unified theory that captures all the relevant relationships.
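
    To illustrate the kind of dimensionality reduction at issue, here is a minimal sketch (not from the book; the simulated data and the choice of PCA are illustrative assumptions): project high-dimensional neural activity onto its leading principal components and ask how much is left out.

    ```python
    # A toy version of the dimensionality-reduction step: simulate "neural"
    # activity driven by 50 latent processes, then ask how much of it the
    # first 10 principal components (a humanly trackable picture) capture.
    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons, n_timepoints, n_latents = 200, 1000, 50

    latents = rng.standard_normal((n_latents, n_timepoints))
    mixing = rng.standard_normal((n_neurons, n_latents))
    rates = mixing @ latents + 0.1 * rng.standard_normal((n_neurons, n_timepoints))

    # PCA via SVD of the mean-centred data matrix.
    centred = rates - rates.mean(axis=1, keepdims=True)
    _, s, _ = np.linalg.svd(centred, full_matrices=False)
    explained = np.cumsum(s**2) / np.sum(s**2)

    print(f"variance captured by 10 PCs: {explained[9]:.1%}")
    print(f"components needed for 90%:   {np.searchsorted(explained, 0.9) + 1}")
    ```

    If the task-relevant dimensionality exceeds the roughly 10 dimensions we can qualitatively track, any comprehensible low-dimensional picture of the data is necessarily partial.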

    Even if abstraction is a departure from the truth, in what way must cognition be “very different” from how it is presented in ANNs? Carrie gives the example of a map that is not very different from the actual layout of a town in its depiction of the geographical relationships between tourist attractions (i.e. the relationships that matter), even though it departs in other ways from the truth. In the case of the brain vs. ANN there is clear empirical evidence of a high degree of dissimilarity with respect to features of the brain known to be significant for cognition. For example, ANN nodes have no anatomy (they are “point neurons”) while it is clear that structural features of the brain are functionally relevant. Learning in the brain depends not only on the strengthening and weakening of synaptic connections (as modelled in the ANN) but also on many other processes at sub- and supra-cellular scales. ANNs make no attempt to model the biochemistry crucial to brain function (neurotransmitters, neuromodulators).
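
    A minimal sketch of the point-neuron simplification (an illustration, not from the book; the simple Hebbian update stands in for whatever learning rule is actually used): an ANN unit is just a weighted sum and a nonlinearity, and learning changes nothing but the weights.

    ```python
    # The "point neuron" simplification in full: an ANN unit has no anatomy
    # and no biochemistry, just a weighted sum and a nonlinearity, and
    # learning consists of nothing but weight changes. (A Hebbian update is
    # used here for brevity; deep nets use backpropagation, but it likewise
    # touches only the weights.)
    import numpy as np

    def point_neuron(x: np.ndarray, w: np.ndarray, b: float) -> float:
        """One unit: output = nonlinearity(weighted sum). Nothing else."""
        return float(np.tanh(x @ w + b))

    def hebbian_step(w: np.ndarray, x: np.ndarray, y: float, lr: float = 0.01):
        """Learning = strengthening/weakening of connection weights, only."""
        return w + lr * y * x

    rng = np.random.default_rng(1)
    w, x = rng.standard_normal(5), rng.standard_normal(5)
    y = point_neuron(x, w, b=0.0)
    w = hebbian_step(w, x, y)   # no dendrites, no glia, no neuromodulators
    ```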

    Leading on to another objection, Carrie’s point here would be that it is nomologically possible for a more elaborate ANN or other (silicon-based) computational architecture to include these additional neuronal and non-neuronal features of the brain. This is doubtful. Actual brain function depends on elaborate communication networks working at multiple levels (not dissimilar from regulatory networks elsewhere in the body). There is no reason to think that with standard electronic components this kind of functional sophistication could be duplicated for seamless integration with neural components in real time. The growing field of neuromorphic computing attests to the fact that some engineers recognize this, though their efforts are by no means guaranteed success.

    “No one thinks ANNs are in the business of explaining experience.” Actually, I don’t agree. What I do say is that current ANNs used to explain specific cognitive processes, such as deep convolutional nets in vision science, are not being touted as explanations of conscious visual experience. But at the same time, amongst deep learning optimists, there is an expectation that this technology will scale up to artificial general intelligence with consciousness. The expectation is that when this occurs, the ANNs could be reverse engineered to yield an explanation of consciousness.

    I am doubtful that this will occur, for reasons alluded to in the last paragraph of Carrie’s post, which asks for more explanation of why biology matters for consciousness. The argument here is pretty much like the parable of searching for the lost keys only on the patch of ground where the streetlight is shining. The computational framework offers the brightest, sharpest light that researchers have yet had on the question of how ordinary matter can be organized to do cognition-like things. But given that computers are so unlike humans and other animals in their ability to perform great feats of intelligence without a glimmer of sentience, it is reasonable to think that the keys to consciousness are somewhere out there in the dark. Consistent with my observation that cognitive neuroscience largely abstracts away from the biological basis of cognition, my training in visual neuroscience did not in fact prepare me to say much about the best way to begin this search. (Though, on my supervisor’s advice, I did attend the physiology lectures on photo-transduction in the retina, of which my recollection is thinking, ‘that’s very complicated!’). In the final chapter of the book, I do say that there is a role here for philosophical investigations beyond the confines of empirical science, precisely because all the insights of embodied and embedded cognition cannot be captured by modelling strategies such as dynamical systems. My final point is that I disagree with Carrie’s claim that the 4E tradition relies on this other form of abstraction. Indeed, I argue in Chapter 10 that it can only be true to its insights if it is willing to be a little fuzzier than is demanded by quantitative modelling.

  2. Carrie Figdor

    Thanks Mazviita for taking the time to respond. There is indeed a lot to discuss! I am not sure my concerns are assuaged, however. I’m not convinced that “understanding” (whatever that is) should be a criterion for explanation or for one unified explanatory theory — there’s certainly room to argue that it should be a criterion, but it seems to go beyond what we need for neuroscientific explanation. I also still don’t see the argument for nomological impossibility — the complexity of the brain is not at issue, it’s the very strong claim of n-impossibility that is at issue, and I’m not seeing how that very strong claim has been supported. The same might be said regarding why biology matters — the combined question, I suppose, is whether the missing aspects of biology that do matter (whatever they are) can’t be captured in any nomologically possible computational model. I have no doubts that current models are deficient.

  3. Mazviita

    About understanding, I’m drawing (in Chapter 2) on the recent literature on scientific understanding (Potochnik, Elgin, de Regt), which emphasises that scientists’ explanatory projects are linked to their pragmatic goals and cognitive limitations. It does not replace the notion of explanation with subjective feelings, but it resists the view that what counts as explanatory (i.e. delivering scientific understanding) can be evaluated independently of these human parameters.
    On nomological impossibility, the basic argument is that models are only models insofar as they leave out details of their target systems. A tenet of computational modelling of the brain is that function can be represented independently of reproducing or otherwise representing (most of) the structural properties of the target. In biology, as a generally observed principle, structure and function are interdependent. E.g. folded proteins have the functions that they do because of the particular structures that they have. You don’t get the function without the structure. This is the case just as much in the brain as elsewhere in the body. This implies that you cannot, except in a very gross way, replicate brain function without replicating something of the structures of the neural tissue implementing those functions. With models implemented in standard electronic hardware you can represent certain structures, such as the synaptic connections modelled in ANNs; with neuromorphic components you can potentially go further.

    The strong claim of nomological impossibility is that the full suite of cognitive functions found in living organisms could not be replicated in a non-living machine, because there are structural properties essential for those functions that pertain to the particular material properties of living cells. I should emphasise that the claim in Chapter 9 for the nomological impossibility of uploading is that models running in standard digital computers will not be able to reproduce the general intelligence with sentience that has so far only been observed in animals. If you accept the premise about the interdependence of structure and function in biology, you should accept this, because it is clear that a model in a digital computer can reproduce very few of the structural properties of a biological system. For me it is an open question how far you would get with non-living but neuromorphic hardware. My bet is, not all the way.
