More on Nagel’s “Mind and Cosmos”

The following analysis was submitted by Brains reader Bruce Mayo, a retired computational linguist with some background in philosophy, in response to Kristina’s much-discussed post. Enjoy!

Now that some of the dust has settled around Thomas Nagel’s recent book, Mind and Cosmos, it might be time to sort out the praise and complaints. This I’ve been trying to do recently, and I’ve come to the conviction that, where they have been critical, many of the comments and reviews I’ve seen misunderstand Nagel’s central issue; where praising, they have not squarely addressed the objections of the other reviewers. At the same time, readers can hardly be blamed for their misunderstandings, because Nagel formulates his central theses in rather vague terms and in the framework of a specifically philosophical vocabulary that can easily throw a non-philosophical reader off the track – as it did me, at first.

The lightning rod that seems to have drawn the most ire is his oft-repeated reservation about the possibility of explaining, in physicalist terms, the origin of life and of consciousness from inert matter. Despite his unambiguously declared atheism, this seems to leave the back door open for creationists, mystics, and other opponents of hard science. The real problem is that Nagel, like many scientists, tacitly accepts optimistic expectations concerning what a certain conception of physicalist science can really produce, as I will explain below. However, I’m sure there’s no point in beating this dead horse here.

Nagel’s central claim is that a naturalistic or “reductionist” science – one that recognizes as explanatory terms only efficient causes, in Aristotelian terminology – cannot produce explanations of life and consciousness. Nagel’s real subject is the nature of scientific explanation, specifically what he calls naturalistic reductionism. The alternative for which he pleads is something that allows for teleology of some sort, a pantheism anchored in nature. After mulling this over, I’ve persuaded myself that the idea is hardly as far-fetched as it seems to many readers, and that it is in fact fairly close to what natural scientists really do, if not always to what they say they do.

I imagine most people associate the notion of explanatory reduction with things like Newton’s laws of motion and the atomic theory of matter. Conceptually, reduction is the inverse of prediction, and we expect the elementary terms of a reduction to provide correct predictions of the phenomena they explain. Properties or phenomena that are not predicted count, of course, against the validity of a reduction, unless we allow for the possibility of “emergent” properties that cannot or must not be accounted for at the level of the final reduction. One way of introducing non-predictable emergent properties is by appeal to some form of teleology, with the goal or purpose ‘pulling’ the prediction toward an end-point, instead of pushing it blindly in all directions allowed by the combinatorics of the elementary terms. Teleology, the appeal to Aristotelian final causes, raises a red flag for many scientists, although they actually use it quite a lot.

Suppose we take a set of Lego blocks, and the things we can construct with them, as the domain of a very simple science. We can list all possible ways of attaching two blocks together, and then use that list to describe how to attach a third, a fourth, and so on, recursively. Eventually we have descriptions of an immense set of objects – walls, bridges, houses, and of course many objects that have no meaning for us at all. Now suppose, further, that before we start testing this theory of Lego objects, a mischievous daemon introduces an identical, slight, imperceptible curvature into the shape of each block. We will have no knowledge of it because it is so slight as to be unmeasurable in any individual block. The first objects we build will be in accord with the theory that we developed for perfectly rectangular blocks, but as they grow, we may find that many objects can’t be completed because their members become warped in various ways as a result of the accumulating curvature. On the other hand, we find that some objects, like gently curving walls, can be extended indefinitely, occasionally producing, among other things, cylindrical towers of some large but invariable diameter. This development is not predicted by the theory, because the curvature is imperceptible in individual blocks and is not represented in the combining rules of our theory.
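The daemon’s trick can be illustrated with a toy calculation (my sketch, not the author’s; the bend value and block length are arbitrary assumptions):

```python
import math

# Toy model of the daemonized blocks: each block carries an identical,
# imperceptibly small bend (in radians) that no measurement of a single
# block could detect, yet it fixes a definite tower circumference.
BEND_PER_BLOCK = 1e-3   # hypothetical daemon-introduced curvature, radians
BLOCK_LENGTH = 1.0      # arbitrary units

def deflection(n_blocks):
    """Total angular deflection accumulated over a row of n blocks."""
    return n_blocks * BEND_PER_BLOCK

# A gently curving wall closes on itself once its deflection reaches 2*pi,
# so every tower has the same large but invariable circumference.
blocks_per_tower = math.ceil(2 * math.pi / BEND_PER_BLOCK)
circumference = blocks_per_tower * BLOCK_LENGTH

print("one-block deflection (degrees):", math.degrees(BEND_PER_BLOCK))
print("blocks per tower:", blocks_per_tower)
```

Nothing in the combining rules mentions towers; the invariable diameter falls out of a property too small to appear in the theory of individual blocks.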

Cylindrical tower-forming is, in a simple way, an emergent property of the daemonized Lego-block physics. Since the curvature of individual blocks is imperceptible, we cannot be sure of its existence, and we cannot legitimately build it into the ground-level rules of combination; but empirically we see that, as objects grow larger, the ground-level theory breaks down and something else increasingly determines the objects’ shapes. It appears to us that somehow the Lego blocks were meant for building cylindrical towers, but we do not and cannot know exactly what the cause is, although we can postulate that it has to do with some elementary property of the blocks. Block builders will say something like “As structures get very large, the blocks seem to want to form coherent curved surfaces; when the direction of curvature remains constant, we get towers.” The attribution of intentionality is, of course, not testable, but also not disprovable, since the underlying, non-intentional theory does not account for it.

The analogy to chemistry is evident and legitimate. There is a level on which the combining properties of atoms can be derived “ab initio” from the underlying quantum-mechanical physics of fields, waves and particles, but this level is severely limited in predictive power, simply because the number of factors that can influence a chemical structure is potentially unlimited, so that predictive accuracy for even simple molecules entails massive amounts of computation. For complex structures chemists have to fall back on empirical crutches, and the mathematical models can even be completely in error. I’m not one myself, but I believe that for organic chemists chemical reality is not the reductive level of wave functions and orbitals but the emergent one of stick models and cookbook teleologies; the quantum models on which chemical modeling software is built have for them the status of interesting tales that can be illuminating and often helpful, but are removed from the reality they work with. From my own experience I know that computational models of electronic circuits present similar difficulties, and engineers will say things like “You can’t explain this circuit’s oscillation from the model – that kind of transistor just wants to oscillate.” A hard-nosed materialist may object, “That’s silly, transistors don’t want anything.” But the engineer will probably shrug and say, “Well, you could introduce terms in the model for as many stray reactances and non-linearities as you like, but it’ll become an unsolvable computational monster, and you can’t measure those values accurately, anyway. So if you want to get any work done, it’s better to think of the transistor as doing everything it can to oscillate, i.e., teleologically.”

The inaccessibility of the bottom level of physicalist reduction is not merely an accident of lame computers and imperfect measuring instruments. This layer is ultimately hidden from us by a veil of computational complexity and measurement uncertainty that – as a matter of principle – we can never, never pierce. This is what complexity theorists have been trying to make clear to us for years: there are classes of surprisingly simple reductionist problems that, to solve, would take more computational power than the entire known universe in its entire age could make available. You can’t predict the weather from the level of quantum physics – not now, not ever. Thus, whether we attribute the emergent properties of living organisms to uncomputable, non-derivable properties of underlying fields and particles or to interfering mischievous daemons is of no consequence, except as the one or the other helps us to understand the emergent properties better. However, if as monistic materialists we proscribe teleological statements about transistors and molecules, then we must do the same with respect to plants seeking light and Samaritans trying to do good. This would confront us with massive problems in adequately understanding both complex physical phenomena and human actions.
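The veil of complexity can be made concrete with a back-of-envelope calculation (my numbers, standard order-of-magnitude estimates, not claims from the article): exactly simulating n interacting two-level quantum systems requires storing on the order of 2**n complex amplitudes.

```python
# Back-of-envelope sketch: a full state-vector simulation of n two-level
# quantum systems needs 2**n complex amplitudes, roughly 16 bytes each.
def state_vector_bytes(n):
    """Memory needed to store the quantum state of n two-level systems."""
    return (2 ** n) * 16

# A common order-of-magnitude estimate for atoms in the observable universe.
ATOMS_IN_UNIVERSE = 10 ** 80

for n in (50, 300):
    print(n, "particles need", state_vector_bytes(n), "bytes")

# Already at ~300 particles the amplitude count exceeds the number of
# atoms in the observable universe: no conceivable computer suffices.
print(state_vector_bytes(300) // 16 > ATOMS_IN_UNIVERSE)
```

The barrier is not an engineering shortfall but arithmetic: a mere three hundred particles already outrun any physically possible memory.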

Nagel’s jump from emergent properties of matter to consciousness, experience, ethics, and “mental substance” may throw off some readers. Consciousness, in the tradition of philosophy of mind, has a specific meaning, going back to Descartes’ arguments that mental knowledge, e.g., the laws of logic and geometry, is prior to knowledge gained by experience because it is not contingent. There is no experiment that could disprove an elementary axiom of logic, nor the content of an elementary experience, like warmth. Although Descartes introduced something like what we now call the mind–matter dichotomy, it is not at all necessary, and it is not a dogma that Nagel accepts. In “What Is It Like to Be a Bat?” Nagel allows, “Perhaps anything [e.g., a robot] complex enough to behave like a person would have experiences” (1974, Note 2). In Mind and Cosmos, in line with his materialistic monism, I believe he is merely objecting to the idea that one could analyze the contents of consciousness down to a level of efficient or formal causes. Here Nagel is less interested in consciousness per se than in the contents of consciousness.

“Conscious experience” is, thus, not something non-material; it can be thought of as something like computer software that is able to talk about its internal structures and events, in addition to data that are input to the computer. Accordingly, the languages of subjective conscious experience and of objective description simply refer to different things, and for that reason they are by and large not inter-translatable. That – I would say – is why I can never know what it’s like to be a bat, and nothing further is mysterious or unexplainable about this dualism. Carrying on a conversation with me, a robot may tell me “I feel a temperature error in my right actuator.” I check the actuator; it’s fine. I say, “Your actuator is cold.” The robot insists, “I feel that it’s hot.” The objective value I measure at the actuator is not what the robot is talking about. The robot’s subjective, ‘conscious’ report is predicated on a value transmitted to and available only in some data structure hidden deep within its software. The disagreement between my measurement of the actuator’s temperature and the robot’s ‘feeling’ is simply the result of different points of reference: I am talking about what I see at the actuator or on the data transmission line to the central processor; the robot is talking about an internal representation that may lie many layers of processing removed from the data input. Just knowing the physical construction of the robot (the physicalist reduction), but not having the source code of its software, I can never test independently what is represented in the robot’s ‘mind’ (I have to ask non-specialists to take this on faith). So I must assume that some software error or ‘mental confusion’ is causing an incorrect value to appear in the actuator-hotness data structure, which is what the robot is reporting.
Moreover, what is “hot” in the robot could be something like a trip threshold, so that any analogy constructed between my own experience of heat and the robot’s internal representation of actuator temperature could be quite loose. Consciousness is unique to each organism or apparatus, Nagel maintains, but this is a result of the uniqueness of the internal structures, not of any non-material, mental ghosts.
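The robot scenario can be sketched in a few lines (every name and value here is my hypothetical construction, not any real robot API): the externally measured value and the internally represented one diverge through layers of processing.

```python
# Hypothetical robot internals: the 'feeling' is predicated on a data
# structure many processing layers removed from the physical input.
ACTUATOR_TEMP_C = 20.0          # what my thermometer reads: cold

def sensor_layer(raw):
    return raw * 2.2            # a miscalibrated gain, unknown to outsiders

def filtering_layer(value):
    return max(value, 45.0)     # a buggy clamp deep in the software

# The internal representation the robot actually 'talks about'.
internal_state = {
    "right_actuator_temp": filtering_layer(sensor_layer(ACTUATOR_TEMP_C))
}

def robot_report():
    t = internal_state["right_actuator_temp"]
    return "I feel that it's hot" if t > 40.0 else "It feels fine"

print(robot_report())   # sincere report of heat; my thermometer says cold
```

Without the source code, an outside observer sees only the disagreement between the two reports, not the layers that produced it.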

Thus, when Nagel writes about “explaining consciousness,” it is not the fact of consciousness itself but the contents of consciousness that he finds hard to explain from a purely reductionist standpoint. Two kinds of explanation for the contents of consciousness can be considered: first, a reductionist one, along the lines of neurobiology; second, a functionalist explanation, along the lines of logic or computer programming.

The reductionist explanation is easy to dismiss. If the contents of consciousness in our robot are made of program code and data, a reductionist explanation of the robot’s consciousness would need to predict these from the states of the transistors in its computer, since these are the elementary units of its functioning. However, no one, even with specialized tools, can deduce how a large computer program functions from the states of the billions of transistors in the computer. Barriers are imposed not only by computational complexity, but also by structural problems – for example, by the fact that data can be encoded in various unknowable ways, or that the results of frequently performed computations may simply be stored in memory rather than being observably computed, leaving no way to identify the underlying rule. Such phenomena are observed in language processing. A mapping from the mind’s neurons to thought can hardly be less complex than that from transistors to software, even if many rough mappings from neural activation areas to experiential states (like correct identification of words) are known; so if the latter is not achievable, the former is all the more out of reach. A full reduction of all contents of consciousness to neuronal activity will probably remain forever hidden behind the veil of complexity.
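The point about stored results hiding the underlying rule can be illustrated with memoization (my example, not the author’s): once a computation is cached, an observer of the memory level sees only a table of numbers.

```python
from functools import lru_cache

# Once results are cached, an observer of memory states sees only stored
# values, not the rule that produced them.
@lru_cache(maxsize=None)
def rule(n):
    return n * n + 1    # the 'underlying rule', invisible after caching

for n in range(5):
    rule(n)             # after this, lookups never recompute the rule

# A transistor-level observer would see only this table; infinitely many
# different rules are consistent with these five stored values.
observed = {n: rule(n) for n in range(5)}
print(observed)
```

The table is all the physical level offers; the rule itself is never observably computed again.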

I find the functionalist explanation less easy to dismiss. This assumes that it will be possible to describe the mind’s contents in an abstract formal system like that of logic or some computer-programming formalism. The mathematician Alan Turing thought we’d eventually get robots that could answer questions about poetry and ethical truths – but someone would have to program them. Nagel allows for the possibility of recognizably conscious robots (as cited above), but at the same time he doubts that we could simulate the important things that make up the contents of consciousness in a living being. I’m not sure I understand his position, but there appear to be two kinds of problem. One is the bat problem: the unavoidable incomparability between the structures that would exist in the robot and the structures we have available in our own consciousness (since every individual’s experience is different). The other concerns simulating our ability to discover and identify truth. If we code true propositions into the software simulation of a mind, we have merely copied information from one place to another and have explained nothing. If we try to use learning algorithms (or statistical analysis) to let the robot independently obtain ‘conscious’ knowledge of things like geometry and ethics, we must know beforehand what is to be learned. This is the argument from “poverty of the stimulus” that linguists invoke for the presence in the mind of a universal grammar (which should probably be called a language-learning program). The things to be learned must, of course, have real existence in the philosophical sense: the evolution of birds could not learn how to make aerodynamically optimal wings if there were no laws of aerodynamics, nor could we learn the laws of physics if they had no independent existence.
Likewise, Nagel might argue (though he does not make this very explicit) that if there were no objectively given principles of ethics, it would be impossible for a learning algorithm to converge on them; yet we seem to observe some degree of convergence across all societies and in the course of human history.
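The convergence point admits a toy illustration (entirely my construction, not Nagel’s): a learning procedure settles on a stable answer only when there is an objective regularity to be learned.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def fit_slope(pairs):
    """Least-squares slope through the origin: the 'learning procedure'."""
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, y in pairs)
    return num / den

# Case 1: data generated by a real law (y = 3x plus small noise).
lawful = [(x, 3 * x + random.gauss(0, 0.1)) for x in range(1, 200)]

# Case 2: no law at all -- the 'observations' are pure noise.
lawless = [(x, random.gauss(0, 10)) for x in range(1, 200)]

print(fit_slope(lawful))   # close to 3: the learner recovers the law
print(fit_slope(lawless))  # close to 0: nothing there to converge on
```

Where there is no law, the learner returns only an artifact of the sample; convergence presupposes something objectively there.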

It appears that Nagel sees a sort of Lego-block teleology as present in the material that has furnished the basis for evolution on our planet, and sees this teleology as indeed having produced learning or discovery algorithms in humans that can converge on objective truths of certain kinds. Simply saying, as the opponents of teleology would, that these algorithms appeared by chance, through some forever-incalculable process, in the course of natural selection begs the question, and in any case is no more informative than attributing them to a benevolent (or mischievous) daemon or author of intelligent design.

In sum, I think Nagel’s book is a worthwhile contribution to the theory of knowledge and science, but I do wish Nagel had been more explicit and precise. His ultimate speculation, “a cosmic predisposition to the formation of life, consciousness, and the value that is inseparable from them”, is just that, and I have to agree that strongly taken positions for or against it can only be called ideology. Curiously, Nagel dismisses out of hand the explanatory content of alternative universes (Note 9, Chapt. 4). But there are some peculiar features in the construction of our universe. The physical constants – the gravitational constant, the mass of the proton, the quantum of energy – all seem to be arbitrary numbers drawn out of a hat, with no pattern. There is just one element, carbon, that easily forms the backbone of large, complex molecular structures, and just one molecule, water, that easily forms complex gels. Change any of the constants just slightly, and calculations seem to show that all of this goes away, and – so I have read; I am not a cosmologist – nothing with the same potential appears to replace it. Strange?


2 Comments

  1. The whole point of science in the last several hundred years has been
    to banish teleology from our explanations of the world. We use it
    colloquially, as a shorthand, to cover for, as you say, bad measurement
    or insufficient computation. The point is, however, that the universe
    itself, as it clanks along moment to moment, has no use for it as it
    decides what to do next. Only we, with our feeble brains and our
    blurry eyes, use it to describe that universe.

    The question is, does the universe have to finally resort to teleology in
    order to pull off life and/or consciousness? Or are they, too,
    some kind of “emergent” property of blind, stupid billiard balls?

  2. Pingback: Philosophers’ Carnival #154 | Nick Byrd

Comments are closed.