On Individuating LOT Symbols


Schneider, Susan (unpublished). “The Nature of Symbols in the Language of Thought.”

Schneider addresses the important problem of how to individuate LOT symbols. There is no accepted or even fully worked out solution to this problem in the literature. She gives various interesting and original arguments to the effect that LOT symbols are individuated by “total” computational role. In her view, computational role is found by Ramsifying over narrow cognitive science laws. Finally, she gives some interesting responses to the objection that if symbols are individuated holistically by their total computational role, then symbols cannot be shared. One of her responses is that cognitive science also has broad intentional laws, which do not range over symbols but over broad contents, which are publicly shared. So at least those laws apply to all subjects even though symbols aren’t shared.

Although I am sympathetic to a lot of what she says, I have some concerns.

Concern 1. Her proposal requires a non-semantic notion of symbol, but her non-semantic notion of symbol doesn’t seem to be well grounded. (Many authors have argued that nothing can be a symbol in the relevant sense without being individuated at least in part by semantic properties.) At the beginning, she appeals to Haugeland’s account of computation in terms of automatic formal systems. Unfortunately, the only clear and rigorous explication of the notion of formal systems that we possess is in terms of computation, so Haugeland’s account is circular. I think referring to Haugeland’s work in this context is unhelpful. Later, she gives her own account in terms of Ramsification over narrow cognitive science laws. But this raises the question of how these narrow cognitive science laws are to be discovered, which becomes all the more pressing in light of her stated view, later in the paper, that there are two sets of cognitive science laws: the narrow ones and the broad ones (ranging over broad contents). If ordinary cognitive science laws range over broad contents, how are we to discover the narrow ones? By doing neuroscience? (At some point, Schneider briefly mentions the “neural code,” which, as I understand these matters, is not related to her issue.) Without at least a sketch of an account of how the narrow laws are to be found, I am unclear on how this proposal is supposed to work.

I think Concern 1 might be addressed by appealing to the non-semantic notion of computation that I have developed in some recent papers (forthcoming in Phil Studies and forthcoming in Australasian J. Phil).

Concern 2: Ramsification is popular among philosophers of mind, but it is only a formal maneuver. If this view is going to have real bite as philosophy of cognitive science, Ramsification should be fleshed out in terms of some individuative strategy that actually plays a role in science.
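For readers unfamiliar with the maneuver, the standard schematic form is roughly this (my gloss, not Schneider’s own formulation): take the conjunction T of the relevant narrow laws, and replace each symbol-denoting term with an existentially bound variable:

```latex
% Schematic Ramsey sentence (a gloss, not Schneider's formulation):
% T is the conjunction of the relevant narrow laws, and s_1, ..., s_n
% are the symbol-denoting terms occurring in them.
\[
  T(s_1, \dots, s_n)
  \quad\leadsto\quad
  \exists x_1 \cdots \exists x_n \, T(x_1, \dots, x_n)
\]
% A symbol of type s_i is then whatever occupies the i-th role in a
% realization of T.
```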

I think Concern 2 might be addressed by appealing to functional explanation, or even better, mechanistic explanation. This is the way actual cognitive scientists go about individuating their explanantia. Schneider should be sympathetic to this move, since she appeals to functional explanation later in the paper. Notice that an appeal to mechanistic explanation is already part of my account of computational individuation, so that both Concerns 1 and 2 can be addressed by appealing to my account. The crucial observation, which is missing from her paper, is that symbols are components (or states of a component) of a computing mechanism. If you have a mechanistic explanation of a system, you thereby have access to individuation conditions for its components, including symbols (in the case of computing mechanisms).

Concern 3: Pending a resolution of Concerns 1 and 2, I would like to know more about what Schneider means by “total” computational role and, especially, how hypotheses about whether something is a “total” computational role could be tested. If it includes all possible input conditions and internal states, it seems that total computational role can never be discovered. For how can we be sure that we have all the relevant data? Do we have to test the system under all relevant conditions? Is this even possible? Is it possible to know that we have succeeded?

I think Concern 3 might be addressed by appealing, once again, to functional explanation or better, mechanistic explanation. For as Schneider points out in various places in her own paper, mechanistic explanation gives you a way to individuate components and their activities (including, I say, symbols). Furthermore, in order to find a mechanistic explanation, you don’t need to study all possible computations. You can proceed piecemeal, component by component, operation by operation.

When you do the mechanistic explanation of a computing mechanism, you discover that total computational role supervenes on (what may be called) primitive computational role plus input and internal state conditions. So all you need to individuate a symbol is its primitive computational role, i.e. the way the symbol affects the computational architecture (components, their primitive computational operations, and their organization). So, pending further explication of what Schneider means by “total”, as far as I can tell you don’t need “total” computational role in order to individuate the symbols. (Notice that in her paper Schneider already states individuation conditions similar to the ones I suggest, under T2 and T4, but she immediately shifts from those to “total” computational role.) I think individuation in terms of primitive computational role would generate a notion of symbol that can be shared between subjects, provided that subjects share their basic computational architecture.
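The supervenience claim can be illustrated with a toy machine; this is an illustration of my own, with invented operation names, not anything drawn from Schneider’s paper. Once each symbol’s primitive role (the single-step operation it triggers) is fixed, the symbol’s behavior under any input condition follows:

```python
# Toy illustration (my own sketch, not anyone's cognitive model):
# each symbol's *primitive* role is the single-step operation it
# triggers; its *total* role -- its effect in any run, under any
# input conditions -- is fixed once the primitive roles and the
# machine's organization are fixed.

PRIMITIVE_ROLE = {
    "INC": lambda stack: stack.append(stack.pop() + 1),
    "ADD": lambda stack: stack.append(stack.pop() + stack.pop()),
}

def run(program, inputs):
    """Deterministic run: the output supervenes on the primitive
    roles plus the input conditions; nothing else is consulted."""
    stack = list(inputs)
    for symbol in program:
        PRIMITIVE_ROLE[symbol](stack)
    return stack

# The same primitive roles, combined with different input conditions,
# determine different total behaviors:
print(run(["INC", "ADD"], [2, 3]))   # [6]
print(run(["INC", "ADD"], [10, 0]))  # [11]
```

Nothing beyond the primitive roles and the inputs needs to be specified to recover the machine’s behavior in every run.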


One comment

  1. Susan Schneider

    Thanks very much to Gualtiero Piccinini for his intriguing comments on my paper. I am very glad to have such useful feedback on a project in progress. In what follows I reply to his three main concerns.

    Concern (1). This concern actually appears to be a number of distinct concerns. I should mention at the outset that the first argument of the paper, the argument that claimed that individuation by total computational role is built into classicism, had in mind a similar notion of a symbol as is discussed in Piccinini’s *Phil Studies* paper (however, his account appeals more directly to computability theory).

    In light of our common interest in this view, I was surprised by the complaint that the notion of symbol is not “well-grounded.” As I don’t think a symbol should be individuated by its semantic features, I am not terribly concerned with the objection that my notion of a symbol is not “well-grounded.” For one thing, a proponent of LOT appeals to LOT to naturalize intentionality, and if one takes intentionality to be a physicalistically specifiable relation between a symbol and some entity in the world, then it had better be the case that the symbol itself is individuated non-semantically. Otherwise, the attempt to naturalize intentionality would be circular, appealing to a notion of a symbol which is semantic and thus intentional. For another, it is desirable to allow internal duplicates to have the same symbols yet have different interpretations (as in Twin Earth). This captures the sense in which the twins are psychologically the same – i.e., their inner psychology is the same – while also capturing the externalist intuition that the meanings differ.

    Next, Piccinini expresses the concern that Haugeland’s account is circular, but I take it that by this he means that Haugeland’s definition of computation is circular. But I am trying to define the notion of a symbol, not the notion of a computation. I only raise Haugeland’s discussion of symbols because his *AI: the Very Idea* contains a clear and influential articulation of the notion of a classical token. (I find his example of the chess game useful, especially considering that many readers may not know how to program). However, one could talk about how primitive symbols work in a programming language. It makes little sense to those who program to talk of two symbols of the same type which differ in their causal role in the program.
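    To make the point concrete, here is a toy interpreter (an illustration of my own; the operation names are invented). The interpreter consults nothing but a token’s symbol type, so two tokens of type “ADD” cannot differ in their causal role:

```python
# Toy interpreter (illustrative sketch; the operation names are
# invented). A token's causal role is a function of its symbol type
# alone: the interpreter consults nothing but the type.

OPS = {"ADD": lambda a, b: a + b, "SUB": lambda a, b: a - b}

def step(symbol, a, b):
    # Two tokens of type "ADD" trigger the very same operation.
    return OPS[symbol](a, b)

program = ["ADD", "SUB", "ADD"]  # two tokens of the type "ADD"
print([step(s, 5, 2) for s in program])  # [7, 3, 7]
```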

    To turn to the third issue, as Piccinini notes, I give an account of the individuation of symbols in terms of Ramsification over narrow laws. He asks: how are these laws to be discovered? As I say in the paper, if the LOT thesis is correct then presumably such laws will be algorithms identified by a completed cognitive science. If there’s a LOT, then it is the language of what Fodor calls “the central systems” – domain general systems which are informationally unencapsulated, unlike “modules” in Fodor’s sense of the term. On my view, the algorithms discovered by cognitive science that describe these “central” processes are the laws which enter into the Ramsification. (My theory of symbols is a version of *a posteriori* functionalism). (Elsewhere I’ve argued for the view that computationalism will succeed in explaining the central systems, contra Fodor’s concerns. See my “Yes, It Does: a Diatribe on Fodor’s *The Mind Doesn’t Work that Way*”, available at my website, under “work in progress”).

    Re: Piccinini’s comments on broad and narrow laws, I do not believe that I wrote that broad laws are the major focus of cognitive science. Instead, they are generalizations that identify phenomena that require computational explanation by cognitive science. Consider, e.g., the moon illusion, which abstracts away from computational details, stating a phenomenon which needs to be filled in by a concrete computational account, which proceeds by functional decomposition. Functional decomposition, rather than laws, is the major focus of much of cognitive science.

    In addition to discussing broad laws, the paper discussed three kinds of narrow laws (i.e., laws sensitive to symbols). As the paper mentioned, these are discovered by various fields of cognitive science. (I listed examples from three research areas: social cognition, concepts, and attention).

    Concern (2): Most cognitive scientists may not have heard of Ramsification, but it is in fact a common practice in science to classify types by the causal role of the tokens (with respect to entities at the relevant scientific level). And as discussed above, taxonomizing symbols by computational role is an individuative strategy in computer science. There is a further way that science informs the proposed individuation condition: if the Ramsification is based on a conjunction of laws discovered by cognitive science, then the Ramsification does in fact draw from cognitive science.

    Piccinini then writes: “I think Concern 2 might be addressed by appealing to functional explanation, or even better, mechanistic explanation. This is the way actual cognitive scientists go about individuating their explanantia. Schneider should be sympathetic to this move, since she appeals to functional explanation later in the paper. Notice that an appeal to mechanistic explanation is already part of my account of computational individuation, so that both Concerns 1 and 2 can be addressed by appealing to my account. The crucial observation, which is missing from her paper, is that symbols are components (or states of a component) of a computing mechanism. If you have a mechanistic explanation of a system, you thereby have access to individuation conditions for its components, including symbols (in the case of computing mechanisms).”

    Reply: Actual work in cognitive science that aims to provide functional decompositions of cognitive capacities does not, as far as I can tell, go so far as to advance particular ways of individuating the cognitive states that the system processes (as discussed in the paper). That is, outside of computer science, I don’t see these individuative practices built into actual functional decompositions. It would be interesting if Piccinini has found something. But I agree that, in principle, if one has the information about the algorithms that characterize a system/subsystem, the states can be taxonomized by their role in the algorithms of the relevant system/subsystem.

    Reply to concern 3: Total computational role is cashed out in terms of Ramsification of *all* the laws involving the given symbol. Now, it could be that only the laws relating symbols to other symbols are needed (and not laws linking symbols to inputs or outputs). For instance, perhaps symbols can be individuated by the role they play in the central system, or even some part of the central system, such as long-term memory. I’m undecided about these issues; but notice that the condition I gave assumed the system was the entire cognitive and perceptual system. I was trying to defend the worst case scenario for mental state holism and show that psychology is nonetheless possible. So let us assume (with the paper) that Ramsification includes the laws relating symbols to inputs and outputs. Piccinini mentions the epistemic worry: “how can we be sure that we have all the relevant data?” This concern doesn’t worry me – that is, it doesn’t worry me any more than it does in the case of, say, microphysics – for it seems to be just the usual problem with the discovery of laws in science: how do we really know that we have a sufficient range and number of tests to discover the laws?

    Piccinini suspects that this concern can be addressed by appealing to mechanistic explanation. He writes: “When you do the mechanistic explanation of a computing mechanism, you discover that total computational role supervenes on (what may be called) primitive computational role plus input and internal state conditions. So all you need to individuate a symbol is its primitive computational role, i.e. the way the symbol affects the computational architecture (components, their primitive computational operations, and their organization).”

    Reply: I would be curious how such an account would go when the relevant system is a brain. What “primitive computational role” would be in play here? A natural move is to look to neuroscience. I have been canvassing various bits of research that might be of interest to concept or symbol individuation. Unfortunately, there’s no clear manner of individuating brain states in the central systems. The case of “grandmother” cells has of course occurred to me, but surprisingly, this research ultimately says little about the individuation of cognitive states.

    Thanks again to Gualtiero for his thought-provoking remarks.
