My current project aims to establish that if there is a profound, principled division between mind and world, then it is only occurrent states that are within the mind. But my discussion in these posts is confined to a hypothesis I plan to use in establishing that thesis, namely:
(H) If there is a profound, principled contrast between the mental and the non-mental, then this contrast is explained by (or consists in) the special features of occurrent states. (And if the mental/non-mental contrast is superficial or unprincipled, the impression that there is a profound, principled contrast between these is explained by the special features that occurrent states appear to have.)
As outlined in the previous posts (here & here), I intend to support this hypothesis by considering, in turn, the characteristics that are regarded, by a substantial segment of philosophers, as (i) unique to mentality, and (ii) significant enough to ground a non-trivial contrast between what is within the mind and what lies outside it. (See post 1 for the tentative list of characteristics.)
In this post I sketch, in very broad strokes, my argument for (H) as it concerns the second characteristic on the list, Privacy. A state is private iff it can be known via a method uniquely available to the bearer(s) of the state. So the point I’m arguing for is this:
[Privacy] If mentality differs from the non-mental profoundly and in principle, in virtue of the fact that only mental states are private, then it is exclusively occurrent states that are private. (And if dispositional states are private, then even if mental states are exclusively private, this does not constitute a profound, principled distinction between the mental and the non-mental.)
To keep things manageable, I focus on dispositional beliefs in particular. My argument relies on a variant of Clark and Chalmers’ “extended mind” scenario (Clark and Chalmers 1998). In their familiar example, Otto, an Alzheimer’s sufferer, stores information in a notebook. This information plays broadly the same role, in Otto’s cognitive economy, as information stored as “internal” beliefs plays in the cognitive economy of an ordinary person. Specifically, the information is consistently available to guide behavior; easily accessed; automatically endorsed (by Otto) when accessed; and present in the notebook because Otto previously endorsed it (ibid., 17). Clark and Chalmers conclude that the information in the notebook partly constitutes Otto’s dispositional beliefs.
My variant on this scenario is prompted by two concerns about the Otto example. The first is that the role of the notebook records, in Otto’s cognitive economy, appears to differ significantly from the role of ordinary memory stores in an ordinary person’s cognitive economy. In particular: the information in the notebook isn’t automatically revised or updated in reaction to Otto’s learning new information (Weiskopf 2008).
The second concern about the Otto example is my own. The information in the notebook is not available to guide behavior unless Otto retrieves it. And retrieving this information involves an occurrent belief. E.g., the information about the location of the MoMA will not affect Otto’s behavior unless he reads the notebook, and occurrently thinks “The MoMA is on 53rd St.”. So the occurrent endorsement of the stored content intervenes between the dispositional belief and Otto’s behavior. Moreover, the occurrent thought is arguably better suited to contribute to an intentional explanation of Otto’s behavior. For if Otto misread the “3” as an “8”, he would head towards 58th St, an action rationalized by his occurrent thought, but not by his dispositional belief. This raises doubts about whether the extended attitudes, partly constituted by the notebook records, are genuinely active, as the extended mind view (aka “active externalism”) claims.
Neither of these concerns threatens the spirit of the extended mind argument, however. For they both stem from facts about how notebooks—which are very low-tech devices—operate. A more sophisticated platform will be a more plausible candidate for extending the mind.
So I propose to engage in a bit of science fiction. The action takes place in the future, when we know much more about the brain—specifically, when we have learned to identify thoughts through brain imaging (or perhaps some newfangled type of scan yet to be devised).
Otto is an accomplished neuroscientist, who tinkers with computer programming in his spare time. In middle age, he learns that he will likely develop Alzheimer’s disease. (Remarkably, the disease has not yet been eradicated.) In preparation for living with the disease, he arranges a complex system of information storage and retrieval, involving a powerful computer kept in his house. The computer is linked with Otto’s organic brain through a stable satellite connection that enables a free flow of information between them. This arrangement avoids the two concerns about the notebook example. Otto programs the computer, and links it to his (organic) brain, in ways designed to ensure that his total information set is continuously updated and revised. And Otto needn’t deliberately consult the computer for new information. The computer continuously scans Otto’s brain (thus identifying his thoughts and internally-stored attitudes), and when it detects that Otto needs some information, such as the location of the museum, it makes that information available to him. This may involve causing an occurrent belief about the museum’s location; but it may consist simply in guiding him towards 53rd St. when it detects that he has set out for the museum.
Otto loads the computer with static information such as geographical facts and his birthdate, as well as dynamic information such as the current weather forecast and his plans for the day. But when setting up this system Otto realizes that, as his disease progresses, he may become a poor judge of which bits of information are worth remembering. So he programs the computer to detect new information that appears in his organic brain and to store this information on its own, without his deliberately entering it. This marks another improvement on the notebook example, since most ordinary beliefs and intentions are stored as standing attitudes without any deliberate action on our part.
Now suppose that the proposition “The MoMA is on 53rd St.” is stored in the computer, and that this record partly constitutes Otto’s dispositional belief to that effect. Does Otto have a special method for knowing that he believes this?
Otto does seem to have a method of knowing this that only he can use. Namely, he can retrieve this information directly. For example, if Otto explicitly considers the question “Where is the MoMA?”, the computer will detect this process and make available the needed information, in the form of an occurrent thought. His ensuing thought, “that’s right, the MoMA is on 53rd St!”, doesn’t itself constitute knowledge that he has a dispositional belief to that effect. But it may provide a basis for such knowledge. Otto could infer, from the fact that this occurred to him, that he had the dispositional belief; or perhaps the occurrence of that thought warrants the belief “I believe that the MoMA is on 53rd St.”, by reliably indicating the presence of the dispositional belief. (Different accounts of self-knowledge will construe these details differently.)
Other people can recognize that Otto has this belief only by using less direct means. They must ask Otto, or find this information in the computer, and note that the computer is appropriately linked to Otto’s organic brain.
In fact, it seems that the only exclusively first-personal method for knowing dispositional attitudes rests on their relation to occurrent attitudes. This is not to say that knowing a dispositional belief always involves entertaining the corresponding occurrent thought or attitude. For suppose that Otto finds himself heading to 53rd St. He might retrieve his stored plan for the day—viz., the standing intention to visit the MoMA that day—by thinking to himself “now where was I going?” The computer would then prompt the thought “that’s right, to the MoMA!” From this information, and the observation that he was headed towards 53rd St., he could infer that he believes that the MoMA is on 53rd St. And he could thereby achieve knowledge of his dispositional belief without relying on the corresponding occurrent thought. (Admittedly, this would be an odd route to self-knowledge.)
This suggests that, if privacy is a feature of dispositional attitudes, it is a feature they possess only derivatively, in virtue of their relation to occurrent thoughts or attitudes. Specifically, my possession of an exclusively first-personal method of grasping my dispositional attitudes consists in two facts. (i) Only I am in a position to directly “activate” my dispositional attitudes, as it were—that is, to bring it about that my dispositional attitudes are manifested in occurrent thoughts (or in behavior, etc. linked to occurrent thoughts, as in the odd case just mentioned). (ii) My occurrent thoughts are private: I have an exclusively first-personal method of recognizing them.
Note that Otto’s dispositional attitudes would be derivatively private, in this way, even if he expanded his external information stores indefinitely. Suppose that he links up his powerful computer to a large dynamic internet database, which is continually updated and revised. He then programs the computer to retrieve information from the database when needed. (And perhaps also to add to the database, when appropriate: e.g., when Otto makes a new discovery that might be useful to others who use the database.) Arguably, in this scenario Otto’s dispositional beliefs extend to the database. And they are derivatively private: if Otto explicitly considers the question “Where is the MoMA?”, the computer will detect this process, retrieve the information from the database, and make it available in the form of an occurrent thought.
Some might resist the claim that Otto’s dispositional beliefs extend to the database, on the grounds that this scenario doesn’t satisfy Clark and Chalmers’ fourth condition: that information in the external device is present because of a prior endorsement by Otto. However, Clark and Chalmers doubt that that condition is necessary for dispositional beliefs, and for good reason. More to the point, the prior endorsement condition is a backward-looking condition, and is therefore irrelevant to the issue of privacy, which is exclusively forward-looking.
In this scenario, Otto still seems to have an exclusively first-personal method of knowing his dispositional beliefs: the beliefs partly constituted by the information stored in the database are private. But then privacy doesn’t underwrite a profound distinction between the mental and the non-mental. For those of his beliefs that extend to the database are not on the “mental” side of any such distinction. (I take it that this is fairly intuitive; I don’t have space to discuss it here.)
Of course, we are not in Otto’s situation. So perhaps privacy does underwrite a profound distinction between actual mental states and everything else. I think the Otto case provides reason to doubt that claim, but put that aside. The fact that Otto’s situation is a possible one means that privacy is not the basis for a mental/non-mental distinction that is both profound and principled. Even if privacy marks a profound difference between our mental states and non-mental states (which I doubt), this difference stems from contingent features of our current situation. So it is not a principled difference.