Privacy and dispositional beliefs

My current project aims to establish that if there is a profound, principled division between mind and world, then it is only occurrent states that are within the mind. But my discussion in these posts is confined to a hypothesis I plan to use in establishing that thesis, namely:

(H) If there is a profound, principled contrast between the mental and the non-mental, then this contrast is explained by (or consists in) the special features of occurrent states. (And if the mental/non-mental contrast is superficial or unprincipled, the impression that there is a profound, principled contrast between these is explained by the special features that occurrent states appear to have.)

As outlined in the previous posts (here & here), I intend to support this hypothesis by considering, in turn, the characteristics that are regarded, by a substantial segment of philosophers, as (i) unique to mentality, and (ii) significant enough to ground a non-trivial contrast between what is within the mind and what lies outside it. (See post 1 for the tentative list of characteristics.)

In this post I sketch, in very broad strokes, my argument for (H) as concerns the second characteristic on the list, Privacy. A state is private iff it can be known via a method uniquely available to the bearer(s) of the state. So the point I’m arguing for is this:

[Privacy] If mentality differs from the non-mental profoundly and in principle, in virtue of the fact that only mental states are private, then it is exclusively occurrent states that are private.

Or equivalently:

[Privacy] If dispositional states are private, then even if mental states are exclusively private this does not constitute a profound, principled distinction between the mental and the non-mental.

To keep things manageable, I focus on dispositional beliefs in particular. My argument relies on a variant of Clark and Chalmers’ “extended mind” scenario (Clark and Chalmers 1998). In their familiar example, Otto, an Alzheimer’s sufferer, stores information in a notebook. This information plays broadly the same role, in Otto’s cognitive economy, as information stored as “internal” beliefs plays in the cognitive economy of an ordinary person. Specifically, the information is consistently available to guide behavior; easily accessed; automatically endorsed (by Otto) when accessed; and present in the notebook because Otto previously endorsed it (ibid., 17). Clark and Chalmers conclude that the information in the notebook partly constitutes Otto’s dispositional beliefs.

My variant on this scenario is prompted by two concerns about the Otto example. The first is that the role of the notebook records, in Otto’s cognitive economy, appears to differ significantly from the role of ordinary memory stores in an ordinary person’s cognitive economy. In particular: the information in the notebook isn’t automatically revised or updated in reaction to Otto’s learning new information (Weiskopf 2008).

The second concern about the Otto example is my own. The information in the notebook is not available to guide behavior unless Otto retrieves it. And retrieving this information involves an occurrent belief. E.g., the information about the location of the MoMA will not affect Otto’s behavior unless he reads the notebook, and occurrently thinks “The MoMA is on 53rd St.”. So the occurrent endorsement of the stored content intervenes between the dispositional belief and Otto’s behavior. Moreover, the occurrent thought is arguably better suited to contribute to an intentional explanation of Otto’s behavior. For if Otto misread the “3” as an “8”, he would head towards 58th St, an action rationalized by his occurrent thought, but not by his dispositional belief. This raises doubts about whether the extended attitudes, partly constituted by the notebook records, are genuinely active, as the extended mind view (aka “active externalism”) claims.

Neither of these concerns threatens the spirit of the extended mind argument, however. For they both stem from facts about how notebooks—which are very low-tech devices—operate. A more sophisticated platform will be a more plausible candidate for extending the mind.

So I propose to engage in a bit of science fiction. The action takes place in the future, when we know much more about the brain—specifically, when we have learned to identify thoughts through brain imaging (or perhaps some newfangled type of scan yet to be devised).

Otto is an accomplished neuroscientist, who tinkers with computer programming in his spare time. In middle age, he learns that he will likely develop Alzheimer’s disease. (Remarkably, the disease has not yet been eradicated.) In preparation for living with the disease, he arranges a complex system of information storage and retrieval, involving a powerful computer kept in his house. The computer is linked with Otto’s organic brain through a stable satellite connection that enables a free flow of information between them. This arrangement avoids the two concerns about the notebook example. Otto programs the computer, and links it to his (organic) brain, in ways designed to ensure that his total information set is continuously updated and revised. And Otto needn’t deliberately consult the computer for new information. The computer continuously scans Otto’s brain (thus identifying his thoughts and internally-stored attitudes), and when it detects that Otto needs some information, such as the location of the museum, it makes that information available to him. This may involve causing an occurrent belief about the museum’s location; but it may consist simply in guiding him towards 53rd St. when it detects that he has set out for the museum.

Otto loads the computer with static information such as geographical facts and his birthdate, as well as dynamic information such as the current weather forecast and his plans for the day. But when setting up this system Otto realizes that, as his disease progresses, he may become a poor judge of which bits of information are worth remembering. So he programs the computer to detect new information that appears in his organic brain and to store this information on its own, without his deliberately entering it. This marks another improvement on the notebook example, since most ordinary beliefs and intentions are stored as standing attitudes without any deliberate action on our part.

Now suppose that “The MoMA is on 53rd St.” is stored in the computer, and that this partly constitutes Otto’s dispositional belief to that effect. Does Otto have a special method for knowing that he believes this?

Otto does seem to have a method of knowing this that only he can use. Namely, he can retrieve this information directly. For example, if Otto explicitly considers the question “Where is the MoMA?”, the computer will detect this process and make available the needed information, in the form of an occurrent thought. His ensuing thought, “that’s right, the MoMA is on 53rd St!” doesn’t constitute knowledge that he has a dispositional belief to that effect. But it may provide a basis for such knowledge. Otto could infer, from the fact that this occurred to him, that he had the dispositional belief; or perhaps the occurrence of that thought warrants the belief “I believe that the MoMA is on 53rd St.”, by reliably indicating the presence of the dispositional belief. (Different accounts of self-knowledge will construe these details differently.)

Other people can recognize that Otto has this belief only by using less direct means. They must ask Otto, or find this information in the computer, and note that the computer is appropriately linked to Otto’s organic brain.

In fact, it seems that the only exclusively first-personal method for knowing dispositional attitudes rests on their relation to occurrent attitudes. This is not to say that knowing a dispositional belief always involves entertaining the corresponding occurrent thought or attitude. For suppose that Otto finds himself heading to 53rd St. He might retrieve his stored plan for the day—viz., the standing intention to visit the MoMA that day—by thinking to himself “now where was I going?” The computer would then prompt the thought “that’s right, to the MoMA!” From this information, and the observation that he was headed towards 53rd St., he could infer that he believes that the MoMA is on 53rd St. And he could thereby achieve knowledge of his dispositional belief without relying on the corresponding occurrent thought. (Admittedly, this would be an odd route to self-knowledge.)

This suggests that, if privacy is a feature of dispositional attitudes, it is a feature they possess only derivatively, in virtue of their relation to occurrent thoughts or attitudes. Specifically, my possession of an exclusively first-personal method of grasping my dispositional attitudes consists in two facts. (i) Only I am in a position to directly “activate” my dispositional attitudes, as it were—that is, to bring it about that my dispositional attitudes are manifested in occurrent thoughts (or in behavior, etc. linked to occurrent thoughts, as in the odd case just mentioned). (ii) My occurrent thoughts are private: I have an exclusively first-personal method of recognizing them.

Note that Otto’s dispositional attitudes would be derivatively private, in this way, even if he expanded his external information stores indefinitely. Suppose that he links up his powerful computer to a large dynamic internet database, which is continually updated and revised. He then programs the computer to retrieve information from the database when needed. (And perhaps also to add to the database, when appropriate: e.g., when Otto makes a new discovery that might be useful to others who use the database.) Arguably, in this scenario Otto’s dispositional beliefs extend to the database. And they are derivatively private: if Otto explicitly considers the question “Where is the MoMA?”, the computer will detect this process, retrieve the information from the database, and make it available in the form of an occurrent thought.

Some might resist the claim that Otto’s dispositional beliefs extend to the database, on the grounds that this scenario doesn’t satisfy Clark and Chalmers’ fourth condition: that information in the external device is present because of a prior endorsement by Otto. However, Clark and Chalmers doubt that that condition is necessary for dispositional beliefs, and for good reason. More to the point, the prior endorsement condition is a backward-looking condition, and is therefore irrelevant to the issue of privacy, which is exclusively forward-looking.

In this scenario Otto does seem to have an exclusively first-personal method of knowing his dispositional beliefs: the information stored in the database partly constitutes Otto’s dispositional beliefs, which are private. But then privacy doesn’t underwrite a profound distinction between the mental and the non-mental. For those of his beliefs that extend to the database are not on the “mental” side of a profound distinction between the mental and the non-mental. (I take it that this is fairly intuitive; I don’t have space to discuss it here.)

Of course, we are not in Otto’s situation. So perhaps privacy does underwrite a profound distinction between actual mental states and everything else. I think the Otto case provides reason to doubt that claim, but put that aside. The fact that Otto’s situation is a possible one means that privacy is not the basis for a mental/non-mental distinction that is both profound and principled. Even if privacy marks a profound difference between our mental states and non-mental states (which I doubt), this difference stems from contingent features of our current situation. So it is not a principled difference.

7 Comments

  1. Steven

    Hi Brie,

    Thank you for your posts on this interesting topic. I wanted to ask if you could elaborate a little on your claim that Otto’s beliefs in the database don’t fall under the mental side of the distinction between the mind and the non-mental (or merely physical). Perhaps it is jarring for me to consider that Otto does have beliefs, even in this extended and dispositional sense, that are not mental. (Though, if I understand you correctly, the main aim of the project is to make the case against the view that non-occurrent states like these are parts of our minds.) Is the worry here that there will be altogether too much mentality in the world if we endorse some version of the extended mind hypothesis and accept that privacy is the mark of the mental?

    • Brie Gertler

      Hi Steven, thanks for your question. You say:

      I wanted to ask if you could elaborate a little on your claim that Otto’s beliefs in the database don’t fall under the mental side of the distinction between the mind and the non-mental (or merely physical).

      This isn’t quite right. What I said is only that, if his dispositional beliefs extend to the database, then they don’t fall on the “mental” side of a profound mental/non-mental divide. In other words: if they do fall on the “mental” side of a mental/non-mental divide, that divide is not profound.

      I hope that helps!

  2. Gary

    Brie,

    I agree with most of this. But I’d like to pursue the point I raised after your previous post. Specifically, I’d like to ask why you put things in terms of occurrent states rather than just conscious states. Because it seems to me as if conscious states are what you really have in mind (no pun intended!). You claim that “My occurrent thoughts are private: I have an exclusively first-personal method of recognizing them.” But isn’t this true of only conscious thoughts?

    I’m not sure whether you think there can be non-conscious occurrent states. Suppose you do. Then, it seems to me, your claim is false: for it’s only one’s conscious occurrent states to which one has exclusively first-personal knowledge. On the other hand, if you think that all occurrent states are conscious, then — since presumably occurrent states will then just be the conscious states (I assume we agree that all conscious states are occurrent) — why not just put your arguments in terms of conscious states?

    • Brie Gertler

      Hi Gary. You say:

      You claim that “My occurrent thoughts are private: I have an exclusively first-personal method of recognizing them.” But isn’t this true of only conscious thoughts?

      In the post, I sketched an exclusively first-personal method of recognizing dispositional states. This essentially involves “activating” the disposition to occurrently believe (or whatever). (In some cases it may involve activating related dispositions; the crucial point is that the activation always takes the form of an occurrent state.) So this exclusively first-personal method may be derivative from the method of knowing one’s own occurrent states. Still, it does seem exclusively first-personal.

      On the question why I don’t just use “conscious”: the main reason is dialectical. The term “conscious” is construed in so many different ways that I’d prefer not to rely on it for defining the distinction I have in mind. But as I admitted in the previous post, I haven’t yet settled on a definition of the distinction.

      You say:

      I’m not sure whether you think there can be non-conscious occurrent states. Suppose you do. Then, it seems to me, your claim is false: for it’s only one’s conscious occurrent states to which one has exclusively first-personal knowledge.

      The existence of non-conscious occurrent states wouldn’t falsify my claim. Note that the claim associated with privacy is this: Only occurrent states are private. More generally, the five characteristics listed in post 1 are allegedly exclusive to mentality, but the claim is not that all mental states have them. (Of course, to delineate the scope of mentality we may require that every mental state has at least one of these exclusively mental features. So if there are mental states that are not private, perhaps they fall on the “mental” side of a profound divide by virtue of possessing one of the other characteristics.)

  3. Aha. “Note that the claim associated with privacy is this: Only occurrent states are private.” I hadn’t understood that (though on reflection I probably should have). Thanks.

    Still, I wonder whether the greater clarity (or lesser ambiguity) that you hope to purchase by referring to occurrent states rather than conscious states is unstable. I think the agreement about which mental states are the ‘occurrent’ ones is merely apparent. (Confession: I’ve got a paper about this, which I can send you if you’re interested.) I suspect that most philosophers, when they speak of occurrent states, are just thinking of conscious states. In which case, ‘occurrent’ will inherit all the ambiguities of ‘conscious’ — it’s just that the ambiguities will be hidden by a patina of agreement.

    Maybe it will help to put my worry this way. Your thesis, (H), says that “If there is a profound, principled contrast between the mental and the non-mental, then this contrast is explained by (or consists in) the special features of occurrent states.” Well, if the special features turn out to be those possessed only by conscious states, then I worry about how deep or satisfying the explanation will be if it features no essential reference to conscious states.

    On the other hand, maybe I’m reading the word ‘explained’ in (H) more strongly than you intend.

    • Brie Gertler

      Yes, I’d very much like to see the paper you mention — please send it.

      Just to be clear: I’m not opposed to construing “occurrent” as conscious. It’s just that I’m trying to avoid defining it this way — though given the worries I expressed earlier, about alternative definitions, I may ultimately tie it to consciousness.

