Mindcraft is a series of opinion posts on current issues in cognitive science by Brains Blog founder Gualtiero Piccinini. Do you agree? Disagree? Please contribute on the discussion board below! If you’d like to write a full-length response, please contact editors Dan Burnston and Nick Byrd.
Recent Progress on Explaining Intentionality
Gualtiero Piccinini
Cross-posted at thecognizer.com
Mendelovici and Bourget (“Facing up to the hard problem of intentionality”, Philosophical Perspectives, 2023) argue that “not much progress has been made” towards explaining intentionality (naturalistically). I beg to differ.
They argue that an account of intentionality should (1) show that whatever is proposed as the basis for intentionality is sufficient for intentionality and (2) make correct predictions about the cases it aims to cover. They also mention the following explananda: (3) how we manage to introspect “contents”, (4) how “contents” can play a causal role, and (5) how mental states can be about things that don’t exist. They mention referential opacity, though in the context of intensionality rather than intentionality. I would add that a good account of intentionality should explain, at the very least, (6) how we acquire original (as opposed to derivative) semantic content as well as (7) referential opacity and (8) how mental states can be directed at something false (which is not the same as being about something that doesn’t exist).
Mendelovici and Bourget’s preferred approach seems to be phenomenal intentionalism, according to which “intentionality is grounded in phenomenal consciousness”. As a general account of intentionality, phenomenal intentionalism seems to me a nonstarter for the obvious reason that much intentionality is unconscious. Sure, there is also conscious intentionality, and fully explaining conscious intentionality will involve consciousness. But unconscious intentionality needs to be explained too, and consciousness won’t help us there. So, let’s start with unconscious intentionality and add consciousness when we need it.
The most promising account is the representational theory of intentionality (RTI), according to which intentional mental states are internal representations with appropriate functional roles and semantic contents. Versions of this theory have been developed by Dretske, Fodor, Millikan, Neander, Papineau, Shea, and many others.
I don’t see (1) as a real problem. Whatever explains intentionality is ipso facto sufficient for it. So, let’s see how much RTI can explain.
Traditional versions of RTI go at least some distance in making correct predictions about relevant cases and explaining (3) in terms of introspecting representations, (4) in terms of representations playing a causal role, (6) at least to some extent in terms of tracking the environment, (7) in terms of different representations having the same referents, and (8) in terms of misrepresentation, which only works when mistakes are made (we can also represent things that don’t exist or are false on purpose; more on this below).
Several recent developments have pushed us closer to explaining intentionality:
Several authors have argued that mental representations are kinds of neural representations, which are structural representations that simulate their targets and whose content should be understood within a mechanistic framework (Lee 2021), and neural representations are not mere theoretical posits but observable entities whose reality should not be in doubt (e.g., Thomson and Piccinini 2018; cf. Piccinini 2020a).
I have proposed an account of (5) and (8), when we represent something false or that doesn’t exist on purpose (and hence there is no misrepresentation), in terms of the ability of our neurocognitive systems to recombine neural representations and track the extent to which the results of such recombination depart from the system’s background simulation of the world (Piccinini 2020b).
I have proposed an account of (4) and (6) in terms of the ability of neurocognitive systems to acquire representations via embodied, embedded, and enactive learning (Piccinini 2022).
Marek Pokropski (2023) has argued that we can integrate phenomenology and hence consciousness with the sort of mechanistic approach I just mentioned above.
I know of other exciting work, which addresses other aspects of intentionality. It is currently under review so I can’t go into details here.
Even though I think we can see how a full account of intentionality is supposed to go, much more remains to be done before we have a complete, detailed, and adequate account. As I said, Mendelovici and Bourget are right insofar as they imply that an account of conscious intentionality will have to involve consciousness. Further progress will come by further developing the neurocognitive, mechanistic version of RTI that has begun to take shape in recent years. There is a lively community already engaged in this (or a closely related) enterprise; they include Mark Bickhard, Krystyna Bielecka, Pawel Gładziejewski, Jonny Lee, Corey Maley, Manolo Martinez, Marcin Miłkowski, Marek Pokropski, and Nick Shea, among others.
Lee, J. (2021). “Rise of the swamp creatures: reflections on a mechanistic approach to content.” Philosophical Psychology 34, 805–828. doi: 10.1080/09515089.2021.1918658
Piccinini, G. (2020a). Neurocognitive Mechanisms: Explaining Biological Cognition. Oxford: Oxford University Press.
Piccinini, G. (2020b). “Nonnatural Mental Representation,” in What Are Mental Representations?, eds. K. Dolega, T. Schlicht, J. Smortchkova (Oxford: Oxford University Press), 254–286. doi: 10.1093/oso/9780190686673.003.0010
Piccinini, G. (2022). “Situated neural representations: Solving the problems of content.” Frontiers in Neurorobotics, 16: 1–13.
Pokropski, M. (2023). Mechanisms and Consciousness: Integrating Phenomenology with Cognitive Science. London: Routledge.
Phenomenal Intentionality (PI) and Phenomenal Consciousness (PC) certainly have limitations. One can’t see the inner cognitive structures, mechanisms, and sometimes even content (e.g., Freudian unconscious sexual content), and that’s why we develop cognitive science. In modern times, cognitive neuroscience technologies like fMRI, PET scans, and EEG help us see what is happening inside the brain when we think, e.g., when we do lexical decision tasks.
These technologically assisted interventions can have implications for deciding how concepts (and rational thoughts) are represented (and constituted) inside the brain. In what format? As a Fodorian language-like, abstract propositional format (proposed by Fodor and Pylyshyn, endorsed by functionalists like the early Putnam, Dennett, etc.), or as concrete, experiential, modality-specific formats (as proposed by Kosslyn and developed further by Barsalou)?
A widespread ERRONEOUS belief (propagated hugely by the school of Rationalism) in (largely innate, abstract) reason, with a presumed access to the structures and dynamics of mind, and in the representation of mental content as amodal abstract representations, has misdirected the philosophy of mind for ages, since Plato. And all this seems more akin to theology than to (genuinely critical, empiricist) philosophy.
These intuitions seem to be based on commonsensical observations, like doing self-talk in language, and on assuming that personal-level mental acts also operate at the subpersonal level of cognitive description. E.g., self-talk in language seems to be the basis for assuming a Fodorian LOT. Similarly, NOT being able to phenomenally experience (for most people, most of the time) sentences like ‘lemon is sour’ or ‘apple is sweet’ seems to be operative in the Fregean postulation of a mysterious third realm (neither the mental nor the physical realm) where “abstract”, “non-experiential” thoughts reside.
Frege was simply wrong. Embodiment research informs us today that they ARE EXPERIENTIAL! Their experiential simulations are simply not strong enough to reach personal-level conscious experience!
Hence, doing philosophy simply in terms of Phenomenal Consciousness or Intentionality is not merely limited but often (though not always) misdirected. Philosophy needs to be done in consultation with empirical sciences like Cognitive Neuroscience, Psychology, Anthropology, etc., and certainly not in an armchair, A-PRIORI, speculative manner.
This is an interesting proposal, to which I hope to respond. Can you give the citation for Thomson and Piccinini 2018 please?
Thanks for your interest. A revised version appears in Piccinini 2020a. Here is the original:
Thomson, E. E. and G. Piccinini (2018). “Neural Representation Observed.” Minds and Machines 28(6): 1–45.
As someone wearing both biomedical science and philosophy of mind hats I agree with the gist that ‘operational’, or non-phenomenal, intentionality is best seen in RTI terms following Dretske and Fodor. I am not convinced, however, that introducing ‘functional roles’ in versions of Teleosemantics helps. I am also doubtful that terms like embedded and embodied enlighten rather than confuse. Operational meanings of brain signals will depend on vastly complex interrelated dynamic covariation patterns both within brain and outside and I see no chance of a snappy theory condensing the real situation any more than an introductory chapter on computers will explain the architecture of a current Apple chip. In practice the meanings of brain signals will evolve during life in relation to an environment, for sure, but a neuron in a vat could evolve the same way and enjoy the same patterns of meaning if hitched up to electrodes mimicking all the complexities of the same dynamic covariations. We may be fast approaching that situation with the use of mobile phones.
You talk of structural representations being observable entities but I think that may be wishful thinking, at least in terms of having any idea of how the structure works. As Gyorgy Buzsaki has pointed out, these structural representations must consist of maybe 1000 signals all operating together in a combinatorial way in some computational event. Until the signals converge on such an event there is no possibility of combined meaning or representation (at least as a causal event). Brain signals only come together in individual neuronal dendritic trees and there may be several billion of these. There is no one place where signals are integrated, there are billions and they are likely to be at all sorts of levels at once, which again makes any simple account unrealistic.
To understand the basics of how structural representations work we need to know how signals co-arriving at a nerve cell can generate an overall meaning for that cell through some form of local physical inter-relation within the dendritic tree. Nobody so far has any real idea where to start with this. Individual nerve cells must in fact be able to perform very non-trivial computational functions – they would not have tens of thousands of input channels if they couldn’t.
I agree that phenomenality is a different issue, but it may be the only practical way in to making a start on understanding structural representation. So I can see both sides of the argument.
Thanks for sharing your thoughts. FWIW, it’s uncontroversial in the philosophical literature that for a system of internal states to be representations they need to play a certain functional role (cf. W. Ramsey, Representation Reconsidered), and mainstream neuroscientists have pretty decent methods for assigning representational content to neuronal activity.
Gualtiero, you know as well as I that what is ‘uncontroversial’ in one literature may be considered garbage in another. Most biomedical scientists would consider the claims about function either empty or misleading. ‘Function’ has one of the worst reputations as a word in science. Basically, the claim is at best an unhelpful tautology but one that, if you were a physician, you would realise fails on a regular basis. Science cannot make use of such concepts.
And as for decent methods for assigning representational contents to neuronal activity, my point is that they have absolutely no idea how that content is combined and computed over. Which means that we cannot build any theory that could be tested about how intentionality works. So our theories about it depending on dynamic co-variations, whether or not you call it situatedness or embeddedness or whatever, are no more than a priori assumptions that we could have made from an armchair.
“As a general account of intentionality, phenomenal intentionalism seems to me a nonstarter for the obvious reason that much intentionality is unconscious.”
This is a meaningless claim unless “intentionality” is defined in such a way as to allow unconscious intentionality. Piccinini does so by appealing to “the representational theory of intentionality (RTI), according to which intentional mental states are internal representations with appropriate functional roles and semantic contents.” This is not intentionality in Brentano’s sense. So, his claim of progress equivocates on the usual sense of intentionality. That sense, involving “aboutness” or intrinsic reference, requires the operation of something capable of knowing what it is about, viz. consciousness.
Since the RTI has not explained how we can be aware of representations, it has not explained how we can introspect them. Having and processing contents is not knowing contents.
Progress has been made in understanding how data is neurally represented and processed, and in determining its functional and semantic role. The problem is that that role is not intentional in Brentano’s sense. Instead, Piccinini uses the bait and switch tactic so often employed by “neurophilosophers.” It takes a term defining an aspect of first person experience and redefines it in the conceptual space of third person experience. The result is at best a correlate of the first person concept.
Neural representations certainly play a causal role in mental function and semantics, but unless we stop denigrating concepts reflecting first person experience as “spooky” or “supernatural,” no progress will be made toward understanding Brentano’s intentionality. As I made clear (Polis, 2023), intentional operations cannot be reduced to physics, because physics lacks intentional effects. Neither can they be reduced to an algorithm, because calculations produce quantities, not intentions. Still, intentionality is as much an experiential aspect of nature as physicality.
Polis, D. (2023). “The Hard Problem of Consciousness & the Fundamental Abstraction.” Journal of Consciousness Exploration & Research 14(2): 96–114.
Hi Dennis (if I may), thanks for sharing.
There seems to be a miscommunication. IDK where you see me denigrating concepts reflecting first-person experience as “spooky” or “supernatural”. FWIW, I’ve published at least 5 or 6 papers on first-person data and the like, I’ve published defending a nonreductive, noncomputational account of phenomenal consciousness, and I’m pursuing a book project on introspection and phenomenology. Even in my post, I conceded that understanding conscious intentional states will require involving consciousness.
As to what Brentano meant by “intentionality”, I don’t know why it matters. This is not a debate about Brentano. If Brentano thought that there are no unconscious intentional states, he was just wrong about that. Don’t you think we have unconscious beliefs, desires, fears, and so forth?
Hi Gualtiero,
You asked “Don’t you think we have unconscious beliefs, desires, fears, and so forth?” This is a mixed bag. I see beliefs as propositional commitments, and such commitments as requiring consciousness of their contents. I can see a motivation for beliefs in unconscious associations encoded in the neural net, but associations are not propositions.
We can have desires in the sense of biological needs/drives without being aware of them. I can be thirsty or hungry before realizing that I am. I see these as physiological, not intentional states. I can imagine an amoeba varying its food-seeking behavior based on its physiological state, and so attribute “desire” to it despite its lack of a nervous system. Again, such states can motivate intentional states or we can simply become aware of them, adding an intentional aspect to the state. The same can be said of fears. Our biochemical state can change in response to certain stimuli, but it would be unparsimonious to bring intentionality into the explanation before consciousness was involved.
So, I would say that “beliefs, desires, fears, and so forth” involve physical precursors, but are not intentional until we are conscious of them. The precursors are important, but understanding them is not progress toward understanding the transition to intentionality.
My primary concern is that broadening the use of a technical term makes precise discourse harder. There is a precise difference between physical states, which have no intrinsic reference, and Brentano’s notion of intentional states, which do.
Because physical states can signify, there is a tendency to confuse them with intentional states. Yet, in order for them to signify, we must first grasp something of their physicality — say, the shape of a stop sign or that these marks form the word “apple.” In the language of Scholastic material logic, such are “instrumental signs.” On the other hand, intentional realities, as instruments of thought, signify transparently. We do not first have to recognize that the concept is an idea, or that it is represented by a certain neural state, before it signifies apples. In the same terminology, these are “formal signs.”
Returning to intentionality, the “aboutness” of Brentano’s concept is that of formal signification, while that of neural states is, at best, instrumental signification — by studying pulse rates, connectivity and neurotransmitter concentrations we might eventually discover what a brain state is “about,” but that would make it an instrumental, not a formal, sign and so would not be progress toward understanding intentionality.
As for denigrating concepts reflecting first-person experience as “spooky” or “supernatural,” the criticism was general, not personal. I encountered the terms in my research. I apologize for any misunderstanding.
Thanks for engaging with our paper, Gualtiero!
David and I certainly agree that much work has been done on explaining representation of the sort that is invoked in the mind-brain sciences. In our paper, we are neutral on what exactly are the appropriate desiderata for an account of this sort of representation and on how much of a realist we ought to be about it, and we recognize that the desiderata and approach that you suggest are not unreasonable and that thinking about them can enable us to continue to make progress in this area.
But we are not quite sure that what you’ve described is progress in explaining intentionality. In our paper, we distinguish between the “hard” and “easy” problems of aboutness. The problem of offering an account of the kind(s) of representation invoked in the mind-brain sciences is one of the “easy” problems (other “easy” problems include problems surrounding the notion of aboutness involved in folk psychological explanation and the intensionality (with an “s”) of mental state ascriptions). The “hard” problem, in contrast, is the problem of explaining the phenomenon of intentionality, a phenomenon that we notice in mundane, everyday cases like that of thinking that 2+2=4 or perceptually experiencing a blue cup before us. This is the notion of intentionality that Brentano’s oft-cited passage points towards and that many (but not all) naturalists working on intentionality explicitly claim to target, at least in their initial characterizations of their targets. One of the claims of our paper is that this “hard” problem of aboutness—the problem of explaining intentionality—is conceptually distinct from the “easy” problems, including the problem of explaining representation in the sense operative in the mind-brain sciences. (Note that it could still turn out that intentionality is the same thing as the mind-brain scientific kind of representation, but this is something that would have to be argued for, not something we should assume at the get-go.)
In our paper, we argue that in order for a naturalistic account of intentionality to be properly motivated, we would need to show that (1) the proposed explanatory base for intentionality is indeed metaphysically sufficient for intentionality and (2) the view is empirically adequate in that its predictions are correct about the types of cases it aims to cover and does not make any other incorrect predictions. Our worry is that, while much discussion focuses on alternative desiderata, more work needs to be done to motivate the claims that (1) and (2) are satisfied. This is particularly problematic because the claims face prima facie challenges. (1) is challenged by explanatory-gap style arguments, while (2) is challenged by many everyday examples of intentionality such as the perceptual representation of colors.
One might worry about your reply’s claim that (1) and (2) are satisfied by these new approaches. Regarding (1), you write, “I don’t see (1) as a real problem. Whatever explains intentionality is ipso facto sufficient for it.” The second sentence is true, but in order to show that we’ve explained intentionality, it is not enough to offer an account that satisfies other criteria. We need to offer an account that explains the everyday phenomenon. In order to do this, we would have to show that the proposed explanatory base of intentionality can account for this phenomenon. So, what you’ve said is not enough to motivate the claim that (1) is satisfied.
Your reply makes a common move, which is to set aside cases of intentionality involving consciousness and to focus on unconscious and subpersonal cases. We discuss this move in the paper. We are happy to grant that it could turn out that consciousness is unrelated to intentionality, and we are also happy to grant that this methodology might turn out to be fruitful. But in employing this methodology, we need to be mindful of the theoretical possibility that the features we identify in unconscious, subpersonal cases are distinct from the feature of interest in thoughts and experiences. Insofar as our target is the Brentanian, everyday phenomenon, the one we notice in having thoughts, perceptual experiences, and so on, an account of intentionality should explain that phenomenon (perhaps, initially, only in unconscious, subpersonal cases and in personal-level, conscious cases only with the help of a theory of consciousness). In other words, in order to clearly be offering an account of intentionality, we need to be assured that in the unconscious, subpersonal case we are still talking about the phenomenon that we initially targeted and not something else. That we are talking about the targeted phenomenon is often motivated by pointing to certain abstract features of intentionality (in the Brentanian sense)—for example, the possibility of misrepresentation—and showing that this is also present in unconscious, subpersonal cases. But in our paper we argue that the presence of such abstract features can be found in the absence of intentionality, so more work needs to be done to show that we are not changing the subject. (If, of course, the target is not the Brentanian sort of intentionality, then there is no disagreement with us—you are presumably addressing one of the “easy” problems, which is distinct from the problem of intentionality.)
The upshot of our paper is that naturalists should either face the problem of intentionality head-on by explicitly motivating and blocking or addressing challenges to (1) and (2) or clearly focus on the “easy” problems. We, personally, think the more promising strategy is the latter.
A tangential point: You dismiss phenomenal views of intentionality because you take them to be unable to explain unconscious intentionality. How to account for unconscious intentionality is a point of disagreement amongst phenomenal intentionalists, with interesting work being done by David Pitt, Uriah Kriegel, Terry Horgan and John Tienson, and many others. On some views, (at least some) unconscious states have phenomenal intentionality. On others, (at least some) unconscious states have intentionality that is derived from that of actual or merely possible conscious states. On others, (at least some) unconscious states don’t have intentionality at all—instead, they might have “representation” of the sort that is the target of some of the “easy” problems of aboutness. Our own favoured view is that personal-level unconscious states have something akin to derived intentionality (though it is importantly different from the original intentionality of conscious states) and that subpersonal states have mere “representation”. (For an overview of these positions, see our “Consciousness and Intentionality” in The Oxford Handbook of the Philosophy of Consciousness (2020) and our SEP article “Phenomenal Intentionality.” For a detailed defence of my favored position, see my The Phenomenal Basis of Intentionality (OUP, 2018).)
Angela, thanks for your kind response. There is a lot to say. I’m going to try to add some clarifications about where I’m coming from.
I’m interested in intentionality. I may not think of intentionality exactly as you do, but I am certainly not especially interested in what you call the easy problems. If you insist on using your terminology, I’m interested in the hard problem of intentionality.
I didn’t mean to dismiss phenomenal intentionalism. I meant to point out that phenomenal intentionalism is not going to help much with unconscious intentionality and, in fact, the views about unconscious intentionality that you list don’t sound very helpful. I’ll say more: it seems to me that even conscious phenomenal states play a functional role in our mental life, and understanding that functional role is going to require the kind of computational/functional/mechanistic theory that I’m trying to contribute to. So, we need the kind of theory that I’m working on for both unconscious and conscious intentionality.
I also conceded that conscious intentionality is important and will not be fully explained or accounted for without getting consciousness involved. I think this is important and phenomenal intentionalists deserve credit for reminding us of it.
I don’t think the personal-subpersonal distinction is gonna help understand intentionality or anything else that is substantive. It’s a distinction between ways of describing the mind (“stances”, if you will), not a distinction between types of mental states or structures.
I think conceivability arguments are useless for technical reasons that I have given elsewhere.
I think most informational teleosemanticists and similar theorists are under no illusion that they have a complete explanation of intentionality, including the intentional aspect of phenomenal experiences. That would require a complete explanation of consciousness, and who has that?
On the flip side, it would be unfair to restrict the problem of intentionality to the problem of explaining the intentional aspect of phenomenal experiences, ignore all the other aspects of intentionality, and then use this restriction to dismiss all the progress that’s been made so far. I think it’s important to recognize the progress made in explaining aspects of intentionality to date, including the recent work that I mentioned in my post. So my main goal was to call attention to (what I consider) exciting recent progress on (what I consider aspects of) intentionality.
That said, I look forward to reading more of your work and that of other phenomenal intentionalists when time permits.
Thanks for your thoughtful reply, Gualtiero! I will take a look at your paper on zombies and also look forward to digging deeper in your other work.
I’m not sure we really disagree on unconscious intentionality. And I’m fine with ditching the personal/subpersonal talk for these purposes. I do think we need to treat unconscious propositional attitudes like beliefs and desires differently than unconscious representations of the sort posited by cogsci, neuroscience, etc. In the case of the latter, I agree with you that the best way to make sense of the representational features of these unconscious states is in terms of some naturalistic view.
Where we disagree, I think, is on what conscious “representation” amounts to. Here, I think, we have something entirely different in kind than what we find in the unconscious case—and not just because it’s conscious. This is not to say that conscious states might not also have the features present in unconscious states (they might track things, play various functional roles, etc.). Rather, the claim is that they have an entirely different feature. This is what we point to when we wonder how we can think about something, etc.
Our aim is not to dismiss the work that has been done on unconscious states—which we don’t disagree with. But insofar as we anchor our discussion of intentionality in these everyday cases (e.g., by citing Brentano or everyday examples), some further work would need to be done to show that unconscious states have that very same feature. Otherwise, we risk changing the subject.
Maybe we can all agree that there is an unconscious representation-like thing and a conscious representation-like thing and that it is a substantive claim that those two things are the same thing.
Dear Angela,
You make a convincing case in reply. Nevertheless, as indicated in comments above, I worry how easy it is for us to say ‘Maybe we can all agree there is an unconscious representation-like thing and a conscious representation-like thing and that it is a substantive claim that those two things are the same thing.’
I absolutely agree with the spirit of that but in reality my neuroscience colleagues at UCL do not have the faintest idea what these representation-like things might be. They are very happy to admit it. People like David Marr and Horace Barlow produced some nice speculations about 50 years ago but nothing tangible has transpired. That might seem surprising but consider this question.
Would one of these representation-like things involve a million neurons, or a thousand, or five, or just one? How would the meanings combine to form scenarios from propositions (which they would need to)? Mainstream neuroscience is completely at a loss about both questions.
So, even if your phenomenal questions are harder, there are some pretty hard questions about propositional meaning attached to representations even without that.
I just wanted to add that how to use the term “intentionality” is, of course, a terminological question. The substantive claim we are making is that conscious cases have a feature that is difficult to explain and that is what we are interested in when we use everyday examples and the famous Brentano passage to fix on our target. If this is what we aim to explain, more work needs to be done to motivate the claim that an explanation of the representation-like feature of unconscious states makes any contact at all with the problem of explaining the representation-like feature of conscious states.
Thanks. It looks like we agree quite a bit. I also agree that conscious intentional states have many features that unconscious ones lack; at the very least, they are conscious. I need to learn more about what you think conscious states have beyond their being conscious.
It occurs to me that there is also an important difference between perceptual states, which have a phenomenal character, and discursive thoughts and motor intentions, which often lack phenomenal character. IDK whether you see it that way.
I would like to stress again that conscious intentional states share some features with unconscious intentional states, such as being able to be directed at false or nonexistent things. This is certainly one of the features that Brentano noticed, and I do think we’ve made progress in explaining them.
I am unable to understand how unconscious states, which I see as purely physical, are “directed to” anything. They are certainly part of a causal chain, but it seems anthropomorphic to class physical causality as intentional — unless one sees the laws of nature as divine intentions. They are not human intentions.
You say “I don’t think the personal-subpersonal distinction is gonna help understand intentionality or anything else that is substantive. It’s a distinction between ways of describing the mind (“stances”, if you will), not a distinction between types of mental states or structures.”
The problem is that from different stances, we see different aspects of the same reality — just as viewing the front of a house gives us a different projection (the front elevation) than that obtained from viewing the back of a house (the rear elevation). So, different stances, while viewing the same whole, develop different information about that whole.
Here we have the third person projection contributing to information encoding and processing, and the first person projection giving us data on how that same information becomes actually known. So, the two stances give us data on different kinds of operations.
Dennis, I agree that different perspectives provide complementary information in useful ways. I would just add that the personal-subpersonal distinction is usually understood as the distinction between what we, as external observers, attribute to a person as a whole versus what we, as external observers, attribute to the parts of the person. This is not the same as the first-person/third-person distinction.
Thank you for the clarification on the personal-subpersonal distinction. I knew it as the mereological distinction, not under that name.
As for Freud, of course I agree that we have unconscious inclinations and drives. I just think it unparsimonious to invoke intentional concepts when unconscious tendencies can be explained physically until we are conscious of them.
Thank you for your patient engagement.
Hi Dennis,
Thanks for clarifying. I think it’s time to agree to disagree. I think it’s pretty uncontroversial at least since Freud that there are unconscious intentional states. But it’s good to know that there are people, like you, who don’t see it that way.