Wu, Movements of the Mind. Post 1: The Structure of Agency.

(See all posts in this series here.)

Movements of the Mind (MoM) is about the structure of agency. It also gives a theory of attention and a theory of psychological bias. For good measure, it argues that intention is a type of memory, linking it to working memory. These issues are illuminated through mental action, things we do in our heads. The book draws on a wealth of empirical work but begins with philosophical foundations. Chapter one is available for free here. You can read the introduction on Google Books.

Chapters 1–4 set the theoretical foundations: (a) a theory of the psychological structure of action; (b) a theory of attention; and (c) a theory of intention as a dynamic, practical form of memory. The book then applies the theory to three types of mental agency: (1) biased attention, (2) deductive reasoning and (3) introspection of perceptual awareness. I view part of philosophy of mind as part of cognitive science, and this is true of understanding agency, attention and intention. Throughout, I draw on well-established results from many levels of philosophical and empirical analysis.

The first part of the book concerns the structure of action and attention. In the first two posts, I bring out the main theses of Chapters 1 and 2 in their weakest, least contentious formulations. These should be amenable to many viewpoints and sufficient to do interesting work. The four themes I bring out are (1) the Selection Problem as the structure of agency, (2) the role of bias in solving the Problem, (3) the crucial distinction between automaticity and control, and (4) the centrality of attention in guiding action.

Chapter 1 provides a deductive argument that every action is a solution to a Selection Problem (Chp. 1.2). The conclusion is entailed by the contrast between reflexes and actions. Indeed, I argue that it is a metaphysically necessary feature of agency. Here, I offer a weaker reading that is sufficient to do philosophical work and so should appeal to most readers: most actions of philosophical concern, say in ethics, epistemology, aesthetics, and so on, amount to solutions to Selection Problems.

What is a Selection Problem? The many-many version is the most familiar (Chp. 1.2). Here’s a two-by-two case:

Figure: 2×2 Many-Many Problem mapping two input psychological states to two behavioral outputs. © Wayne Wu.

We often face (at least) two possible targets of action, whether in perceiving, thinking, or remembering. For each target we (perceptually, cognitively…) take in, there are (at least) two things we can do in response. Typically, at a time, we cannot do all the available actions. One action must win out if we are to do anything at all. This description fits many actions of interest.
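To make the many-many structure explicit, here is a rough regimentation of the two-by-two case. This is just one way of putting the figure into symbols, not the book's official formalism:

  \[
  I = \{ i_1, i_2 \}, \qquad O = \{ o_1, o_2 \}, \qquad L = I \times O .
  \]

A Selection Problem obtains when \( |L| > 1 \) yet at most one link can be executed at a time; to act at time \( t \) is to execute exactly one link \( (i, o) \in L \).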

How is the Problem solved? Here, I introduce biases, internal factors that explain specific solutions to the Problem, that is, why specific actions are executed. Biases are necessary if there is to be action at all (Chp. 1.5). I argue that we should not focus on the causes of action (Chp. 1.9) but must instead understand action’s internal structure. Intentions are not causes of action but constituents of action that bias the solving of Selection Problems (I discuss the relevant biology in Chp. 1.6–1.7). Further, while intention has priority in the order of explanation in my discussion, there are many other biases, and in many ways those biases, automatic biases, are of vital interest.
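Continuing the rough regimentation above, a bias can be pictured as a tie-breaker over the space of candidate links. Schematically:

  \[
  \beta : \mathcal{P}(L) \setminus \{ \varnothing \} \to L, \qquad \beta(L') \in L' ,
  \]

so that the action executed at time \( t \) is \( \beta(L_t) \), where \( L_t \) is the set of live links at \( t \). On this sketch, an intention is one such bias among many; other biases operate automatically.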

“Automatic” is a technical term in the theory, one necessary for completely characterizing action. The term is often invoked in a non-technical way, but there’s an unnoted paradox of automaticity that suggests an incoherence in our conceptual scheme for action (Chp. 1.4):

  1. Intentional actions exemplify control.
  2. Intentional actions exemplify automaticity.
  3. Control and automaticity are incompatible.

On (1) and (2), consider intentionally reaching for a glass. That you reach for that glass is under your control. As Anscombe put it, it is intentional under that description. Yet that your reaching for the glass lasts a specific time or involves a specific wrist rotation is an automatic feature in that you don’t intend to generate that specific rotation or duration. Fortunately, your motor system “takes control”. Yet given (3), (1) and (2) cannot both be true.

Why (3)? It is a central assumption in psychology. The well-known type 1/type 2 (system 1/system 2) theory of cognitive processes expresses that assumption: there are controlled processes and there are automatic ones. The paradox driven by intentional actions shows that the control-automaticity distinction cannot be a general way to divide kinds of processing.

My solution renders the claims compatible, yielding technical notions of automaticity and control. Automaticity and control are relative to a feature F of an action at a time or temporal range t. So (3) is true in that, at time t, an action feature F cannot be both controlled and automatic. Of course, an action at a time can exemplify a variety of features, some controlled, others automatic. What determines whether a feature F is controlled is whether F is in the content of the intention (more on the dynamics of intention in a few posts); otherwise, F is automatic. So we have actions that are intentional under some feature-descriptions and automatic under other feature-descriptions: automaticity under a description. At the subject level, in agency, there is no sharp division between automatic and controlled processes.
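Schematically, and again as a rough regimentation rather than the book's official notation, the relativized notions can be put as:

  \[
  \mathrm{Controlled}(a, F, t) \iff F \in \mathrm{content}(\mathrm{intention}(a, t)),
  \]
  \[
  \mathrm{Automatic}(a, F, t) \iff \neg\, \mathrm{Controlled}(a, F, t).
  \]

So (3) claims only that no single feature F is both controlled and automatic at the same t, which is compatible with one action exemplifying controlled features and automatic features at once, just as (1) and (2) require.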

I emphasize automaticity because it is a pervasive part of action and skill, it is philosophically significant, and philosophical theories of automaticity lag well behind accounts of control. And of course, there’s the paradox of automaticity. In the next post, I apply automaticity to attention, but I invite philosophers to put the technical notion of automaticity to broader use. To ignore it, invoking automaticity in a theoretically casual way, is to fall back into the paradox.

8 Comments

  1. Dennis Polis

    “Why (3)? It is a central assumption in psychology. The well-known type 1/type 2 (system 1/system 2) theory of cognitive processes expresses that assumption: there are controlled processes and there are automatic ones. The paradox driven by intentional actions shows that the control-automaticity distinction cannot be a general way to divide kinds of processing.”

    There is a confusion here between decision and execution. In the example, what is intentional is the decision to begin a process which is largely automatic. When different feature descriptions are incompatible, they cannot be describing the same target. So, there is no conflict between (1) and (2), because intentionality and automaticity cannot be attributes of the same process, but must describe different sub-processes within a holistic act.

    Even at the same time, an act can be both intended and automatic. Say I am running and tired. The mechanics of running are automatic, but the concurrent commitment to continue is intentional. Part of the phenomenology is being missed, because there is often an ongoing intentionality that is concurrently and automatically executed.

    Instead of being temporally distinguished, the processes are distinguished by their theater of operation. Decisions and ongoing commitments are intentional (in Brentano’s sense) operations, while executions are physical operations. As physics lacks intentional effects, intentional operations cannot be reduced to physical operations. (See my “The Hard Problem of Consciousness & the Fundamental Abstraction,” JCER 14(2), pp. 96–114.)

    • Wayne Wu

      Thanks for your thoughts. I’m not seeing a conflict: much of what you say in paragraphs two and three is uncontroversial and consistent with the post. For example, when you say, “an act can be both intended and automatic”, you aren’t disagreeing with me. That’s the point of affirming (1) and (2). The key is to focus on features of processes and not kinds of processing.

      • Dennis Polis

        Thank you for your prompt response. Sadly, we agree less than you suppose.

        Perhaps I misunderstood, but it seemed to me that your position is:
        (a) Since (3) is true, an action cannot be simultaneously automatic and intentional; and
        (b) This may be resolved by a model in which actions are first intentional, and then automatic, but not both at once.

        Such a model is incompatible with examples, such as marathon running, where an ongoing commitment results in concurrent automatic movement. These require a model in which intentional and automatic processes are different, concurrent aspects of a coordinated, holistic action. While I have read only your post, it seems to be laying foundations, so the choice of model is critical to what will follow.

        Your response addresses neither simultaneity nor the ongoing relation between the intentional and physical theaters of operation underlying human agency. I wonder if you would address these points?

        • Wayne Wu

          Yes, you’ve misunderstood the position. You are correct that since (3) is true, (1) and (2) cannot both be true. That’s the apparent paradox (only “apparent” because it can be resolved). But the resolution is not what you infer (your (b)). The solution I propose, which is certainly not the only possible solution, is to relativize automaticity and control to features, that is, properties, of processes, namely actions. Among the features of an action are that it is directed at a certain object, that it occurs over a period of time, that it has a specific kinematic profile, and so on. One action has all of these properties, some of which can be controlled, others automatic. What is disallowed is that the same feature/property is both automatic and controlled at the same time/temporal range. That’s the import of (3). I hope that clarifies.

        • Wayne Wu

          That said, if these issues engage you, please read the book. I’m sorry OUP has not yet figured out how to make Chapter 1 free, but you can read the entire introduction on Google Books. Hopefully Chapter 1, which discusses the issues you are commenting on in much detail, will be made available soon.

          • Dennis Polis

            Thank you for your explanation. I will read your Introduction.

            Given my running example, in which both the intention and the physical action occur over the same period of time, and are directed to the same object(s), using the same kinematics, it would seem that the distinguishing feature is that control is defined in terms of an intentional state, while the resulting automatic response is physically defined — in terms of a kinematic profile. Thus, one belongs to the intentional theater of operation and the other to the physical theater. For certainly when I intend to run, I intend to run as I actually run (i.e. with the same kinematic profile), even though I may wish to run better.

