Empirically-Informed Approaches to Weakness of Will: A Brains Blog Roundtable

Weakness of will is a traditional puzzle in the philosophy of action. The puzzle goes something like this: 

FOLK PSYCHOLOGICAL THEORY: If, at time t, an agent judges that it is better to do A than B, and she believes she is free to do A, then, provided she tries to do either at that time, she will try to do A and not B.

WEAKNESS OF WILL: An agent judges that it is better to do A than B, believes that she is free to do A, but tries to do B.

But taken together, these statements are inconsistent. FOLK PSYCHOLOGICAL THEORY precludes the possibility of weakness of will (as characterized in WEAKNESS OF WILL), but WEAKNESS OF WILL asserts that it occurs. So can WEAKNESS OF WILL be possible, and if so, how? 

Philosophers since Davidson have approached the puzzle of weakness of will from the perspective of philosophical folk psychology. As in FOLK PSYCHOLOGICAL THEORY, philosophical folk psychology refers to philosophical theories describing human behaviors in terms of mental states such as intentions, beliefs, and so on (Lewis 1972, Stich and Nichols 2003). These theories are broadly realist in nature: they hold that people really experience mental states such as beliefs and desires. They further hold that our everyday descriptions of these mental states are roughly true. Philosophical analyses then precisify and systematize these everyday descriptions to develop full-fledged theoretical accounts of action.

There are numerous metaphysically- and pragmatically-oriented reasons for continuing to work within the framework of philosophical folk psychology. The metaphysically-oriented reasons are widely defended – as are criticisms of these reasons – and emphasize, among other features, philosophical folk psychology’s unsurpassed predictive power. The pragmatically-oriented reasons include, for example, the fact that no matter how theoretically dissatisfying one finds philosophical folk psychology, or how theoretically dissatisfying one expects to find it in the future, it is reasonable to continue working with the framework while it remains the best and most extensive account of action on offer.  

Still, even if we think of philosophical folk psychology as the truest theory of action currently available, we can and perhaps should remain open to the possibility that it may need to be refined or revised in the future. In fact, we probably should expect to refine or revise our concepts of what it means to ‘believe,’ ‘desire,’ and so on. This is where computational and empirical theories come in. We can use our best available computational and empirical theories to inform and even constrain our philosophical folk psychological theories and, by extension, our philosophical folk psychological theories of puzzles such as that of weakness of will.

This is the approach the authors participating in this roundtable have taken here. There are a variety of views on offer (in alphabetical order): 

I propose to replace the philosophical folk psychological notion of desire with the technical notions of reward and value, drawn from reinforcement learning. I then use these notions to argue that weakness of will is not only possible, but that there are in fact multiple kinds of weakness of will.

Nora Heinzelmann argues that delay discounting theory offers a powerful model for weak-willed behavior, describing and predicting how an agent’s preferences change over time and at what point they will reverse.

Neil Levy defends a judgment shift account of weakness of will modeled on the dysregulation of the mid-brain dopaminergic system in drug addiction.

Agnes Moors defends a dual process model with a parallel-competitive architecture, based on the idea that stimulus-driven and goal-directed processes can both be automatic.

Chandra Sripada endorses a robust faculty of Will, arguing that it is the only theoretical view with the resources to explain the phenomenon of weakness of will. He then argues that, as a matter of empirical fact, we do have a robust faculty of Will as a central part of our psychology.

Zina Ward takes a critical approach, noting that a view which depends on the partitioning of the mind “is only as solid as the partitions it relies on.” Ward further raises an important consideration regarding the puzzling and/or irrational nature of weakness of will, asking, “Is it possible for naturalists to preserve the ‘puzzle’ of weakness of will? And should we even try?”

We are excited to discuss these views here at the Brains Blog. You can read each contribution by clicking on the author’s name below. Thanks to everyone for participating!

***

[expand title=”Julia Haas:”]

Reward, Value, and Weakness of Will

In my overview to the roundtable, I suggested that there are good, pragmatic reasons for both continuing to work within the framework of philosophical folk psychology and for selectively refining and revising it. In my more focused contribution here, I propose that we make just such a selective revision by replacing the philosophical folk psychological notion of desire with the technical notions of reward and value. I argue that once we adopt such a framework, we can show that weakness of will is not only possible, but that there are actually several kinds of weakness of will.

Making the change

Timothy Schroeder (2004) proposes that we explain the ‘essence’ of the philosophical folk psychological theory of desire in terms of reward learning. On this reward-based view, “to have an intrinsic (positive) desire that P is to use the capacity to perceptually or cognitively represent that P to constitute P as a reward,” where the concept of reward is used in the sense of reinforcement learning and associated branches of computational neuroscience (henceforth, the decision sciences) (2004, p.131).

My proposed amendment to philosophical folk psychology holds the same theoretical commitments as Schroeder’s reward-based theory: desire is expressed in terms of the neuroscientific notions of reward and value. But my amendment goes a step further. Rather than continuing to ‘nest’ the notions of reward and value within the notion of desire, it proposes that we instead replace the philosophical folk psychological notion of desire with the technical notions of reward and value. Once we do so, we can draw on these notions directly, and thereby explicitly harness their explanatory power to help address puzzles and debates in the philosophy of action.

Developing the view

Building on Schroeder’s (2004) view, then, I propose to recast deliberation and choice, traditionally expressed in terms of beliefs and desires, in terms of reward and value instead. Here, reward is defined as the intrinsic desirability of a given stimulus. Value, for its part, is defined as the total, expected, future reward associated with a given state. On this framework, an agent’s goal is to find an optimal policy that allows her to maximize value through interactions with her environment. The major achievement of the decision sciences over the past several decades has been to elucidate specific computational strategies that allow an agent to discover and use such value-maximizing policies.
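
Written out in the standard reinforcement-learning notation (a textbook formulation rather than anything specific to this roundtable), where r_t is the reward received at time t, γ a discount factor, and π a policy, these definitions amount to:

```latex
% Value of a state s under policy \pi: total expected (discounted) future reward.
V^{\pi}(s) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k+1} \;\middle|\; s_{t}=s\right],
\qquad
\pi^{*} \;=\; \arg\max_{\pi} V^{\pi}(s) \quad \text{for all } s.
```

The discount factor γ is a standard ingredient of these models; the gloss of value above as ‘total, expected, future reward’ corresponds to setting γ close to 1.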

Presently, evidence suggests that the mind relies on at least three such computational strategies and, by extension, three semi-autonomous decision systems for choice and action. The hardwired (‘Pavlovian’) system relies on automatic approach and withdrawal responses to appetitive and aversive stimuli, respectively. The habitual (‘model-free’) system gradually learns and caches positive and negative state-action pairs. And the deliberative (‘model-based’) system explicitly represents and selects from possible state-action pairs, often described in terms of a decision tree. For a more detailed discussion of these systems, see here.

How do these multiple systems interact? To start, each of the systems partially evaluates the action alternatives. Simultaneously, each system generates an estimate of how accurate its prediction is relative to the decision problem at hand. For example, the habitual system typically coordinates choice in familiar, complex settings, because, due to its caching procedure, it typically has a higher accuracy profile in familiar decision problems, even if it predicts a lower overall value than do either its deliberative or hardwired counterparts. By contrast, the deliberative system typically coordinates choice in novel, high-risk settings, because it can explicitly represent the different alternatives, even if it predicts a lower overall value than do either its habitual or hardwired counterparts.

These estimates, or accuracy profiles, are then compared, and the system with the highest accuracy profile is selected to direct the corresponding valuation task. These interactions are thus thought to be governed by an ‘accuracy-based’ Principle of Arbitration:

PA: Following partial evaluation, the system with the highest accuracy profile, i.e., that system most likely to provide an accurate prediction of expected value, relative to the decision problem at hand, directs the corresponding assessment of value (Daw et al. 2005, Lee et al. 2014).

Notably, according to PA, the choice of which system is used in a given context depends on its accuracy profile, and not on its prediction of value. One consequence of this is that an agent can assess an action A as being preferable to action B, but still do B – a feature that will be important to explaining how weakness of will is possible.
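
A minimal sketch may make PA concrete. The three systems, their value estimates, and their accuracy profiles below are invented numbers used purely for illustration; nothing here is drawn from Daw et al. (2005) or Lee et al. (2014) beyond the general idea of accuracy-based arbitration:

```python
# Illustrative sketch of accuracy-based arbitration (PA).
# Each decision system reports (i) its value estimates for the options and
# (ii) an accuracy profile: how reliable its estimates are in this setting.

systems = {
    # values: estimated value of doing A vs. doing B (hypothetical numbers)
    "deliberative": {"values": {"A": 10.0, "B": 4.0}, "accuracy": 0.55},
    "habitual":     {"values": {"A": 3.0,  "B": 6.0}, "accuracy": 0.80},
    "hardwired":    {"values": {"A": 2.0,  "B": 5.0}, "accuracy": 0.40},
}

# PA: the system with the highest accuracy profile directs the valuation task,
# regardless of which system predicts the higher value.
controller = max(systems, key=lambda name: systems[name]["accuracy"])
chosen_action = max(systems[controller]["values"],
                    key=systems[controller]["values"].get)

print(controller)      # -> habitual
print(chosen_action)   # -> B: the agent assesses A as better (deliberatively)
                       #    but still does B
```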

Applying the revised view to the puzzle of weakness of will

Once we adopt the reward- and value-based account, we can show that weakness of will is not only possible, but that there are actually multiple kinds of weakness of will, elicited by interactions between the different decision systems. (I present two of these kinds here – for a more detailed discussion, see Haas 2018.)

Habitual weakness of will is elicited by interactions between the deliberative and habitual systems. Recall from above that the deliberative system typically has a higher reliability measure in novel settings, since its capacity for representation allows it to predict the values of various outcomes. By contrast, the habitual system is typically more reliable in complex but familiar circumstances, where representation would be both taxing and redundant. But it is not unusual for an important aspect of a familiar situation to change. Such circumstances elicit the most basic and harmless type of weakness of will.

If an agent opts for the typically more reliable but in fact inaccurate habitual system, she experiences habitual weakness of will. The information provided by the deliberative system enables the agent to know what the best course of action would be under these recently changed circumstances. Yet since the situation is broadly familiar, the habit-based approach has a high past cumulative success rate, or an overall high reliability measure. Thus, PA allocates the habitual system for action selection. Hence, the agent is aware of the most up-to-date and appropriate course of action in advance, but falls back on the less beneficial, habitual option (for an extended discussion, see Daw et al. 2005). The agent experiences the signature phenomenology of weakness of will: she recognizes that it would be preferable to do A, but feels herself choosing to do B.

Habitual weakness of will accounts for several paradigm cases of weakness of will, including Davidson’s classic example of brushing his teeth. Davidson describes lying in bed at night and realizing that he’s forgotten to brush his teeth. All things considered, he thinks to himself, it would be better just to stay in bed and get a good night’s sleep; but he gets out of bed and goes to brush his teeth anyway (1970, 30). Here, the move to reward and value makes sense of the otherwise perplexing action: the habitual system dictates that brushing one’s teeth is a reliably valuable course of action, even though the circumstances make it the less valuable action overall.

In pruning-based weakness of will, by contrast, the deliberative and hardwired systems interact to issue a suboptimal choice. This second type of interaction occurs when an option represented in the deliberative system’s decision tree elicits either a positive or negative hardwired response. For example, a strongly positive alternative, represented early on the decision tree, can cause the entire opposing branch of the tree to be ‘pruned,’ rejected, so that it is no longer considered (Huys et al. 2012). Conversely, a strongly negative alternative, represented at an early node of the decision tree, may cause the entire subsequent branch of the tree to be pruned, so that none of the subsequent values are computed or represented (Huys et al. 2012).
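
A toy sketch of the pruning idea, loosely inspired by Huys et al. (2012); the tree, the valences, and the pruning threshold are illustrative assumptions rather than the published model, and only the positive-pruning case is implemented:

```python
# Toy decision tree: each top-level option has an immediate (hardwired) valence
# and downstream consequences. If one option's immediate valence is strongly
# positive, the rival branch is pruned and its future values are never computed.

tree = {
    "eat bombe":  {"valence": +8, "children": {"poor sleep": -3, "regret": -4}},
    "skip bombe": {"valence": -1, "children": {"good sleep": +5, "pride": +2}},
}

PRUNE_IF_VALENCE_AT_LEAST = 6   # hypothetical threshold for Pavlovian pruning

def evaluate(tree):
    strong = [k for k, v in tree.items() if v["valence"] >= PRUNE_IF_VALENCE_AT_LEAST]
    considered = strong if strong else list(tree)   # rival branches are pruned
    totals = {}
    for option in considered:
        node = tree[option]
        totals[option] = node["valence"] + sum(node["children"].values())
    return totals

print(evaluate(tree))  # only "eat bombe" is ever evaluated; its downsides are
                       # represented, but no alternative remains to pursue
```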

This kind of pruning-based weakness of will can account for another classic case of weakness of will, described by J.L. Austin (1956/7, 198): at High Table, Austin is tempted to help himself to an extra portion of bombe, an ice-cream dessert, despite thinking he should not. Austin begins to represent his options in the form of a decision tree, consisting of the ‘eat’ and ‘don’t eat’ alternatives. The full tree would represent the ensuing consequences of both alternatives. But the highly appealing nature of the bombe, represented early in the tree, engages the hardwired system and causes the tree to be pruned, such that the non-bombe alternatives are no longer considered. Austin thus represents the negative consequences of the only remaining alternative – eating the bombe – but has no other alternatives left to pursue. He eats the bombe. The specific phenomenal features of Austin’s experience are also accounted for. Although it is the product of the hardwired system, pruning-based weakness of will needn’t be rushed or impulsive. Rather, the pruning of the decision tree simply eliminates certain choices, and thereby leaves the less optimal alternative to be pursued “with calm and even with finesse.”

Discussion

The key thing to notice is that, in both of these cases, we can explain weakness of will without running into the puzzle posed by the two inconsistent statements. We can generalize this observation. Recall from the overview that the original puzzle was framed by the following claim:

FOLK PSYCHOLOGICAL THEORY: If, at time t, an agent judges that it is better to do A than B, and she believes she is free to do A, then, provided she tries to do either at that time, she will try to do A and not B.

But in light of our shift from desires to rewards and value, we can now revise this claim so that it reads:

REVISED PFP THEORY: If, at time t, an agent’s decision system D values some option A more highly than some option B, he believes he is free to do A, and PA allocates D for action-selection, then if he tries to do either at that time, he will try to do A and not B.

Notice, though, that if we now add…

WEAKNESS OF WILL: An agent judges that it is best to do A at t, believes he is free to do A at t, but, despite trying to do something, does not try to do A at t.

…then the two statements are no longer inconsistent. On the plausible assumption that judgment is underwritten by the deliberative system, weakness of will occurs in any case in which PA allocates either the hardwired or the habitual system for action selection.

In this way, a careful revision to philosophical folk psychology allows us to arrive at a novel understanding of the nature – and kinds – of weakness of will.

References

Austin, J. L. (1956/57). A plea for excuses. In Austin (1979), 175-204.

Austin, J. L. (1979). Philosophical papers, 3rd ed., J. O. Urmson and G. J. Warnock (eds.), Oxford: Oxford University Press.

Davidson, D. (1970). How is weakness of the will possible?. In Davidson (1980), 21-42.

Davidson, D. (1980). Essays on actions and events. Oxford: Clarendon Press.

Daw, N. D., Niv, Y., & Dayan, P. (2005). Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience, 8(12), 1704-1711.

Huys, Q. J., Eshel, N., O’Nions, E., Sheridan, L., Dayan, P., & Roiser, J. P. (2012). Bonsai trees in your head: how the Pavlovian system sculpts goal-directed choices by pruning decision trees. PLoS Computational Biology, 8(3).

Lee, S. W., Shimojo, S., & O’Doherty, J. P. (2014). Neural computations underlying arbitration between model-based and model-free learning. Neuron, 81(3), 687-699. [/expand]


[expand title=”Nora Heinzelmann:”]

Delay discounting and weakness of will

Delay discounting theory has been widely used in the empirical sciences as a model for weakness of the will, and philosophers have followed suit (Zheng 2001, Mele 1987). In the following, I shall point out some limitations of this approach, arguing that although delay discounting theory cannot capture certain cases of weak-willed action, it is a powerful model for many core cases.

1. Delay discounting as a model for weakness of the will

Delay discounting theory was initially developed within classical economic theory (Samuelson 1937), whose framework axiomatically assumes that preference, choice, and (expected) value or utility [1] are congruent (von Neumann & Morgenstern 1953 [1944], Becker 1976, Steele & Stefánsson 2016). Very roughly, when presented with several options or prospects, an agent prefers one over another iff it is more desirable or choice-worthy [2]. Preference is thus a relation between options. Furthermore, the framework assumes that the agent chooses the most preferred option, which is reflected in their behaviour and thus empirically measurable. In typical circumstances, a utility function numerically represents the preference relation. This function maps options onto utilities or values.

Delay discounting is the change in a prospect’s (expected) value, and thus in preferences concerning it, with its temporal delay: typically, the more delayed the reward is, the lower its discounted value, and, other things being equal, an agent chooses the earlier of two delayed rewards.

But the agent may discount different rewards at different rates (e.g., she might discount food more steeply than gold) or assign different delays to them, i.e., she might expect to get one reward earlier than another. Hence the relative preference between two rewards may change over time as the delay elapses. In some situations, this can lead to a preference reversal: whilst the agent prefers A over B at some point in time, she prefers B over A at another time.

Let us now see how this approach can model weak-willed behaviour. A classic example of weakness of the will is yielding to the temptation of delicious but unhealthy food (e.g., Aristotle, Nicomachean Ethics 1147a31–1147b6; Mele 2012, p. 37). Assume an agent knows that she will ruin a good night’s sleep if she overeats at dinner. Imagine she anticipates that she will be tempted to overeat tonight and therefore resolves to skip dessert. Facing the two options of having dessert or foregoing it for a good night’s sleep, she thus prefers the latter, she values it more, and she would choose it; i.e., if she were to order dinner now, she would not order dessert. Within our discounting framework, the discounted value of the dessert is lower than the discounted value of a good night’s sleep.

But now the delays elapse and dinnertime arrives. The agent is suddenly very tempted to have dessert. Imagine she orders it after all, knowing that she is thereby sacrificing a good night’s sleep. Within the discounting framework, the expected value of the dessert is now greater than that of the sound sleep. This is possible because sleeping well is more delayed than enjoying dessert. Moreover, the agent probably discounts food and sleep at different rates, i.e., she discounts the dessert more steeply than the sleep, so that its value rises more rapidly as its delay shrinks. Hence she reverses her preferences and gives in to temptation; she performs a weak-willed action.
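
To make the example concrete, here is a small sketch using a hyperbolic discount function, one common choice in this literature; the amounts, rates, and delays are made-up numbers chosen only so that the reversal shows up:

```python
# Hyperbolic discounting sketch of the dessert example.
# All amounts, rates, and delays are invented for illustration,
# not empirical estimates.

def discounted_value(amount, delay_hours, k):
    """Hyperbolic discounting: V = A / (1 + k * delay)."""
    return amount / (1.0 + k * delay_hours)

AMOUNT_DESSERT, K_DESSERT = 5.0, 1.0    # smaller reward, steeply discounted
AMOUNT_SLEEP,   K_SLEEP   = 10.0, 0.3   # larger reward, shallowly discounted

def preference(dessert_delay, sleep_delay):
    v_dessert = discounted_value(AMOUNT_DESSERT, dessert_delay, K_DESSERT)
    v_sleep   = discounted_value(AMOUNT_SLEEP, sleep_delay, K_SLEEP)
    return ("dessert" if v_dessert > v_sleep else "sleep", v_dessert, v_sleep)

print(preference(7, 12))  # at noon: ('sleep', ~0.63, ~2.17) -> resolves to skip dessert
print(preference(0, 5))   # at dinner: ('dessert', 5.0, 4.0) -> the preference has reversed
```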

2. Limitations

Delay discounting theory is extremely powerful because it can not only model but also predict choice with econometric precision. Not surprisingly, it has been used to model a wide variety of weak-willed behaviour in different disciplines, from dieting to procrastination to addiction (Ainslie 2001, Kirby, Petry & Bickel 1999). However, this approach also has at least three limitations. I shall take them in turn.

First, recall that choice, value (utility) and preference are intimately linked within the economic framework of delay discounting theory. That is, it is axiomatically impossible to describe weakness of will in any of the following ways:

1. An agent chooses A over B but values B more than A .

2. An agent prefers A over B but chooses B over A .

3. An agent values A more than B but prefers B over A .

Thus, philosophers relying on evidence that presupposes the economic framework should refrain from describing weakness of will in any of those three or similar ways. This seems to be primarily a terminological issue.

Second and relatedly, delay discounting theory is not suitable to model what we may call instantaneous weakness of the will, where an agent chooses one option and at the same time also judges that it would be better to do something else. For instance, imagine our agent summons a waiter to order dessert but simultaneously tells her friends that she really prefers a good night’s sleep. Such a case seems almost inconceivable from the perspective we are considering here. It seems we would have to deny that both actions are genuine: either the agent is not wholeheartedly ordering dessert, or her utterance is not sincere.

Third, there is a more technical issue with the delay discounting model as we have conceived of it so far. Take the classic ‘marshmallow case’: a child is presented with a choice between either having one marshmallow immediately or waiting for a second one to arrive (Mischel & Ebbesen 1970, Mischel, Shoda & Rodriguez 1989). Imagine the child resolves to wait, indicating that she values two marshmallows later more than one now. However, after having waited for a while, she gives in to temptation and eats the one marshmallow. This reveals that she prefers one marshmallow now over two marshmallows later. But if that is so, she should not initially have started to wait, when the delay was even greater. It seems as if the child’s impatience increased over time. Classical delay discounting theory does not permit this: a delayed reward only becomes more valuable as the delay elapses, not less. As a result, researchers have proposed to replace classical delay discounting models with more complex ones. These allow for such preference reversals by incorporating, e.g., agents’ sensitivity towards uncertainty or visceral impulses (Laibson 1997, Dasgupta & Maskin 2005). Philosophers seem well advised to consider those models when drawing on empirical evidence in their work.
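
As a rough illustration of the kind of enrichment at issue, the sketch below adds one extra ingredient to the basic model: the child’s confidence that the delayed reward will actually arrive erodes as the wait drags on, so the delayed option’s expected value can fall even though its delay shrinks. This is only loosely in the spirit of the uncertainty-based proposals cited above; the functional forms and numbers are assumptions made for the example:

```python
# Sketch: a delayed reward whose expected value can fall as the wait drags on.
# Assumption for illustration: the child's subjective probability that the
# second marshmallow will really be delivered declines with time already waited.

def value_later(amount, remaining_delay, waited, k=0.05, trust_decay=0.15):
    prob_delivery = max(0.0, 1.0 - trust_decay * waited)         # confidence erodes
    return prob_delivery * amount / (1.0 + k * remaining_delay)  # hyperbolic discount

ONE_NOW, TWO_LATER, TOTAL_WAIT = 1.0, 2.0, 10.0  # minutes

for waited in (0.0, 4.0, 8.0):
    later = value_later(TWO_LATER, TOTAL_WAIT - waited, waited)
    choice = "keep waiting" if later > ONE_NOW else "eat the marshmallow now"
    print(waited, round(later, 2), choice)
# 0.0 1.33 keep waiting
# 4.0 0.62 eat the marshmallow now
# 8.0 0.0  eat the marshmallow now
```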

3. Conclusion

Delay discounting theory offers a powerful model for weak-willed behaviour: it can describe and predict how an agent’s preferences change over time and at what point she will reverse them. It is thus well suited to account for many prime examples of weakness of the will. Still, it is so far not able to account for all of them. For one thing, the approach might find it difficult to describe instantaneous cases of seemingly weak-willed behaviour. Future research may be able to address this and other issues.

Notes

[1] I use the two interchangeably.

[2] We set aside special cases like indifference.

References

Ainslie, G. (2001). Breakdown of will, Cambridge University Press, Cambridge.

Aristotle (n.d.). Nicomachean ethics, ed. I. Bywater (1894), Oxford University Press, Oxford.

Becker, G. (1976). The economic approach to human behavior, University of Chicago Press.

Dasgupta, P. & Maskin, E. (2005). Uncertainty and hyperbolic discounting, The American Economic Review 95(4): 1290–9.

Kirby, K., Petry, N. & Bickel, W. (1999). Heroin addicts have higher discount rates for delayed rewards than non-drug-using controls, Journal of Experimental Psychology 128(1): 78–87.

Laibson, D. (1997). Golden eggs and hyperbolic discounting, Quarterly Journal of Economics 112(2): 443–77.

Mele, A. (1987). Irrationality, Oxford University Press, New York.

Mele, A. (2012). Backsliding, Oxford University Press, Oxford.

Mischel, W. & Ebbesen, E. (1970). Attention in delay of gratification, Journal of Personality and Social Psychology 16(2): 329–37.

Mischel, W., Shoda, Y. & Rodriguez, M. (1989). Delay of gratification in children, Science 244(4907): 933–8.

Samuelson, P. (1937). A note on measurement of utility, The Review of Economic Studies 4(2): 155–61.

Steele, K. & Stefánsson, O. (2016). Decision theory, in E. N. Zalta (ed.), The Stanford encyclopedia of philosophy, winter 2016 edn, Metaphysics Research Lab, Stanford University.

von Neumann, J. & Morgenstern, O. (1953 [1944]). Theory of games and economic behaviour, 3rd edn, Princeton University Press, Princeton.

Zheng, Y. (2001). Akrasia, picoeconomics, and a rational reconstruction of judgment formation in dynamic choice, Philosophical Studies 104(3): 227–51.

[/expand]
[expand title=”Neil Levy:”]
Nessa takes herself to judge that she ought to work tonight on the paper she promised for a volume, but she watches Netflix instead. She acts intentionally and her behavior is reasons-responsive: if someone had paid her to work on the paper, or if the viewing options had been worse, she would have acted in accordance with her judgment. In the past, when Nessa has faced conflicts very like this one, she has sometimes acted in accordance with her judgment and sometimes she has given in to the temptation to do something more immediately rewarding. She experiences her behavior as voluntary. But why would an agent voluntarily and intentionally act contrary to their own best judgment?

Philosophers often seem to assume, at least implicitly, that there is some one thing going on in cases like this: they are explained by a mismatch between the strength of Nessa’s desires and the motivational power of her judgments, or by a failure of her deliberative system to override her impulsive system, and whatever explains this case will equally explain her previous failures, as well as mine and yours. I suspect that’s a mistake (in part, but only in part, because I suspect that the deliberative system is a virtual system, implemented on the basis of non-deliberative mechanisms, and without motivational powers of its own). There are many things going on in cases we describe as involving weakness of the will, and no unified account can capture them all. Nor should we think that there is any neat way of picking out which mechanism is at work in which case: I doubt introspection can reliably distinguish between cases.

Perhaps in some cases, agents like Nessa have desires that are out of line with the motivational strength of their judgments. Perhaps some are explained by a depletion of self-control resources (a story I have defended elsewhere (Levy 2011), but which I am no longer confident about, in the light of the failure of crucial experiments to replicate; Hagger, Chatzisarantis et al. 2016). In many, I suspect, the agent is simply wrong about what her own best judgment is. Our access to our mental states is patchy and often indirect, and I doubt that ‘best judgment’ has a proprietary phenomenology. Very often, we discover what our judgments are by seeing what we are disposed to say, and we may sometimes be disposed to report what we think we ought to do in the light of certain considerations (moral or prudential, in particular), mistaking that for what we judge we ought, all things considered, to do.

I want to highlight a different mechanism for weakness of will here, however. This mechanism bears some resemblance to the mistake account just sketched, inasmuch as when the agent acts, she doesn’t act contrary to her own best judgment. However, when she articulated the judgment, she may indeed have reported her own best judgment. In the meantime, she has changed her mind. She has experienced what Richard Holton (2009) calls judgment shift.

The inspiration for the judgment shift account I want to advance here comes from the observation that the mid-brain dopaminergic system is dysregulated in drug addiction in a way that suggests a malfunction of its role as a prediction system. In a series of classic experiments Schultz and colleagues (Schultz et al. 1992; Schultz et al. 1997) demonstrated a spike in phasic dopamine in response not to reward (as sometimes thought) but to unexpected reward. The system adapts to reward delivery if it is expected. Thus, the spike in phasic dopamine occurs not in response to the reward itself, but to a signal that the reward is about to be delivered (assuming that signal is itself unexpected). Because drugs of addiction (including alcohol and tobacco) in one way or another increase the availability of dopamine, the very currency which the system uses in prediction, it cannot adapt to these rewards. A spike in phasic dopamine occurs in response both to a signal that a reward is available (cravings are very sensitive to cues of drug availability in addicts) and to the reward itself. The result is that the initial signal is registered as too small, relative to the actual reward value of the good, and the system attempts to adjust by increasing its magnitude. But no such adjustment will ever be enough, so long as the reward increases dopamine directly.
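
A toy sketch of the point, with arbitrary numbers: for a natural reward the prediction error is gradually learned away, whereas a reward that directly adds to the dopaminergic error signal (modeled here, as a simplifying assumption, by a boost term that the learned value cannot cancel) keeps generating error no matter how large the learned value grows:

```python
# Toy temporal-difference-style learning about a cue that predicts a reward.
# V is the learned value of the cue. For a natural reward the error (reward - V)
# shrinks to ~0 as V converges: the system "adapts". The drug case is modeled,
# as an assumption, by a boost that adds to the error and cannot be predicted
# away, so the error never disappears and V keeps inflating.

def learn(reward, drug_boost=0.0, alpha=0.3, trials=60):
    V = 0.0
    for _ in range(trials):
        error = max(reward - V, 0.0) + drug_boost  # drug component is non-compensable
        V += alpha * error
    return round(V, 2), round(error, 2)

print(learn(reward=1.0))                  # natural reward: V ~ 1.0, error ~ 0.0
print(learn(reward=1.0, drug_boost=0.5))  # drug: error stays ~ 0.5, V keeps growing
```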

When the person encounters a cue predictive of drug availability, she therefore experiences a powerful signal that the world is better than expected. This signal constitutes what is called, in the predictive processing literature, surprisal, or prediction error (Levy 2014). The brain is an error minimization machine. There are always multiple ways of updating the model of the world, or acting on the world, to minimize error, but sometimes the most accessible path involves updating the judgment, such that the person shifts from judging that this is a world in which drugs are not to be consumed to judging that they ought to be consumed. She may have sincerely judged that she ought not to consume, but now she makes a different judgment: prudentially, but not all things considered, drugs are not to be consumed; or drugs ought not to be consumed all things considered except when… (the day has been especially trying; it would be rude to refuse… we are excellent confabulators).

Might this same mechanism be at work in ordinary cases of weakness of will, i.e., cases in which the prediction mechanism is working as designed? While the idea is speculative (doubly speculative, in fact: the empirical evidence in favor of the judgment shift account of addiction underdetermines the account), it is, I think, plausible. When agents with properly functioning prediction systems encounter cues of reward availability, these cues constitute a prediction error relative to a model of the world on which those particular rewards are not to be consumed (now). Because the system is functioning as designed, the signal will be weaker than in the case of the addict. That means it is less likely to be passed up the processing hierarchy, less attention grabbing (attention is a mechanism for making errors more precise), and more easily minimized by action, physical or mental. But a sufficiently large and sufficiently precise error must be minimized, and one way of minimizing it is model update: adopting a model of the world according to which the reward should be consumed (now). The prediction update story may thereby underwrite judgment shift.

How do we best prevent judgment shift, in ourselves and others? There is more than one way to do this. If we are strongly committed to a higher-order model that conflicts with such shifts (say, a conception of ourselves as continent), we are probably less likely to experience them. Of course, it is not trivial to get ourselves to be genuinely committed to such a model. It is a doxastic state, and committing to it probably requires generating a great deal of evidence that it is true: that is, actually resisting temptation. So there’s a chicken and egg problem here. An easier way to avoid judgment shift, and one that I expect is routinely utilized by ordinary people (with or without realizing that’s what they’re doing), is structuring one’s activities, or the environment in which one acts, so that cues that signal reward availability are not encountered when they’re unwanted (Levy 2017). This strategy, too, may not be easy to implement, inasmuch as it requires a great deal of control over one’s environment and one’s activities. Others may attempt to wrest control from us, sometimes with the aim of ensuring we encounter cues that we might prefer to avoid. It’s not for nothing that adverts are placed in locations where they are hard to avoid, or that supermarkets place the high-margin and highly tempting candy bars near the checkouts. They’re trying to induce judgment shift in us.

For most of us, self-control depends in important part on some degree of control over our environment. Even for the continent – those with a self-model to which they assign a high probability, inconsistent with weakness of will – self-control may depend, in its genesis, on control over the environment: to possess such a self-model at least typically is going to depend on prior possession of evidence in its favor, and that evidence will probably be gathered through successful control in a control-conducive environment. A control-conducive environment, in turn, is likely to be one that is reasonably under our control. If anything like this is true, then self-control is very importantly a political issue. Who controls our environments? Who lacks the resources for such control, and instead finds themselves buffeted by external forces? By focusing only on internal mechanisms for control, and even more by blaming those who suffer from self-control failures, we turn ourselves into the ideological footsoldiers of oppression.

References.

Hagger, M. S., Chatzisarantis, N. L. D. et al. 2016. A Multilab Preregistered Replication Of The Ego-Depletion Effect. Perspectives on Psychological Science 11: 546–573.

Holton, R. 2009. Willing, Wanting, Waiting. Oxford: Oxford University Press.

Levy, N. 2011. Resisting Weakness of the Will. Philosophy and Phenomenological Research 82: 134-155.

Levy, N. 2014. Addiction as a Disorder of Belief. Biology & Philosophy, 29: 315-225.

Levy, N. 2017. Of Marshmallows and Moderation. In Walter Sinnott-Armstrong and Christian B. Miller (eds.) Moral Psychology, Volume 5: Virtue and Happiness. Cambridge: MIT Press.

Schultz W., Apicella P., Scarnati E., Ljungberg T. 1992. Neuronal activity in monkey ventral striatum related to the expectation of reward. Journal of Neuroscience 12: 4595-4610.

Schultz W., Dayan P., Montague P.R. 1997. A neural substrate of prediction and reward. Science 275: 1593-1599.

[/expand]
[expand title=”Agnes Moors:”]

Towards a goal-directed account of weak-willed behavior

People often engage in behavior that is not in their best interest – so-called suboptimal or irrational behavior. Examples (of partially overlapping categories) are action slips (e.g., typing in one’s old password), costly or recalcitrant emotional behavior (e.g., costly aggression, avoidance in fear of flying), arational behavior (e.g., slamming the door out of anger), impulsive/compulsive behavior (e.g., costly aggression, addiction), and weak-willed or akratic behavior. The latter category comprises behaviors that people engage in despite correctly judging that another course of action would be more optimal. People know smoking and drinking are bad for them, but they do it anyway. They know exercising is good for them, but they fail to get off the couch.

To explain suboptimal behaviors, theorists have turned to dual process models (Heyes & Dickinson, 1990), in which behaviors can be produced either by (a) a stimulus-driven process in which a stimulus activates the association between the representation of stimulus features and the representation of a response (S→[S-R]→R) or (b) a goal-directed process in which the values and expectancies of the outcomes of one or more behavior options are weighed before an action tendency is activated (S→[S:R-O→R]→R). Note that the term habit is used for stimulus-driven processes that have been installed via an overtrained operant conditioning procedure in which performance of the same response given a certain stimulus repeatedly led to the same outcome. This procedure is supposed to stamp in the S-R association while the outcome is no longer represented or activated.

To diagnose whether a process is stimulus-driven or goal-directed, researchers typically conduct a devaluation test or a contingency degradation test (Hogarth, 2018). If devaluation of the outcome of a behavior or a degradation of the likelihood that the behavior will lead to the outcome subsequently reduces (/does not reduce) the behavior, it is inferred that the value and expectancy of the outcome of the behavior were represented (/not represented) and hence that the behavior was caused by a goal-directed (/stimulus-driven) process. 
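
A minimal sketch of the logic of a devaluation test; the response, the outcome, and the numbers are made up purely to show why the two processes come apart:

```python
# Why a devaluation test separates the two processes.
# The goal-directed process recomputes expected value from the *current*
# value of the outcome, so devaluing the outcome changes behavior; the
# stimulus-driven (habitual) process responds from a cached S-R strength
# learned before devaluation, so behavior is insensitive to the change.

outcome_value = {"food pellet": 1.0}
expectancy = {"press lever": {"food pellet": 0.9}}   # P(outcome | response)
cached_sr_strength = {"press lever": 0.9}            # stamped in during training

def goal_directed_rate(response):
    return sum(p * outcome_value[o] for o, p in expectancy[response].items())

def habitual_rate(response):
    return cached_sr_strength[response]

print(goal_directed_rate("press lever"), habitual_rate("press lever"))  # 0.9 0.9

outcome_value["food pellet"] = 0.0   # devaluation (e.g., satiation, pairing with illness)

print(goal_directed_rate("press lever"), habitual_rate("press lever"))  # 0.0 0.9
```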

Traditional dual-process models have a default-interventionist architecture, with the stimulus-driven process as the default and the goal-directed process as an occasional intervenor. This architecture is rooted in the idea of a trade-off between automaticity and optimality, which are both tied to the computational complexity of the processes. Stimulus-driven processes are seen as simple and therefore automatic but at the same time rigid (because they are insensitive to outcome devaluation and contingency degradation) and therefore more likely to produce suboptimal behavior. Goal-directed processes, on the other hand, are seen as complex and therefore nonautomatic but at the same time flexible (because they are sensitive to outcome devaluation and contingency degradation) and therefore more likely to produce optimal behavior. The automatic nature of the stimulus-driven process makes it the default process. However, because this process is more likely to lead to suboptimal behavior, it must sometimes be corrected by the goal-directed process. The problem is that this goal-directed process is seen as nonautomatic, which means that it can only intervene when there is enough opportunity, capacity, and/or motivation (Moors, 2016; Moors & De Houwer, 2006). When these factors are low, the organism has no choice but to switch from the goal-directed process to the stimulus-driven process. 

Empirical evidence for the default-interventionist model comes in the form of dissociations showing that when opportunity, capacity, and/or motivation are high, the goal-directed process determines behavior whereas when these factors are low (because of time pressure, stress, sleep deprivation etc.) the stimulus-driven process takes over (e.g., Schwabe & Wolf, 2009; but see below).  

According to the traditional model, people continue to smoke against their better judgment because their  behavior is caused by a stimulus-driven process (a habit) in which the sight of cigarettes directly activates the tendency to smoke, and the goal-directed process that induced the tendency to refrain from smoking (at the service of a health goal) was unable—“too weak”—to successfully intervene (Baumeister, 2017; Everitt, Dickinson, & Robbins, 2001; Tiffany, 1999; Wood & Rünger, 2016). 

Recently, I proposed an alternative dual process model (Moors, 2017a, b; Moors, Boddez, & De Houwer, 2017; Moors & Fischer, in press) with a parallel-competitive architecture, which is rooted in the idea that stimulus-driven and goal-directed processes can both be automatic (for arguments, see Moors et al., 2017). If both processes can be automatic, there should be a substantial number of cases in which they operate in parallel and enter into competition with each other. The model moreover assumes that when both processes do enter into competition, the goal-directed process should win, because goal-directed processes are automatic and optimal whereas stimulus-driven processes are only automatic, and the system should prioritize the process with the most advantages. In this model, the goal-directed process is the default determinant of behavior and will determine the lion’s share of behavior, whereas the stimulus-driven process determines behavior only in exceptional cases.

In line with this view, evidence for stimulus-driven processing based on habit learning seems to be weak. In animal outcome devaluation studies, for instance, stimulus-driven drug seeking behavior is confined to highly specific conditions such as a no-choice procedure (a single action leading to a single outcome: drugs), and it is fragile in that it is quickly taken over by a goal-directed process when the devalued outcome (which is omitted in the test phase) is reintroduced (Hogarth, 2018). These conditions do not resemble those in human natural environments: We always have a choice between drugs and natural rewards, and we never get a break from the devalued outcome (e.g., hangover, guilty feelings).

In humans, evidence for the role of stimulus-driven processing in drug seeking and other behavior is even weaker (Hogarth, 2018). A recent series of five attempts to find evidence for habit learning in humans failed (de Wit et al., 2018). Several prior studies that did report evidence for stimulus-driven processing used a task design (the “fabulous fruit game”; de Wit, Niry, Wariyar, Aitken, & Dickinson, 2007) that turned out to be unsuitable for detecting stimulus-driven processing (De Houwer, Tanaka, Moors, & Tibboel, 2018).

Evidence for goal-directed processing is abundant, not only as the determinant of optimal but also as the determinant of suboptimal behavior such as drug seeking (see reviews by Hogarth, 2018). Before citing some of this evidence, let me first explain how the alternative dual process model accounts for suboptimal behavior. To do this, I need to elaborate a bit more on the goal-directed process. 

The goal-directed process does not occur in isolation, but can be embedded in a cycle, starting with a comparison between a stimulus and a first goal (which is the representation of a valued outcome). If the stimulus and this first goal are discrepant, a second goal arises, which is to reduce the discrepancy. This can be done either by acting to change the actual stimulus (i.e., assimilation), by changing the first goal (i.e., accommodation), or by changing the interpretation of the stimulus (i.e., immunization), depending on which of these broad strategies has the highest expected utility. If the person chooses to act, the specific action option with the highest expected utility will activate its corresponding action tendency (which can be considered as a third goal). Once the action tendency is translated into an overt action, it produces an outcome, which is fed back as the input to a new cycle. The cycle is repeated until there is no discrepancy left. Note that all steps in the cycle can in principle occur outside of awareness.
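
Schematically, one pass through the cycle might be sketched as follows; the strategy labels follow the text, while the expected utilities are invented for the example:

```python
# One schematic pass through the goal-directed cycle described above.
# Expected utilities of the broad strategies are illustrative assumptions.

def one_cycle(stimulus, first_goal, strategy_utilities):
    if stimulus == first_goal:
        return "no discrepancy; nothing to do"
    # Second goal: reduce the discrepancy via the strategy with the highest
    # expected utility (assimilation, accommodation, or immunization).
    best = max(strategy_utilities, key=strategy_utilities.get)
    return "discrepancy detected; chosen strategy: " + best

strategies = {
    "assimilation (act on the world)": 0.6,
    "accommodation (change the first goal)": 0.3,
    "immunization (reinterpret the stimulus)": 0.2,
}
print(one_cycle(stimulus="stressed", first_goal="calm", strategy_utilities=strategies))
```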

People have many goals, some of which may conflict with each other. In the alternative model, self-regulation conflicts are not understood as conflicts between a stimulus-driven and a goal-directed process, but as conflicts between two goal-directed processes. If a health goal does not manage to make a person quit smoking, there must be another goal that wins the competition, either because it is more valued or because it has a higher expectancy of being reached. Examples of other goals are a hedonic goal, a social goal, the goal for autonomy, etc. (Baumeister, 2017; Kassel, Stroud, & Paronis, 2003).

The multiple-goal argument has implications for the methods used to diagnose whether a behavior is caused by a stimulus-driven or goal-directed process. The upshot is that if a behavior is found to be insensitive to the devaluation of one outcome, it may still be driven by another outcome. If stress leads to eating beyond satiation, this may not indicate that eating was stimulus-driven (as argued by Schwabe & Wolf, 2009), but perhaps that eating is a strategy to reduce stress. Recent work has started to re-examine purported evidence of stimulus-driven processing by manipulating the fulfilment of other goals (see also Kopetz, Woerner, & Briskin, 2018). 

Critics may object that agents of weak-willed behavior typically do not attribute a higher value to their hedonic goal than to their health goal. And even if they do (but are unaware), this does present a puzzle. 

One part of the solution is to consider that for many substance users, the hedonic goal is not the goal to add extra positive sparkles to an already bearable existence, but rather the goal to reduce unbearable stress or negative affect. What good is it to strive for a long, healthy life, if you cannot even survive another day?

Another part of the solution lies in the fact that behaviors are not only chosen on the basis of the values of their outcomes, but also on the basis of the expectancies that they will lead to these outcomes. So even if a smoker does not attribute a higher value to her hedonic goal than to her health goal, she may still estimate that one smoke is more likely to produce pleasure now than that abstinence is likely to avoid bad health later. 

One may argue that behavior that is still at the service of some goal does not qualify as truly suboptimal (because it contributes to goal satisfaction), but merely appears to be suboptimal. A smoker may be correct in estimating that one smoke is more likely to produce pleasure now than that abstinence is likely to avoid bad health later. Thus, the optimal decision would be to have another smoke, even if—paradoxically—an accumulation of such optimal decisions is likely to result in a suboptimal outcome in the end (Ainslie, 2001). There is room for debate, of course, whether optimality should only be considered in relation to “the end” or whether it is also optimal to satisfy short-term goals (Lemaire, 2016).

The reason why many decisions appear suboptimal is that the goal that is driving the behavior is not always obvious, or conflicts with societal norms. A smoker may not realize how intense the stress is that she tries to alleviate by smoking, or she may not be aware that smoking is partly an act of rebellion, a way to affirm her autonomy (against “nanny state” coercion; Le Grand & New, 2015).

But goal-directed processes may also be invoked to explain truly suboptimal behavior. Such behavior can be understood as the result of noise or sand in the wheels of the goal-directed cycle. Several things may go wrong in this cycle.

First, a person may fail to notice a discrepancy between the stimulus and a goal and hence the need to take action, or she may fail to notice that a stimulus has different implications for different goals. However, this is typically not the place where things derail in the case of weak-willed behavior. 

Second, a person may choose a less than optimal behavior option because more optimal behavior options are simply lacking from her behavior repertoire. It is possible that people who smoke to reduce their stress have not yet considered other, less costly behavior options to reduce their stress, such as vaping or yoga. 

Third, given that expectancies and values are subjective, they may not correspond to objective likelihoods and values (Tversky & Kahneman, 1992). In many self-regulation conflicts, the choice is between one behavior option (e.g., smoking) that has a short-term, certain, positive outcome (e.g., hedonic pleasure) and another behavior option (e.g., abstinence) that has a long-term, uncertain, negative outcome (e.g., cancer). All else equal, short-term outcomes are seen as more likely (i.e., the availability effect) and their value weighs more heavily (i.e., the temporal discounting effect) than long-term outcomes. Temporal discounting happens to be more pronounced in smokers, although it is unclear whether this is a predisposing factor or a defensive consequence of smoking (Baumeister, 2017). Likewise, certain outcomes are (of course) seen as more likely than uncertain ones, but they are also weighted more heavily (i.e., the certainty effect).
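
The sketch below shows how such biases can tip the computed balance toward smoking even when the objective stakes favor abstinence; the weighting and discount functions, and all the numbers, are illustrative assumptions rather than fitted values:

```python
# Subjective expected value of "smoke now" vs. "abstain", with two crude biases:
# temporal discounting of delayed outcomes and down-weighting of uncertain ones
# (a stand-in for the certainty effect). All numbers are illustrative.

def discount(value, delay_years, k=0.5):
    return value / (1.0 + k * delay_years)   # hyperbolic temporal discounting

def weight(prob, gamma=1.5):
    return prob ** gamma                     # uncertain outcomes are under-weighted

v_smoke   = weight(1.0) * discount(1.0,  delay_years=0.0)   # certain, immediate pleasure
v_abstain = weight(0.3) * discount(20.0, delay_years=20.0)  # uncertain, distant health benefit

print(round(v_smoke, 2), round(v_abstain, 2))  # ~1.0 vs ~0.3: smoking wins subjectively,
                                               # though 0.3 * 20 = 6 objectively favors abstinence
```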

In addition to these content-less biases, smokers’ expectancies about whether smoking will lead to specific other outcomes, such as hedonic outcomes (in the form of stress reduction or the absence of withdrawal symptoms), may also be more or less accurate. There is no simple answer to the question whether smokers’ belief in the stress-reducing powers of smoking is accurate (e.g., Cook, Baker, Beckham, & McFall, 2017). There is evidence that smokers do overestimate the intensity of withdrawal symptoms, and this may encourage them to give in sooner rather than later. “If the end point will be the same, why suffer first?” (Baumeister, 2017, p. 81). 

Note that the theoretical rationality of biases and false beliefs does not need to match their practical rationality: Some in/accurate beliefs may promote/hinder goal satisfaction. For instance, optimistic illusions have been associated with increased well-being (although as always, the picture is mixed, e.g., Bortolotti & Antrobus, 2015). 

Finally, one may wonder whether it makes sense to talk about the objective value of a goal/outcome. At first sight, values are always values for a person, and so it seems that values can only be subjective. On second thought, however, the value of any lower-order goal depends on the expectancy that it will satisfy a valued higher-order goal, and this expectancy could be more or less accurate. A person may have the goal to become rich as a strategy to achieve happiness, but this strategy may turn out to be ineffective (Ryan & Deci, 2001). Applied to the case of smoking against better judgment, a person may smoke to satisfy the goal for hedonic pleasure, but the goal for hedonic pleasure (or prioritizing hedonic pleasure over health) may turn out to be an ineffective strategy to achieve happiness.

In sum, some cases of weak-willed behavior may more properly be categorized as strong-willed: they were driven by goals that were more valuable or more easily achievable, but not always obvious to the agent, and therefore they appeared weak-willed. Other cases of weak-willed behavior are best understood as stemming from errors in the evaluation of values or expectancies, but here too, the term weak-willed does not cut any ice.

References

Ainslie, G. (2001). Breakdown of will. New York: Cambridge University Press.

Bortolotti, L., & Antrobus, M. (2015). Costs and benefits of realism and optimism. Current Opinion in Psychiatry, 28(2), 194.

Cook, J. W., Baker, T. B., Beckham, J. C., & McFall, M. (2017). Smoking-induced affect modulation in nonwithdrawn smokers with posttraumatic stress disorder, depression, and in those with no psychiatric disorder. Journal of Abnormal Psychology, 126(2), 184.

De Houwer, J., Tanaka, A., Moors, A., & Tibboel, H. (2018). Kicking the habit: Why evidence for habits in humans might be overestimated. Motivation Science, 4, 50-59.

de Wit, S., Kindt, M., Knot, S. L., Verhoeven, A. A., Robbins, T. W., Gasull-Camos, J., … & Gillan, C. M. (2018). Shifting the balance between goals and habits: Five failures in experimental habit induction. Journal of Experimental Psychology: General, 147(7), 1043-1065.

de Wit, S., Niry, D., Wariyar, R., Aitken, M. R. F., & Dickinson, A. (2007). Stimulus-outcome interactions during instrumental discrimination learning by rats and humans. Journal of Experimental Psychology: Animal Behavior Processes, 33, 1–11. 

Everitt, B. J., Dickinson, A., & Robbins, T. W. (2001). The neuropsychological basis of addictive behaviour. Brain Research Reviews, 36(2-3), 129-138.

Hogarth, L. (2018). A critical review of habit theory of drug dependence. In B. Verplanken (Ed.), The psychology of habit (pp. 325-341). Cham: Springer.

Kopetz, C. E., Woerner, J. I., & Briskin, J. L. (2018). Another look at impulsivity: Could impulsive behavior be strategic? Social and Personality Psychology Compass, 12(5), e12385.

Le Grand, J., & New, B. (2015). Government paternalism: Nanny state or helpful friend? Princeton University Press.

Lemaire, S. (2016). A stringent but critical actualist subjectivism about well-being. Les ateliers de l’éthique, 11(2-3), 133–150.

Moors, A., & De Houwer, J. (2006). Automaticity: A theoretical and conceptual analysis. Psychological Bulletin, 132, 297-326.

Moors, A. (2016). Automaticity: Componential, causal, and mechanistic explanations. Annual Review of Psychology, 67, 263-287.

Moors, A. (2017a). Integration of two skeptical emotion theories: Dimensional appraisal theory and Russell’s psychological construction theory. Psychological Inquiry, 28, 1-19.

Moors, A. (2017b). The integrated theory of emotional behavior follows a radically goal-directed approach. Psychological Inquiry, 28, 68-75.

Moors, A., Boddez, Y., & De Houwer, J. (2017). The power of goal-directed processes in the causation of emotional and other actions. Emotion Review, 9, 310-318.

Moors, A., & Fischer, M. (in press). Demystifying the role of emotion in behavior: Toward a goal-directed account. Cognition & Emotion.

Ryan, R. M., & Deci, E. L. (2001). On happiness and human potentials: A review of research on hedonic and eudaimonic well-being. Annual Review of Psychology, 52(1), 141-166.

Schwabe, L., & Wolf, O. T. (2009). Stress prompts habit behavior in humans. Journal of Neuroscience, 29(22), 7191-7198.

Tiffany, S. T. (1999). Cognitive concepts of craving. Alcohol Research & Health, 23, 215–224.

Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4), 297-323.

Wood, W., & Rünger, D. (2016). Psychology of habit. Annual Review of Psychology, 67, 289–314.

[/expand]


[expand title=”Chandra Sripada:”]

It Is Hard To Explain Weakness Of Will If You Reject The Faculty Of Will

A good place to start in thinking about weakness of will is “Davidson’s theater”, which appears in section two of “How Is Weakness of the Will Possible?”. There we find two contrasting views of mind. In one view, ascribed to Aristotle, Aquinas, and Hare, there are two actors on the stage, Reason and Passion, and in times of temptation, they duke it out. In the other view, ascribed to Plato and Butler, a third actor appears, the Will, and it is up to him “to decide who wins the battle. If the Will is strong, he gives the palm to reason; if he is weak, he may allow pleasure or passion the upper hand.”

In contemporary philosophy, the “Trio” view, i.e., the one that endorses a robust faculty of Will, has receded and now occupies the margins. This is unfortunate because it has two major advantages. First, it is the only view that has the resources to fully capture the phenomenon of weakness of will. Second, the Trio view is true—as a matter of empirical fact, we do have a robust faculty of Will as a central part of our psychology. In the rest of this post, I’ll briefly elaborate on this pair of points.

1. Capturing Weakness of Will Requires a Doubly Independent Will

Start with the claim that only a Will-based psychology can fully capture weakness of will. To see this point more clearly, I need to fill in some features of how the Will relates to the other two parts of the mind. (Switching away from Davidson’s terminology, I refer to these parts as Judgment and Appetite.)

On my version of the Trio view, the faculty of Will needs to be doubly independent. First, it needs to be independent of Judgment. It will listen to Judgment and typically follow it, but importantly it needn’t. This is the decisional aspect of the Will. Second, the Will needs to be independent of Appetite. This means the Will can produce decisions about what to do that diverge from what one’s appetites push one to do. But, of course, it doesn’t stop there: We don’t form a decision contrary to our appetites and then simply hope and pray that our appetites will fall into line, like a fan of Manchester United hoping their team will score. We actively and effortfully do things—specifically we perform attentional and inhibitional mental actions—that block or otherwise modulate our appetites. This is the regulative aspect of the Will.

Suppose we have a Will on the stage that is doubly independent from the other two actors in the preceding ways. Then new plot options open up. Consider this sequence, with which we are all, perhaps unfortunately, intimately familiar: Appetite pushes us towards one course of action. Judgment recommends another. The Will, in its regulative role, can control Appetite, but it doesn’t. Instead, in its decisional role, it “gives the palm” to Appetite. Notice in this scenario, the Will is not overrun by Appetite, which is what happens in compulsion. Rather, the person does what Appetite says even when Judgment opposes because the Will—being weak—decides to let Appetite win. When all of the preceding features are in place, we have a paradigm case of weakness of will, and the Trio model is needed to get all these features in place.

2. The Cognitive Control Research Program Provides an Empirical Vindication of the Faculty of Will

A second big advantage of the Trio view is that it is true. What I mean is that there is strong evidence that we do in fact have a faculty that is doubly independent in just the way the Will is supposed to be. In contemporary cognitive science, this faculty goes by the name cognitive control and it is examined in thousands of studies in neuroscience, psychology, and psychiatry.

Much research into cognitive control consists of careful examination of “conflict tasks”, such as the Stroop task, Go/No Go task, and others. These tasks involve performing certain distinctive intentional mental actions (what I call “control actions”) in order to regulate a variety of spontaneous mental states, including: actions that arise habitually, attention that is grabbed by stimuli, memory items that are automatically retrieved, and thought contents that spontaneously pop into mind. Elsewhere, I fill in a key link: I give a comprehensive account of how self-control directed at complex states such as appetitive desires is related to cognitive control (I argue self-control consists in performing extended skilled sequences of cognitive control; see here).

So much for the regulative aspect of cognitive control. Turn now to the decisional aspect, which in recent years has really taken off as a focus of research. There is a growing consensus that subpersonal cost/benefit calculation plays a central role in decisions to exercise cognitive control (which I call “executive decisions”). One version of the view, called Expected Value of Control (EVC) theory, proposes that there is a set of cognitive routines—sometimes modeled in terms of temporal difference reinforcement learning—that continuously estimate the expected value of exercising cognitive control relative to its expected costs (see Shenhav, Botvinick, and Cohen 2016). Importantly, the idea is not that the person consciously and intentionally sets out to figure out the expected value of control, but rather that these sophisticated calculations occur non-deliberatively “under the hood” and are the basis for one’s executive decisions.
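
To make the shape of such a calculation concrete, here is a minimal sketch in Python. It is purely illustrative: the success-probability function, the payoff values, and the cost parameter are assumptions introduced for the example, not quantities from the published EVC model. The point is only that an “executive decision” can fall out of weighing the expected payoff of regulating an appetite against the intrinsic cost of exerting control.

```python
# Minimal, illustrative sketch of an EVC-style calculation.
# All names and numbers are assumptions for the example, not the published model.
import numpy as np

def p_resist(intensity):
    """Assumed probability that an appetitive urge is successfully
    regulated, increasing with the intensity of control exerted."""
    return 1.0 - np.exp(-2.0 * intensity)

def evc(intensity, value_resist=10.0, value_lapse=2.0, cost_per_unit=4.0):
    """Expected value of exerting `intensity` units of control:
    the expected payoff of the regulated vs. unregulated outcome,
    minus an effort cost that grows with intensity."""
    p = p_resist(intensity)
    expected_payoff = p * value_resist + (1.0 - p) * value_lapse
    return expected_payoff - cost_per_unit * intensity

# The "executive decision" is whichever control intensity maximizes EVC.
candidates = np.linspace(0.0, 2.0, 201)
best = candidates[np.argmax([evc(i) for i in candidates])]
print(f"chosen control intensity: {best:.2f}, EVC: {evc(best):.2f}")
```

On these toy numbers the maximizing intensity is positive, but raise the cost of control or shrink the value of resisting and the maximum shifts to zero intensity: the calculation itself can recommend giving the palm to Appetite.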

EVC theory is important for an account of weakness of will because it helps us see how one’s practical judgments can come apart from one’s decisions. The overall picture is that our minds house at least two quite different ways of rationally aggregating disparate bits of information relevant to the question of what to do. There is a conscious serial way of aggregation that leads to practical judgment. There is also a way that involves EVC calculation, where the underlying calculations occur outside awareness, which leads to executive decisions. I claim that this picture delivers a moderate form of externalism: Executive decisions are tied to practical judgments (because the latter typically serve as informational inputs to the former). But the two are ultimately rooted in different aggregation routines, and they thus can diverge. When they do, the agent acts in a weak-willed way.

3. Conclusion – Why Philosophy Needs to Resurrect the Will

Looking across moral psychology more generally, it is not just accounts of weakness of will that have suffered due to the abandonment of the faculty of Will. Without a Will, it is hard to make sense of strength of will, i.e., our ability to perform intentional actions to defeat our strongest desires (indeed, some philosophers claim that doing this is impossible!). It is also challenging to explain freedom of will, the freedom a person has to decide what to do irrespective of what their desires dictate. The same applies to doxastic will, our ability to intentionally regulate the formation of belief (if we have this kind of ability, which I think we do to a certain extent, then a moderate form of doxastic voluntarism follows). The faculty of Will, it seems, is absolutely everywhere in the kinds of problems that most interest philosophers. We might make more progress if we heeded the empirical evidence and gave the faculty of Will a central place in our theorizing.

[/expand]
[expand title=”Zina B. Ward:”]

Many recent attempts to naturalize weakness of will have pursued what we might call “the partitioning approach”: using distinct mental systems to characterize the phenomenon and explain what’s going on in weak-willed subjects. Although the approach has been around since antiquity, what’s different about recent partitioning accounts is that they rely on mental divisions that are empirically grounded (Levy 2011; Sripada 2010, 2014; Haas 2018). These “new partitioners” draw on research in psychology and the decision sciences to characterize and defend the mental systems they appeal to –– a great improvement over the ad hoc partitioning of Davidson (1982). Even so, I have a few reservations about the partitioning approach, which I’d like to raise here in the hope of sparking a discussion about its strengths and limitations. 

But first, the background: Levy (2011) and Sripada (2014) both offer dual-process accounts of weakness of will. Sripada distinguishes between an emotional motivational system, which produces emotional action-desires, and a deliberative motivational system, which produces practical desires. One’s action is akratic when one’s emotional action-desires and practical desires compete for control of action and the former win out. [1] Sripada situates his account within the dual-process framework, explaining that emotional action-desires are part of System 1 (S1) and practical desires part of System 2 (S2). Levy’s (2011) account is similar, although he understands weakness of will as the unreasonable revision of intentions rather than akrasia (Holton 1999). According to Levy, weakness of will is the result of ego depletion, or the “depletion of an energy source preferentially drawn on by self-control mechanisms” (Levy 2011, 136). Ego depletion causes the weak-willed subject to switch from S2 to S1. Haas (2018) builds her account of weakness of will around the “Multi-System Model of the Mind” (MSM) developed in reinforcement learning and neuroeconomics, which claims there are at least three decision systems that affect behavior: the deliberative system, the hardwired system, and the habitual system. Haas suggests that akrasia occurs when the hardwired or habitual systems are allocated for action selection instead of the deliberative system.
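
To fix ideas, here is one way to picture the kind of allocation these accounts appeal to. The sketch is purely illustrative: the two systems, the reliability-tracking update rule, and the numbers are assumptions loosely modeled on reliability-based arbitration schemes in the reinforcement learning literature, not a reconstruction of any of these authors’ specific proposals. Each system keeps a running estimate of its own predictive accuracy, and whichever system is currently more reliable is allocated control of action selection.

```python
# Illustrative sketch of accuracy-based allocation between two decision
# systems. Labels, update rule, and numbers are assumptions for the
# example, not any particular author's model.

class System:
    def __init__(self, name, learning_rate=0.1):
        self.name = name
        self.accuracy = 0.5   # running estimate of this system's predictive accuracy
        self.lr = learning_rate

    def update(self, prediction_was_correct):
        target = 1.0 if prediction_was_correct else 0.0
        self.accuracy += self.lr * (target - self.accuracy)

def arbitrate(systems):
    """Allocate action selection to the currently most reliable system."""
    return max(systems, key=lambda s: s.accuracy)

habitual = System("habitual")
deliberative = System("deliberative")

# Hypothetical run: the habitual system's predictions keep failing
# (say, the environment changed), so control shifts to deliberation.
for habit_ok, delib_ok in [(True, True), (False, True), (False, True)]:
    habitual.update(habit_ok)
    deliberative.update(delib_ok)
    print("controller:", arbitrate([habitual, deliberative]).name)
```

On a picture like this, a “weak-willed” episode would just be a round on which the habitual (or hardwired) system happens to be the one allocated control; whether that captures what is distinctive about weakness of will is one of the questions raised below.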

The partitioning approach to weakness of will is only as solid as the partitions it relies on. This is where my worries start. Dual-process theory has been criticized in recent years as a “convenient and seductive myth” (Melnikoff & Bargh 2018; see also Osman 2004, Keren & Schul 2009). First, it is subject to what Melnikoff & Bargh dub “the alignment problem”: different features associated with Type 1 and Type 2 processing are not aligned in the way dual-process theories claim. It is not the case, for example, that processing that is automatic is always intuitive, fast, unconscious, and efficient. There is far more mix-and-matching of the features associated with T1 and T2 processing than dual-process theories predict. Moreover, many of those features come in degrees. Processing can be faster or slower, more or less efficient, and so on. Although dichotomizing continuous properties can be scientifically useful, it also leads to a loss of information and raises the worry that the dividing line is arbitrary. Dual-process theories’ dichotomizations seem especially inappropriate given that there are different “subdimensions” of individual processing features (Melnikoff & Bargh 2018). For example, there are at least two senses in which a process can be controllable: it can be stopped after being triggered, or its output can be altered. These types of controllability dissociate. A process may be controllable in the first sense but not the second, or vice versa. This suggests that one cannot use controllability (tout court) to characterize T2 processes. 

For these reasons, I doubt that the dual-process framework provides a good foundation for naturalistic accounts of weakness of will. Dual-process theories are a product of our tendency to construct binary oppositions (Newell 1973) and our commitment to a reason-versus-passion dichotomy. They don’t reflect our true psychological architecture. There is also now reason to be skeptical of ego depletion, on which Levy’s (2011) account is based. Ego depletion has been caught up in the replication crisis, with several recent attempts at preregistered replication failing to find the effect (Hagger et al. 2016, Carruth et al. 2018 [preprint]). 

Even if the mental divisions underlying partitioning accounts of weakness of will are valid, however, a further question arises: do the accounts accurately describe cases of weakness of will? 

The new partitioners suggest that weakness of will occurs when the system responsible for reasoning and judgment is overridden by some other system (for Sripada and Levy, when S1 wins out over S2; for Haas, “any case in which… the hardwired or habitual systems [are allocated] for action selection” [Haas 2018, 15]). These characterizations of weakness of will strike me as too broad. There are situations in which S2 or the deliberative system does not control behavior but there is no weakness of will. For example, imagine that I intend to drive to a friend’s birthday party but find myself inadvertently heading to work when I leave my house because I’m driving on autopilot. My S1 or habitual system is controlling my behavior. But I am being absent-minded, not weak-willed. This shows, I think, that partitioning accounts don’t provide an accurate naturalistic characterization of weakness of will (even if they do explain what’s happening in weak-willed subjects).

Descriptions of weakness of will that rely on mental partitions also have misleading implications about how weakness can be avoided. They seem to suggest that the way to overcome akrasia or the revision of one’s resolutions is to bolster the system responsible for reasoning and judgment: to ensure that S2 or the deliberative system is in control of action. In fact, there is empirical evidence that recruiting non-deliberative capacities is one of the best ways to ensure follow-through on one’s resolutions and action in accordance with one’s judgments. This prevents second thoughts and rationalizations from getting in the way (Holton 2009). If you want to eat vegetarian, for example, you shouldn’t deliberate intensely about all the options when you go out to eat; you should try to limit your deliberation by looking only at the vegetarian dishes on the menu. Gollwitzer & Bargh (2005) give many such examples of “automaticity in goal pursuit,” showing that automatic motivations and implementation intentions can help people achieve their goals. The idea that weakness of will can often be avoided by minimizing the use of one’s reasoning capacities is an insight not naturally accommodated by partitioning accounts.

There is one last potential problem with the partitioning approach that I want to raise for discussion, but not endorse. Philosophers find weakness of will to be “puzzling, defective, or dubiously intelligible” (Stroud 2014). Some have suggested that any account of weakness of will must take care not to deny its irrational character. Let’s call this the “Irrationality Constraint” (IC): an account of weakness of will should not render it either rational or arational (Henden 2004). It’s plausible that partitioning accounts violate IC. By characterizing weakness of will as the product of causal interactions between separate mental systems, they threaten to dissolve the puzzle of weakness of will completely. This is a consequence that Haas seems to embrace, arguing that her account shows that weakness of will is “not a breakdown in the system…It is a simple byproduct of everyday decision making” (Haas 2018, 17). This leads to a deeper question about the status of IC within a naturalistic framework: Is it possible for naturalists to preserve the “puzzle” of weakness of will? And should we even try? These questions remain open, in my view, for partitioners and non-partitioners alike.

Notes

[1] N.B. Sripada (2010, 2014) is interested in willpower and self-control, and only secondarily concerned with weakness of will as the failure of those capacities.

References

Carruth, Nicholas, Jairo Ramos, and Akira Miyake. 2018. “Does Willpower Mindset Really Moderate the Ego-Depletion Effect? A Preregistered Direct Replication of Job, Dweck, and Walton’s (2010) Study 1,” posted October 26. https://psyarxiv.com/8cqpk/. 

Davidson, Donald. 1982. “Paradoxes of Irrationality.” In Problems of Rationality, 169–88. Oxford: Oxford University Press. 

Gollwitzer, Peter M., and John A. Bargh. 2005. “Automaticity in Goal Pursuit.” In Handbook of Competence and Motivation, 624–46. New York: Guilford Press.

Haas, Julia. 2018. “An Empirical Solution to the Puzzle of Weakness of Will.” Synthese, 1–21. 

Hagger, Martin S., Nikos L. D. Chatzisarantis, Hugo Alberts, Calvin Octavianus Anggono, Cédric Batailler, Angela R. Birt, Ralf Brand, et al. 2016. “A Multilab Preregistered Replication of the Ego-Depletion Effect.” Perspectives on Psychological Science 11 (4): 546–73. 

Henden, Edmund. 2004. “Weakness of Will and Divisions of the Mind.” European Journal of Philosophy 12 (2): 199–213.

Holton, Richard. 1999. “Intention and Weakness of Will.” The Journal of Philosophy 96 (5): 241–62. 

———. 2009. Willing, Wanting, Waiting. Oxford: Oxford University Press.

Keren, Gideon, and Yaacov Schul. 2009. “Two Is Not Always Better Than One: A Critical Evaluation of Two-System Theories.” Perspectives on Psychological Science 4 (6): 533–50. 

Levy, Neil. 2011. “Resisting ‘Weakness of the Will.’” Philosophy and Phenomenological Research 82 (1): 134–155.

Melnikoff, David E., and John A. Bargh. 2018. “The Mythical Number Two.” Trends in Cognitive Sciences 22 (4): 280–93. 

Newell, Allen. 1973. “You Can’t Play 20 Questions with Nature and Win: Projective Comments on the Papers of This Symposium.” In Visual Information Processing. New York, NY: Academic Press.

Osman, Magda. 2004. “An Evaluation of Dual-Process Theories of Reasoning.” Psychonomic Bulletin & Review 11 (6): 988–1010. 

Sripada, Chandra Sekhar. 2010. “Philosophical Questions About the Nature of Willpower.” Philosophy Compass 5 (9): 793–805.


[/expand]

55 Comments

  1. I really enjoyed everyone’s contributions — thanks so much to Julia for organizing this! Let me get the ball rolling and start some discussion.

    Neil and Nora: Judgment shift looms large in the accounts of weakness of will you discuss. I tend to feel a bit shortchanged by these accounts because they seem to me to change the subject. In any case, the question still remains what we are to say about non-judgment-shift cases, where the agent acts contrary to an occurrent best judgment; otherwise, we need to show that such cases don’t exist. Nora is clear that hyperbolic discounting needs to be supplemented with extra stuff to account for non-judgment-shift cases of weakness of will (what she calls cases of “instantaneous weakness of will”). I am curious what Neil will say about such cases.

    • Thanks for getting the ball rolling! This reminds me: All of our contributions dealt in some way with empirically-informed approaches to the mechanisms responsible for weakness of will, but of course, there’s a separate empirical line of research into our concept of weakness of will that I don’t believe any of us discussed (e.g. May & Holton 2012, Beebe 2013). I’m persuaded by this x-phi work that we tend to label both cases of acting against an occurrent best judgment (akrasia) and cases of judgment shift (Holton’s “resolution revision”) as “weakness of will.” Chandra, do you agree that both phenomena fall under the folk concept of weakness of will, but think that it’s the former that is most philosophically interesting? I’m also curious if anyone else has thoughts on this (largely distinct) strand of empirical work. (Presumably it supports Neil’s claim that we’re unlikely to be able to provide a unified account of weakness of will.)

      • Julia Haas

        I think that’s a good question about the x-phi approaches to weakness of will.

        I tend to distinguish between 1. lay folk psychological characterizations of weakness of will, i.e., the kind of thing you might think lay people refer to in January when they talk about breaking their New Year’s resolutions, and 2. technical (philosophical) folk psychological characterizations of weakness of will, i.e., the kinds of analyses and explanations you might see in philosophy texts or journal articles, where philosophers try to provide accounts of (1).

        One thing that I’ve often been puzzled about is that x-phi approaches tend to ask about people’s intuitions with respect to the technical, philosophical accounts. But people’s responses seem to suggest that they are really referring to the lay concept that they use in everyday conversation?

      • Nora

        Hi Zina and Julia,
        It seems to me that the folk conception is a family-resemblance concept. Depending on what a given x-phi study examines, it may aim to cover all or just some members of that family.

    • Neil Levy

      Hi Chandra,
      I’m not sure I need to *show* that cases of the sort you (rightly) say I can’t explain don’t actually exist. I’m not sure how I would go about proving that, in any case. But I don’t see any positive reason to believe they do exist. As I say in the OP, I don’t think we have good access to our mental states. Actually, that’s far too weak: I think we have only interpretive access to our mental states. So sure, people say things like “I knew I shouldn’t have been doing that”, but if by that they mean “I had an occurrent judgment with a certain content at the precise time I acted contrary to it” (and not “I had an occurrent judgment with a content like <I am committed to judging this is a bad idea>, or whatever), my response is simply “you’re in no position to know that you had such a judgment.”

      By the way, this may have implications for Zina’s point that we all neglect the X-phi on weakness of the will. My scepticism about our access to our mental states implies (not entails) some degree of scepticism about folk reports that X-phi (typically) systematises.

        • This other thread of the weakness of will issue that Zina raises is interesting. I think it is helpful to keep track of two projects.

          One project is conceptual analysis. We start with [S acts in a weak-willed way iff A] and the goal is to fill in A with something that captures intuitions about cases. That is an important project, and May/Holton and Mele have done super interesting work on this. I follow the debate but don’t take a side, and maybe it is at a draw?

          Another project sees the topic of weakness of will as an invitation to get clearer on the architecture of motivation: How are our minds set up motivationally speaking, and given that set-up, what kinds of dissociations are possible (can we ever act against our strongest desire; does judging best entail wanting most, etc.). This is the project that, as I see it, Socrates, Davidson, Hare, Watson are taking on. For the purposes of this project, weakness of will might as well be a technical term stipulated to mean action that contravenes occurrent judgment, and the goal is to figure out whether motivational architecture permits this to occur. I think the two projects are really very different, and when I invoke empirical findings, it is entirely in the service of addressing the second “motivational architecture” strand of the debate.

          I see Neil as somewhat engaged in both projects. When he says there is no unified account of WoW, he seems to be engaging the first project. When he says acting against one’s best judgment is not possible (putative cases always actually involve judgment shift), he seems to be engaging the second project. I take no stand on his first claim but, of course, strongly disagree with the second…

          • Zina

            I see the two projects as slightly more related: roughly, the conceptual analysis defines the explanandum for the psychological and neuroscientific investigation. This is probably a minor disagreement, but it means that I place less weight on the technical philosophical concept. Since people who undergo judgment shift (in some cases) are described as weak-willed, to understand weakness of will empirically we’ve got to get a grip on judgment shift too.

          • Julia Haas

            Zina, I take it that that’s what’s going on, too. But I’m not sure what grants philosophy this conceptual priority, and I worry that philosophy sometimes gets the conceptual analysis wrong (I’m thinking of discussions of intention here (Uithol, Burnston & Haselager 2014), as well as some stuff on weakness of will). Unfortunately, one consequence of this is that the empirical work, or the empirically-informed philosophical work, then sometimes gets held to some standards that it’s not obvious it actually has to satisfy. I suspect that that’s actually how we got around to the Irrationality Constraint you mention in your post.

          • Hi everyone. What a fascinating and rich set of posts and discussion threads! Thanks for doing this on the blog.

            I appreciate Zina bringing up the relevant x-phi literature. I tend to agree with the general consensus here: there are two projects, only one of which x-phi contributes to directly, but the projects are sometimes related in interesting ways. Let me try to add a bit more about how they can be related.

            Like Zina, I do think understanding the folk concept is sometimes important for constraining or at least understanding philosophical or scientific projects. An example from ethics: Error theorists (like Joyce and Mackie) say our folk conception of moral truths presupposes something that doesn’t exist, so there are no moral facts. Such projects are pretty constrained by the folk notion.

            Like Julia, however, I do think this can get out of hand. We can sometimes give the folk concept more authority than it deserves. An example might come from the free will literature. Sure, maybe our ordinary concept of free will is incompatibilist, but perhaps revisionists (like Manuel Vargas and Shaun Nichols) are right that we should revise the folk concept to be compatibilist (or at least still say we have free will even if the world is deterministic).

            Sometimes, though, I think the desire to carry out a theoretical project (whether philosophical or scientific) goes too far by not paying close enough attention to our ordinary concepts. I suspect this happened with the Cartesian notion of knowledge as requiring certainty. The same might be said of Fake Barn cases in epistemology. Last I checked, there’s robust evidence that people just don’t think one loses knowledge in fake barn country. In that case, it seems diverging too far from the folk notion leads to a theory we don’t care about. (So we lack schmowledge but have knowledge? OK thanks for that.)

            Now (maybe) back to weakness of will. I’m quite sympathetic to the idea that the folk notion of weakness of will is messy (kind of like the folk concept of free will, although that one may be much messier!). Given how mixed the folk concept (or word usage) is, perhaps we shouldn’t let it constrain our theorizing too much. It depends on the project, and it could go too far. Perhaps that’s part of Zina’s critique. There’s a worry that a theoretical project may start out trying to explain something like weakness of will, but then become estranged from the folk notion, abandoning any concern with it. Such accounts, however, may start to look like accounts of decision or of self-control generally, failing to explain a certain sort of decision-making or self-control we started with. Whether that’s a problem depends, to me, on what the aims of the project are.

            Anyway, just some quick thoughts on how the x-phi may be relevant. What I think this discussion demonstrates, however, is that there is a lot of fascinating and important work on this general topic, regardless of its relation to the folk concept (or the relevant x-phi)!

          • To follow up on Julia’s “Unfortunately, one consequence of this is that the empirical work, or the empirically-informed philosophical work, then sometimes gets held to some standards that it’s not obvious it actually has to satisfy. I suspect that that’s actually how we got around to the Irrationality Constraint you mention in your post.”

            I think I agree with this. In several of the contributions, there is mention of constraints that an appropriate account of WoW should satisfy.
            For instance, Chandra argues that an account of WoW should include the notion of will.
            Zina argues that an account of WoW “must take care not to deny its irrational character”.
            @Chandra: For me, the starting point is not *WoW*, but *the phenomena labeled as WoW*. So for me, there is no constraint to include a faculty or separate entity called the will. If it turns out that the phenomena labeled as WoW (explanandum) can be explained without a faculty of will, then the explanandum can be reconstituted (see Bechtel, 2008) and given another, more appropriate, name (e.g., cases of goal conflict in which the goal that wins is not the one that the person consciously judges to be the strongest, etc.).
            @Zina: An account of WoW has no obligation to recognize that cases labeled as WoW *are* irrational, but rather to recognize that cases labeled as WoW *appear* irrational, and to explain why they appear in this way.

      • Neil, I like your response to Chandra about the purported existence of weakness of will without judgment shift. However, two things:

        First, given your skepticism about there being a coherent folk notion, why resist him this much? Couldn’t you just say, sure, sometimes we act contrary to our best judgment at the time of action but that this is somehow non-standard or works via a different mechanism that doesn’t require positing a faculty of will?

        Second, a minor point about your skepticism about self-knowledge. I don’t think the x-phi of weakness of will relies on people being good at reporting their mental states. All it relies on is that people are competent users of the term. Perhaps usage is messy, but that wouldn’t necessarily have anything to do with interpretive access, right?

        • Neil Levy

          Hi Josh,

          My reply to Chandra rests on inference to the best explanation. Given that we don’t seem under any genuine pressure to think that there’s anything that corresponds to acting against one’s own occurrent judgment, it’s no problem for my account that it can’t explain such actions. If there is such a thing to explain, then sure, a disjunctive approach might be explored.

          You’re right, I think: my X-phi scepticism should be finer grained. Different methodologies probe different things. We can certainly probe linguistic use (though I am sceptical that’s best done by anything corresponding to the view that they’re competent users of a term – I think analysis of linguistic corpora is more useful, in general, than asking people to classify various cases – that invites interference from folk theories). In the case of WoW, though, I think it’s likely that the widespread cultural myth of acting against occurrent judgments colours responses. We’re all folk and folk definitions are good starting points. But that’s what they are: starting points. There’s no need to regiment theory development in the light of these starting points.

          • @Neil, I am pretty confused by this statement: “Given that we don’t seem under any genuine pressure to think that there’s anything that corresponds to acting against one’s own occurrent judgment, it’s no problem for my account that it can’t explain such actions.”

            I am not sure what you mean by “judgment”, but a reasonable gloss for the sake of debate might be “a conclusion reached based on deliberation”. That looks to be what philosophers have in mind as well.

            Given this understanding of judgment, why think judgment necessarily constrains decisions? That is, people make decisions all the time, sometimes with antecedent deliberation and sometimes without. The ways they do so are studied extensively in psychology, neuroscience, and elsewhere. Given all that we know about decisions, why think they are sharply constrained by judgment, so that making a decision against an occurrent judgment is not possible? That seems to be a bold claim that would require a lot of argument/evidence. What is the evidence?

          • Neil Levy

            Sure, “a conclusion reached based on deliberation” is a reasonable gloss on “judgment” abstracting from context. But the exchange with Josh provides a context: I mean an occurrent mental state with a content that entails (in a way that is transparent to the person) that the action they perform is one that they all things considered should not perform, on this occasion. In fact, that’s how you use “judgment” when you (rightly) say I can’t explain this kind of case. So I don’t know why you’re engaging in (ahem) judgment-shift now.

          • @Neil, define terms this way:

            Practical Judgment = conclusion reached by deliberation (=serial process of bringing to conscious mind scenarios and trying to assess them) about what one should all things considered do on this particular occasion.

            Decision = process (typically fairly rapid) of subpersonal aggregation of evaluative information that terminates in a person-level making of a decision and formation of an intention.

            When laid out this way, it is clear the inputs to practical judgment and decision are distinct and thus they can potentially diverge. All manner of situational influences, gut feelings, occurrent emotions, good/bad moods, etc. can directly bias decisions. They may or may not affect one’s practical judgments – there is certainly no guarantee they will.

            Suppose I am having a terrible day and crave a cigarette. I deliberate and think about my son and that I need to be around for him when he is older; that settles it right there, I *decide that* I shouldn’t have a cigarette. But the desire for a cigarette is strong. I *decide to* light one up. Looking at how practical judgment and decision are defined above, I am hard pressed to see why you think that a shift in my practical judgment must necessarily have occurred. The best explanation seems to go like this: The desire for a cigarette didn’t influence the conclusion reached by practical judgment, but it did influence my decision, so judgment and decision diverged.

          • Neil Levy

            I’m really not sure what’s going on anymore, Chandra. I took us to be disputing an empirical issue. You claimed (rightly) that there was a conceptually possible class of cases that my view couldn’t explain. I replied that since I didn’t believe that conceptually possible class of cases actually occurs, I wasn’t worried by that. In your last comment, you gave some definitions of “judgment” and “decision”. I just don’t see how these definitions are supposed to get a grip on the empirical issue. Are you saying that these definitions *entail* that human behaviour must instantiate the combination of states I deny? Definitions rarely enable us to settle empirical disputes. They do so only when one of the disputants is confused. I am indeed confused about many things, but can my claim that agents don’t have an all things considered state (I’m not calling it a judgment now, since you no longer like using the word like that) with a certain function and a certain functional role instantiated at the time of particular actions really be nothing more than a conceptual confusion?

          • I too am confused, Neil, and no doubt I am being dense here. Those definitions of “practical judgment” and “decision” are intended to be tentative, with the hope of finding common ground and setting the stage for progress. Suppose we agreed about a few key features of what “practical judgment” is, say: serial, involves conscious evaluation of scenarios, the kind of thing discussed by theorists interested in prospection/mental time travel, etc. And the same for “decision”: directly linked to intention formation, the kind of thing that is well modeled by drift diffusion/evidence accumulation models, etc. If we got that far, then the next steps would be more tractable. For example, we’d have a more precise way to ask whether agents can make decisions that oppose occurrent practical judgments. But if we can’t get the ball rolling with some tentative meanings for key terms, I’m not sure how to move forward…

          • Neil Levy

            Sorry for the late reply, Chandra – and thanks for the clarification. I guess in the end I don’t think the distinction you aim to draw is a helpful one in this territory. I’m not convinced that the distinction tracks anything genuine as you – tentatively – lay it out, because I’m not sure that “practical judgments” exist. But I don’t think it’s what is useful to focus on in any case. We aim to distinguish two things: judging (whatever that is) that one ought to phi and then failing to phi because one has changed one’s mind, and judging that one ought to phi and not phi-ing despite failing to change one’s mind. A further distinction we need is occurrent/dispositional. I think it is relatively uncontroversial that our occurrent judgments can dissociate from our dispositional judgments. With these distinctions in mind, my claim is that weak-willed actions (so-called) never involve failing to phi despite failing to change one’s mind, where the judgment involved is occurrent. I think it is uncontroversial that most theorists hold a conflicting view (though some would not call behaviour of the kind I say does not occur weak-willed – I have in mind Holton). You are quite right that saying this much does not give anything like a full picture of the mental states that are involved as the proximate causes of behavior. And you are right if you are implying I’m not in a position to give such a full picture. But then those I oppose are usually not in a better position. I take the project you’re inviting me to engage in as one that both sides might work on together. But I’m afraid it’s not a project I feel I can contribute to.

      • Nora

        Hi Neil,
        How would you interpret cases where the tempting option is available immediately, for instance, marshmallow experiments? There should be no prediction error/surprise because the cues of reward availability do not change between the person’s initial decision to resist temptation (e.g., pick two marshmallows) and her yielding to it (e.g., eat the one marshmallow).
        Another tangential question: is a version of your 2017 paper in Sinnott-Armstrong and Miller’s edited collection somewhere available online?
        Many thanks in advance!

        • Neil Levy

          Hi Nora,
          I’ve uploaded a draft of the paper at my academia.edu site. I’m always happy to provide pdfs of anything I’ve written to anyone!

          The question about the Mischel-style set up (which is the focus of the paper you mention) is a good one. One thing that Andy Clark has tried to do in his work on the PP framework is to highlight the role of bodily response in minimising prediction error. As well as minimising error by changing our model of the world, we can change the world. Of course, that’s not the scenario here. But highlighting the role of the body allows us to see that prediction error need not be stable across time, even when inputs don’t change. As we change the focus of our attention – which is crucial to the set-up, as I see it – we should expect alterations in predictions (attention just is a mechanism for increasing precision of predictions). Equally, we may entertain different high-level models, in the light of which error may decrease or increase.

          Obviously, that’s a bit hand-wavy – I don’t have a fully worked out model. That’s the direction I would go in to build one.

  2. Julia, Agnes, and Zina: We are all in one way or another engaging with dual-process theory. I generally dislike the Evans/Stanovich paired features approach, and here I agree with Zina. But there are other ways of making the core distinction. The approach I favor draws on the cognitive control research program, and especially computational models of conflict tasks such as the Stroop and Go/No Go tasks. Here in a nutshell is what I take to be going on in the Stroop task: Using working memory-dependent executive processes, the person intentionally regulates a habitual, prepotent word reading response and thereby performs the situationally appropriate ink naming response.

    @Zina: What is your take on my invoking two distinct processes to explain Stroop performance – is this a dual-process model you might be ok with?

    @Agnes: You are skeptical of the notion of stimulus-driven habitual responding in humans (and have done fascinating studies to back up the claim). But I take it you would be fine with my description of the Stroop b/c it is not proposed as an example of habitual *overt behavioral* responding. It is instead a case where habitual processes internally compete with goal-directed processes, and overt responding is typically non-habitual (people produce the ink color response more than 95% of the time). Is that right?

    @Julia: Your notion of accuracy-based arbitration is somewhat related to my notion of EVC-based decisions to exercise cognitive control. There could be a key difference though: I take exercises of cognitive control to be intentional – the person volitionally performs control actions. Is accuracy-based arbitration supposed to be associated with volition?

    • For my part: I’d be OK with this model of Stroop performance because it is task-specific, its partitions aren’t reified, and it doesn’t seem to depend on artificial dichotomization. But I’m not sure how far dual-process models like this will get us with weakness of will. For one thing, I worry that saying that weakness of will occurs when an agent decides not to exercise executive control is more of a restatement of the phenomenon than an explanation of it. For another, the Stroop example illustrates one of the problems I raised above, which is that we can have failures (or perhaps “non-exercises”) of executive control without weakness of will. I take it that someone who accidentally gives the wrong response on the Stroop task wouldn’t be weak-willed. We need some other criterion to pick out genuine cases of weakness of will.

      I also think that we can talk about executive control without conceiving of it acting over Judgment and Appetite. Do you agree, Chandra, that executive control could be part of the story without any kind of mental partitioning?

      • Zina, the model of Stroop performance I set out – the one that posits an executive process that interacts with an automatic process eventually giving rise to the overt response — isn’t intended to be task specific. There is excellent evidence (computational modeling, psychometrics, neuroimaging, manipulations…) that performance in an extraordinary range of cognitive tasks fits this pattern, and the executive processing at issue is shared, or is importantly overlapping, across these tasks. I see this as a central insight of the cognitive control research program.

        At some point, the question of when it is OK to invoke “partitions” becomes semantic. The debate should be about the substantive character of the explanations of task performance across domains, and here I favor a broadly dual-process explanatory approach…

        • Zina

          Some tasks might fit that pattern, but I’m not convinced that the evidence shows it’s as general as you suggest. I think we frequently see such a pattern because it’s natural for us to think in terms of (mental) binaries, as I mentioned above. We shouldn’t be misled by that tendency into believing that all processing has a common structure. Of course, executive control isn’t a task-specific notion, but we can talk about it without committing to the dual-process explanatory approach.

    • Julia Haas

      Thanks for kicking off the comments, @Chandra!

      On my view, accuracy-based arbitration is more akin to a kind of mechanistic allocation than it is to a robust notion of executive control or choice. Consequently, my view doesn’t leave much room for the kind of rich sense of intention or volition that I think philosophers of action are sometimes after. That said, I think accuracy-based arbitration is perfectly consistent with having the feeling of intentional action or volition, and my bet is that that’s probably what’s going on: while decision processes are mechanistic, we nonetheless experience them as involving something more agential.

      I think this last point squares well with Neil’s proposed account and how it relates to addiction. But he may not agree with me on this last point…

      • Neil Levy

        I do agree, Julia. But then I have weird views about mental action: I think the only thing we do that is fully agential, in the sense philosophers of agency have in mind, is attend. Everything else is pretty automatic. Actually, I think the commonalities between our views go deeper. Your account (as I’m sure you recognise) can easily be put in predictive processing terms. Of course that doesn’t make my view a terminological variant of yours: there are substantive claims you make that I don’t. As far as I can see, though, I could accept them. I am committed to there being discrete mechanisms that update appropriately. Were I to accept your view, I would be committed to some further facts about the computations these mechanisms run, and certain commonalities and differences between them.

    • Hi all, sorry for chipping in so late.
      @Chandra: The empirical research on executive control can be interpreted as evidence for the existence of a faculty of the will, but I don’t think any of this evidence forces us to accept such a faculty. The Stroop task can be explained as a competition between 2 goal-directed processes, with one that is at the service of the chronic goal to read the words, and the other at the service of the temporary goal to follow the task instructions and name the colors. People do manage to execute the task in the majority of trials, even if they are more delayed on some than on others, but this could simply be explained by the fact that the goal to name the colors has a higher value for the duration of the experiment? I don’t see why a third actor would be necessary to arbitrate. I am not saying that there cannot be meta-goals. A participant could, for instance, notice that there is a conflict between the 2 earlier-mentioned goals, and this could give rise to the formation of the meta-goal to solve the conflict, but (a) it is not necessary to posit such a meta-goal to explain the data (and not doing so would be more parsimonious and would help you avoid homunculus and infinite regress problems), and (b) even if some participants would pursue such a meta-goal, this is not yet another faculty, it is just another goal with another content.

      • MICHAEL TINTNER

        ” I don’t see why a third actor would be necessary to arbitrate.”

        You’d omit the *conscious self* from your account of decisionmaking? In all the complex “strong-weak will” decisions being considered, the vast majority of humanity would recognize that there is a conscious self taking decisions. (You don’t have one?)

        Biology tells us that the conscious self is subject to the continuously conflicting urges of the sympathetic vs the parasympathetic nervous systems – and the associated upper body vs the lower body – which they direct and which are dominant respectively in more active vs more passive activity. The conflict between activity and passivity, which takes many forms and which this “weak will” debate is almost entirely about, is written into the very structure of the human and animal body.

        The conscious self makes perfect sense as “the third actor” in your term, that chooses between these other two “actors” or, rather, subsystems. It clearly has more or less continuously to decide which way it is going to go – how it is going to decide between the conflicting urges of upper and lower body. Is it going to “pull yourself together”, “get a grip on yourself”, “hold your head up”, “show some backbone” – siding with the active, upper body? Or it going to “let yourself go,” “fall apart”, “lie back”, “relax” – siding with the passive lower body? (“Beneath is all the fiend’s” pace Shaks.) Is it in the process going to opt for more active or more passive activities?

        The conscious self is the driver of the human body, and it is continually being urged by conflicting bodily systems to accelerate/decelerate. Drivers are extremely technologically useful to vehicles. There isn’t a single machine ever invented that can function without one – without a human conscious self to drive it – or that classifies as anything more than a vehicle for a human driver. Computers don’t function without a human computer operator. Even so-called “autonomous” cars don’t work without a human conscious self to drive them.

        What is weird is Chandra’s attempt, so it seems, to reduce the conscious self to a “faculty of will.” That is very confusing. I wouldn’t know where to look for such a faculty. But clearly there is a conscious self that is directing a continuous inner stream of debate and continuously deciding what “I” should do – “willing” one way or another, choosing to be more or less active, and configuring its body in one direction or the other. I think that is what Chandra is really referring to, but he may care to comment.

        • “You’d omit the *conscious self* from your account of decision-making?”

          Indeed, I don’t take the conscious self to be a separate entity that is in the driver’s seat (see Bechtel, 2008, for a persuasive account). The assumption/hypothesis is that each of the steps in the mechanisms that I described in my contribution can take place unconsciously (i.e., detecting a discrepancy between stimuli and goals, activation of the goal to reduce the discrepancy, weighing and selecting of action options as in assimilation, activating an action tendency, action execution, selecting new goals as in accommodation, and reinterpreting the stimulus as in immunization). That said, the steps can become conscious provided that the quality of the representations (with quality being a combination of intensity and distinctiveness) involved in these steps surpasses a certain threshold (see Moors, 2016). The higher the quality of a representation that forms the input of a process, the higher the likelihood that the process will take place and the higher the likelihood that the process (or the content of the representations involved in the process) will become conscious. But a process can also take place when the representations are weak and unconscious provided that there is no (or only very weak) competition from other processes.

          Bechtel, W. (2008). Mental mechanisms: Philosophical perspectives on cognitive neuroscience. Taylor & Francis.
          Moors, A. (2016). Automaticity: Componential, causal, and mechanistic explanations. Annual Review of Psychology, 67, 263-287.

          • MICHAEL TINTNER

            What evidence do you have that the conscious self is not in the driver seat?

            1. Science has never studied the conscious self – i.e., the conscious self’s inner streams of debate. (You don’t disagree that they are there and central to the decisionmaking we are discussing? Or do you think that the simply massive literature about them in drama, humanities and religions and the media is deluded about their existence?) When science began it effectively ceded the conscious self and its debates to Shakespeare (who developed the inner monologue) and a million other dramatists. After 90-odd % of its history, science finally “discovered” consciousness in the 1990s – a failure characterised by Watson as “ridiculous”. It has still failed to discover the conscious self in action – a failure which is not only ridiculous but scientifically obscene – like studying Apple and ignoring Jobs and Tim Cook – and will soon also be recognized as absurd. Certainly it provides zero evidential basis for making any scientific claims about the conscious self’s role in decisionmaking.

            2. There is no machine equivalent of the conscious self and mind – absolutely nothing on which to base the computational models you and others rely on. All those machines have a unicameral not a bicameral mind, and can’t function without a human conscious self driving them. They don’t have inner debates, emotions, conflicts. They don’t have to struggle to decide and do things. They’re reasonably useful paradigms for the unconscious mind, which also has no debates, emotions, conflicts – but obviously structurally, architecturally absurd as comparisons for the conscious self and mind. They also don’t have problems with concentration or application. In fact, computers as we know them cannot explain *any* of the central defining features of the conscious mind.

            3. The advantage of a driver is that they can steer the vehicle in any of infinite directions, including new directions. A driver is creative, with the capacity to strike out in new directions, a capacity that human beings never cease to use throughout their lives. We are continually exploring the world physically and mentally, continually striking out on new paths.

            There is not a single machine in existence that can take a single new step along a single new path. They are all going round and round the same old tracks. That’s why there are no robots freely roving the world – as we do.

            IOW maybe it’s worth rethinking the conscious self and mind and its “will” or willing. You and cog sci are comparing it to “dumb machines”, rational machines which do what they’re told, and follow rules and scripts, and can only do old routine things. The conscious self and mind are the smart, creative machine, which operates without rules or scripts, and works out what to do – tries to find out “rules (or principles) for living” – on the hoof – as all formally recognized creatives, like artists and scientists, demonstrably do.

            You claim, as I pointed out in an earlier post, that we are rational and by implication, rule-following, and normally follow “optimal” behaviours. If you actually try and itemise a single, rational “rule for living” – a single rule of work vs play, or any of the other areas under consideration here – you’ll find on examination that it falls apart and is absurd and in fact impossible. It’s easy to talk in a v. general way about “rationality”/”optimality” here. When you examine actual decisions faced in any detail, you’ll find as I argued earlier, that they’re all actually creative dilemmas and not rational at all. Can you tell me a “rational” way to invest our money on the stock market? No, and nor is there a “rational” way to invest our time and energies in the similarly risky and costly activities of life.

          • I do not argue that we have Olympic rationality either in the practical sense (which means that we take optimal decisions, i.e., decisions that satisfy our goals maximally) or in the theoretical sense (which means that our beliefs are maximally accurate), but I would argue that we often strive to be rational at least in the practical sense: we strive to maximally satisfy our goals, or to find an optimal balance between the satisfaction of our most important goals. Our rationality is obviously constrained by lack of information, computational capacity, opportunity, etc. (Moors, 2017b). I understand the infinite creativity you talk about as the power that humans have to generate new strategies or subgoals to achieve their fundamental goals (as part of the broad strategy of accommodation). There is no reason to think this does not have a place in a mechanistic model. Mechanism does not equal dumb.
            But how do you handle the homunculus problem and the problem of infinite regress (see e.g., Railton, 2017)?

            Railton, P. (2017). At the core of our capacity to act for a reason: The affective system and evaluative model-based learning and control. Emotion Review, 9(4), 335-342.

          • MICHAEL TINTNER

            There is no “optimal balance” – no “right balance of work and play.” The idea is propaganda without substance. You will never specify an optimal balance or rule of life balance. Try it.

            For argument’s sake, let’s say you have a conflict now. Should you form an answer to this post, which may involve some strenuous effort, or should you stop to have a sexual/romantic daydream, or should you surf the internet idly?

            What is the right, optimally balanced thing to do? What rule will determine your choice, will decide how you should distribute your time between theorising, fantasy and information browsing? Decide how many minutes, if any, you should devote to them in the next hour? Or even decide whether it’s the next hour or 90/2/3 minutes, or what?

            And bear in mind that this rule must apply tomorrow, and tomorrow and tomorrow, for the rest of your life, or until such time as another rule (which you will also have to specify, I’m afraid) allows for rule-changing.

            And you’ll have to explain how that rule can deal with all the unknowns of real world existence that are more or less likely to interfere – like a call from your loved one, a colleague barging in, a fire in the building etc. etc. ad infinitum.

            All of this is hard, in fact impossible. There are no rules that can either predict or prescribe for unknown unknowns. Only God and Donald Rumsfeld can do that.

            Like I said, rules for living and optimality (of any kind, however qualified) sound fine in theory, but are impossible in practice. That’s why all those machines you rely on for models can only operate in artificial, structured, protected environments, safe from the unknowns of the real world with which we living creatures have to deal.

          • Realizing that there are boundaries does not prevent us from striving for optimality or satisfaction of as many goals as possible, even if satisfying the few goals that happen to be in our current repertoire or on our radar is the best we will be able to do. Nobody said we need a rule (explicit or implicit), or a single rule forever and ever.

      • Agnes, in my view, Stroop performance needs to be seen as richer than just two goals competing. The key thing about the Stroop is that people engage in specific *mental actions* (central attention actions, motor inhibition actions) to suppress the prepotent word reading response. The role of mental action here is crucial. In a simple value-based decision (e.g., which of two fruits do you want?), evidence accumulates for each option until one reaches a boundary. That is a good model of regular competition between goals/values. What happens in the Stroop and other conflict tasks is quite different, because the person is performing context appropriate skilled mental actions to bias the evolution of the evidence accumulation process in favor of the response favored by the ink color naming goal. Based on this observation, and many others, the best explanation for Stroop performance invokes a distinct cognitive control system with segregated value representations as the source of the top-down control actions.
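
        To make the contrast concrete, here is a toy race/accumulator sketch. It is not a fitted drift diffusion model: the drift rates, noise level, and the simple first-to-the-bound rule are illustrative assumptions. The only point is that a single “control” parameter, which biases accumulation toward the instructed ink color response, changes how often the prepotent word reading response wins the race.

        ```python
        # Toy accumulator sketch of Stroop-like conflict. Parameters and the
        # simple race rule are illustrative assumptions, not fitted to data.
        import random

        def stroop_trial(control=0.5, bound=1.0, noise=0.1, max_steps=1000):
            word, ink = 0.0, 0.0
            for _ in range(max_steps):
                word += 0.02 + random.gauss(0, noise)                   # fast, habitual drift
                ink += 0.01 + 0.02 * control + random.gauss(0, noise)   # slower, control-boosted drift
                if word >= bound or ink >= bound:
                    return "ink" if ink >= word else "word"
            return "timeout"

        def word_error_rate(control, n=2000):
            random.seed(0)
            return sum(stroop_trial(control) == "word" for _ in range(n)) / n

        for c in (0.0, 0.5, 1.0):
            print(f"control={c:.1f}  word-reading errors: {word_error_rate(c):.2%}")
        ```

        With more control the word reading tendency still gets activated on every trial (its drift term never goes away), but it wins the race less often, so overt responding is mostly the instructed ink color response even though the habitual tendency persists.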

        • “The key thing about the Stroop is that people engage in specific *mental actions* (central attention actions, motor inhibition actions) to suppress the prepotent word reading response.”

          Participants could engage in such mental actions, but the point is that if they do, these mental actions are at the service of the meta-goal to solve the conflict between the goal to read the word and the goal to name the color. A meta-goal is just another goal with another content (i.e., it refers to one or more other goals). The mechanism for reaching a meta-goal is not different from the mechanism for reaching a non-meta-goal: detection of a discrepancy leads to the goal to reduce the discrepancy, which, in turn, may lead to the selection of a specific mental action that has a high expectancy of leading to the desired outcome (i.e., the resolution of the conflict). In sum, the notion of mental actions does not, in my view, require positing a separate faculty with a separate mechanism such as the will, but can more parsimoniously be accommodated by positing meta-goals that require the same mechanism as other goals for their fulfillment.

          • Agnes, there is a lot of structure in the Stroop task and other classic conflict tasks that I feel gets lost when we talk about a generic picture in which “two goals compete”, even if the picture is enriched with higher order goals. Here are just two:

            1. The word reading response tendency arises much faster than the ink color naming response, which we know from fast error effects and other data from drift diffusion models.

            2. The word reading response tendency continues to be activated trial after trial, even though the person “knows” it is contextually inappropriate — two hundred trials later, it still shows up and leads to characteristic patterns of errors. Ink color naming is different: if you tell the person to stop saying the ink color and instead do something else, for example say the first letter of the word stimulus, they will shift immediately (and won’t make ink color naming errors).

            The cognitive control theorist’s explanation for both of the preceding effects is that the ink color naming response depends on working memory-dependent cognitive control processes which are slower but much more flexible, while the word reading response does not. On the “generic goal competition” picture, how are effects like these explained?

          • Thanks for the stimulating comments. A few replies.

            1. First of all, I mentioned the possibility of the meta-goal to solve a conflict between two other goals as the basis for mental actions, but there is also an alternative (more parsimonious) solution. The goal to engage in mental actions can itself be considered as a subgoal that is at the service of the goal to perform well on the task and follow the instructions, which has to compete with the chronic goal to read words. In this scenario, there are not 2 goals and 1 meta-goal (as in the previous scenario that I mentioned). There are simply 2 higher-order goals, and the goal to engage in mental actions (e.g., redirecting attention) is just a subgoal or strategy to reach one of them.

            2. Now a reply to your comment that a separate executive controller or faculty of will (which obeys its own laws) is required to explain certain findings in Stroop and other executive control tasks:

            I define control in a minimalist sense as comprised of three ingredients: (a) the presence of a goal about a process or action (e.g., a promoting goal or a counteracting goal), (b) the presence of the state represented in the goal (e.g., the process occurs or the process is counteracted), and (c) the causal relation between “a” and “b” (see Moors & De Houwer, 2006a).
            If the goal to engage in a mental action is activated, and this goal causes achievement of the state represented in the goal, control is a fact. If the goal to engage in a mental action is activated, but the state represented in the goal is not (or only partially) achieved, control is lacking (or less than perfect).

            The goal-directed model only specifies the factors that determine the *choice* of the goal to engage in an action option: the presence of an action option in the action repertoire and the subjective likelihood (expectancy) that the action will lead to a valued outcome. It does NOT specify the factors that determine (a) whether this goal is also *achieved* (whether the action is successfully executed) and (b) whether achievement of this goal leads to the valued outcome. In other words, it does not specify when control (on different levels) is a fact (a toy sketch of this choice-versus-execution distinction appears at the end of this reply).

            If a person has the superordinate goal to do well on the task and detects a discrepancy with this goal (she makes mistakes), she may select the mental action option to ignore the words, which activates the subordinate goal (or action tendency) to ignore the words. Whether or not she successfully ignores the words and whether successfully ignoring the words leads to good performance is determined by its own factors, such as the capacity of the person to ignore words (tied to working memory capacity), environmental factors that make it more or less easy to ignore the words (e.g., the font in which the words are written, whether the words are spatially in the same location as the target colors, etc).

            The empirical data that you cite reflect task performance, and task performance not only depends on the action option chosen by the participant but also on the factors involved in successful execution of the action and the objective likelihood of reaching the superordinate goal that the goal to engage in the action is supposed to fulfill.

            3. You set apart the executive control system as one that is flexible but slow. It is this type of alignment (also discussed by Zina) that I want to avoid. The speed of certain processes (or the time they require to operate) depends on other factors such as repetition, stimulus intensity, the amount of attention available, etc. (see the compensatory view of automaticity in Moors, 2016).
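
            As promised above, a toy sketch of the choice-versus-execution distinction (my own illustration with made-up numbers, not a model from Moors & De Houwer, 2006a): the choice stage picks the action option with the highest product of outcome value and expectancy, while whether the chosen mental action is actually achieved depends on separate factors such as capacity and the environment.

            ```python
            import random

            def choose_action(options):
                """Choice stage: select the action option with the highest expected
                utility, i.e. value of the outcome times the expectancy of reaching it."""
                return max(options, key=lambda o: o["outcome_value"] * o["expectancy"])

            def execute(action, capacity=0.7, environment=0.8):
                """Execution stage: whether the chosen mental action is achieved depends
                on separate factors (e.g. working memory capacity, stimulus layout),
                collapsed here into two invented probabilities; the action's identity
                plays no role in this toy version."""
                return random.random() < capacity * environment

            options = [
                {"name": "ignore the words", "outcome_value": 1.0, "expectancy": 0.8},
                {"name": "read the words", "outcome_value": 0.2, "expectancy": 0.9},
            ]

            chosen = choose_action(options)   # ingredient (a): a goal about an action is present
            achieved = execute(chosen)        # ingredient (b): is the represented state achieved?
            print(chosen["name"], "->", "achieved" if achieved else "not achieved")
            ```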

  3. Neil Levy

    Now a question for Chandra. In your view, the will is independent of both judgment and appetite. But it makes decisions. How? A decision is a weighing of reasons. But judgment has all the reasons, and the will is independent of it. So the will can’t in fact make a decision. Rather, the “decision” seems to consist in nothing but an arbitrary plumping for one option or another. In other words, I worry that your view is really this: we have a faculty of appetite and one of judgment. Sometimes one wins and sometimes the other, and nothing explains who wins when.

  4. @Neil: Forming a practical judgment, in my view, is a process of information gathering that involves intentionally bringing to consciousness a few scenarios and prospects and trying to make assessments of their value. Practical judgment is certainly useful, but like any information gathering process, it has its limitations and patterns of failure. I deny that it contains all our reasons, as you suggest. Our minds are equipped with other ways of tracking reasons, many of which rely on sophisticated iterative statistical routines that operate mostly outside of awareness.

    I understand decisions to be processes of evidence accumulation, along the lines set out in drift diffusion computational models. More specifically, these models provide an account of the underlying subpersonal processing that terminates in the making of a decision. To elaborate a bit, when the evidence for an option accumulates sufficiently so that it crosses a threshold, a person-level action ensues: the person makes a decision to do something and intention formation immediately follows. If decision processes, understood along drift diffusion lines, have multiple inputs, with practical reasoning being just one, then a person can decide to do something that contravenes an occurrent practical judgment. Deciding in this way isn’t arbitrary plumping – it is doing something for a reason!
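
    A rough sketch of what I have in mind (my own toy illustration, with invented inputs and parameters, not a fitted drift diffusion model): the drift rate is a sum of several evaluative inputs, of which practical judgment is only one, so when the other inputs dominate, the resulting decision can contravene the occurrent judgment without being arbitrary.

    ```python
    import random

    def decide(inputs, threshold=2.0, noise=0.8, dt=0.01, max_steps=10000):
        """Single accumulator over the net evidence for option A versus option B.
        'inputs' maps each evidence source to its signed contribution to the
        drift rate: positive favors A, negative favors B. All values invented."""
        drift = sum(inputs.values())
        x = 0.0
        for step in range(max_steps):
            x += drift * dt + random.gauss(0, noise) * dt ** 0.5
            if x >= threshold:
                return "decide to do A", round(step * dt, 2)
            if x <= -threshold:
                return "decide to do B", round(step * dt, 2)
        return "no decision", max_steps * dt

    # Practical judgment favors A (say, working), but other sources of evaluative
    # information favor B strongly enough to carry the decision.
    inputs = {"practical_judgment": +0.6, "cached_habit_value": -0.5, "appetitive_pull": -0.7}
    print(decide(inputs))
    ```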

    • Neil Levy

      The model makes sense, Chandra, but now I worry that it really doesn’t look like it maps well onto anything we’re pretheoretically disposed to call “judgment” and “will”. You seem to say that there are in fact two judgment systems. One is the one that we for some reason (phenomenology?) identify with judgment; the other isn’t. Why call that one will? It’s simply a rival system that may or may not win a competition in a winner-take-all architecture.

      • The decision component I am drawing attention to that is independent of practical judgment isn’t one rival in a competition – that is the wrong way to think of it. It is rather the *substrate* in which the competition occurs and a winner crowned. On the picture I am putting forward, practical judgment and other sources of evaluative information enter as inputs into decisional processes that, via the evolution of evidence accumulation, terminate in the making of a decision and subsequent intention formation. The picture fits nicely with commonsense, which recognizes a clear distinction between deciding *that* something is best (practical judgment) and deciding *to* do it (decision proper). The twist I am adding is to draw from the research on cognitive control and drift diffusion models of decision to say that this distinction between practical judgment and decision is also best supported by the empirical evidence.

  5. Zina

    I had a few clarificatory questions for @Neil about his account of judgment shift, mostly related to the “model updating” process. Could you say more about how this works? Isn’t there only error – i.e. mismatch between cue and reward – once the reward has passed? So how/why does the system update its model of the world prior to consumption or non-consumption of the reward (in cases of one-off weakness of will)? (Apologies if these are silly questions — I’m not familiar with the judgment shift account of addiction!)

  6. MICHAEL TINTNER

    The puzzle can be fairly simply resolved: it rests on assumptions, AFAIK universally held but false, about the nature of the human machine and the problems it faces.

    The assumptions are that humans are rational agents/animals/machines facing rational decisions which have rational, single “right”/“optimal”/“maximal value/utility” solutions. “Rational” means “lawful, formulaically scripted”/algorithmically programmed, like all machines to date and all logical & mathematical systems. The puzzle then arises because “weak will” decisions are deemed “irrational”. So solutions to the puzzle have to be sought in some kind of bug or flaw in the human system producing the irrationality. In general, (analytical) philosophy and science accept these rational assumptions.

    The truth is that humans are primarily creative agents/machines facing creative decisions which have creative, potentially infinite solutions, where none are “right” and many are more or less equally attractive. In addition, there are potentially infinite methods for thinking about these problems, and infinite criteria which can be applied to them. The decisions faced therefore are dilemmas or multilemmas. The dilemmas are heightened because they are real-world decisions by contrast with rational, logical/mathematical/algorithmic decisions which are artificial-world decisions – and in the real world, unlike the artificial world, it is impossible to assess the rewards, risks and costs of these decisions other than uncertainly. The dilemmas are further heightened because as creative systems, humans are also *necessarily* conflicted emotionally and physically about what to do (whereas rational systems are unemotional) – and have more or less painful/pleasurable/satisfying feelings about the options, which in turn become factors in the decisionmaking. All this is the stuff of “the human drama”, which is continuous throughout our lives, and is more or less entirely (and madly) missing from philosophy and science – but rightly central to the dramatic arts, humanities and religions.

    Faced with these dilemmas, both “strong will” and “weak will” options are often more or less equally reasonable – seeming to be equally profitable, when the rewards, risks and costs are considered. And, as the humanities and religions effectively tell us, there is more or less no area of our lives where humans do not, as a result, oscillate all their lives between now “strong”, now “weak” will decisions – between opting for now industry, now idleness, now production, now consumption, now asceticism, now sensuality, now altruism, now egocentricity, etc. – opting for now this, now that side of the dilemmas. There are always powerful reasons for both sides.

    Let’s look at the decisions in more detail. First, formally. Philosophy and science would like to think that they are rational, and therefore merely more sophisticated versions of classic mathematical/computational problems, like, say, “what is 234 x 432?”, where there is one right solution, one right *collocation* of numbers – and you are rule-bound to arrange the same old basic set of variables in the same old ways.

    In fact, to continue formally, they are creative problems, “arts” problems, like “Put together a *collage* of 234 (and possibly other) numbers”, where the custom is to arrange ever new/old mixes of variables in ever new/old ways. Each collage will be different as well as similar to the last. In collages, there are no rules about what are the basic elements or numbers, how they are drawn, arranged or structured, and which new numbers or elements (like a Roman or other system numeral? or a face?) are or are not included. In fact, there are potentially infinite possibilities for every part of the problem.

    The typical “strong/weak will” problems cited by philosophers are also creative problems – involving choices between activity/passivity, production/consumption, like: “should I study or watch Netflix?” These are not rational problems that have neat solutions. There are potentially infinite ways you can distribute your time, efforts and faculties between the two activities, including switching to altogether different and even new activities, like cleaning up and/or organizing your study, and even finding a Netflix program that may have some study value/relevance instead. The choice is in general a dilemma between a higher reward-risk-cost option and a lower reward-risk-cost option. Studying/activity may be successful and make you feel more satisfied in the end, but it may also in the end be more or less of a failure, and involve a good deal of effort and discomfort. Watching Netflix may result in more or less assured immediate pleasure or at least relaxation, with no effort or discomfort, but the pleasure may well recede, and in the end it may produce guilt and dissatisfaction that you haven’t studied and achieved more this evening. There are pros and cons on both sides. Yes, in general, a person may often think/philosophise that they should definitely be more active and productive. But at particular moments, especially moments of decision when the discomfort and pain of work are more apparent and felt more keenly, the same person may think differently, and wonder whether they wouldn’t be better off leaving work till tomorrow, for all kinds of reasons. After all, they may think, we need to play as well as work, relax as well as be tense and serious, and so on.

    If we’re rational agents/animals/machines, there is indeed a philosophical puzzle. If we’re mainly creative agents, (and we are), there is no puzzle.

    P.S. A crucial factor here is that we have no systematic culture of creativity whatsoever, only of rationality. But that can, and will, soon be remedied.

  7. @Julia: You wrote that Schroeder (2004) equates desire with reward and value, but that you equate desire with the intrinsic reward value of a given stimulus and value with the total expected future reward value associated with a given state.
    If I understood this correctly, this means that your definition of value includes expectancy. I think it is useful to separate value and expectancy. In expectancy-value theories, the expected utility (which we can also call the expected value) of a response option is determined by an estimation of the value of the outcome of the response (Ov) multiplied by the expectancy that the response will lead to this outcome (R-O expectancy). The value of the outcome corresponds to ‘desire’; the R-O expectancy is a type of ‘belief’ (even if a very thin notion of belief). Crucial in the above is that the expected utility/value of a response is distinguished from the value of an outcome. The value of a response equals the product of the value of the outcome and the R-O expectancy.
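
    A minimal numerical illustration of this separation (the numbers are just made up): the expected utility of a response is the product of the outcome value and the R-O expectancy, so the two components can be manipulated independently.

    ```python
    def expected_utility(outcome_value, ro_expectancy):
        """Expectancy-value rule: EU(response) = value of the outcome (Ov)
        times the expectancy that the response leads to that outcome."""
        return outcome_value * ro_expectancy

    # Same outcome value (e.g. 10 euro), different expectancies:
    print(expected_utility(10, 0.8))  # 8.0
    print(expected_utility(10, 0.2))  # 2.0
    ```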

    • Julia Haas

      @Agnes: Thanks for your question! Yes, I define value as the total, expected, future reward associated with a given state, i.e., such that value roughly just is expected reward. I like this definition because I take it to be pretty consistent with the computational models and at least some of the empirical evidence, making it something of an empirical claim about how the mind attributes and assesses value in the world. Can you clarify a bit more as to why you think it is better to separate out value and expectancy?
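
      For concreteness, here is a minimal sketch (illustrative only, not a claim about any particular model in the literature) of the reinforcement learning sense of ‘value’ I have in mind: the value of a state is the expected, discounted sum of future rewards, which can be estimated by averaging sampled returns.

      ```python
      import random

      def estimate_state_value(sample_episode, n_episodes=1000, gamma=0.95):
          """Monte Carlo estimate of V(s): the expected discounted sum of future
          rewards from a state. 'sample_episode' returns the list of rewards
          received after visiting the state; gamma is the discount factor."""
          total = 0.0
          for _ in range(n_episodes):
              rewards = sample_episode()
              total += sum(gamma ** t * r for t, r in enumerate(rewards))
          return total / n_episodes

      # Toy episode: a reward of 10 arrives after a random delay, so the value of
      # the state bundles together how large the reward is and how soon it comes.
      episode = lambda: [0.0] * random.randint(0, 5) + [10.0]
      print(estimate_state_value(episode))
      ```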

      • @Julia: The value of an O can be manipulated independently of the R-O expectancy, that is all I wanted to say. I can expect to receive a reward of 10 euro with a subjective likelihood of 80% for instance. If we use the terms value and expected value interchangeably, we run the risk of overlooking the importance of expectancy – as is the case in several lines of research that claim to have a goal-directed approach.

        • Julia Haas

          @Agnes, I’m sorry, I missed your last responses! Yes, I agree, expectancy can get overlooked. I think it depends on how much you can rely on the definitions.

  8. @Julia, @Zina: I agree with Zina that dual (or multiple) process models fall prey to the alignment or mapping problem (Moors & De Houwer, 2006b; Moors, 2014; see also Keren & Schul, 2009; Melnikoff & Bargh, 2018a, 2018b). In previous writings, I have argued that dichotomies (or polychotomies) can best be understood and distinguished if we place them in a mechanistic, levels-of-analysis framework in which mental processes are mechanisms in which representations are operated on by operations under certain conditions (see e.g., Bechtel, 2008; Craver & Tabery, 2015). Within such a framework, dichotomies (or polychotomies) can be based on (a) the content of representations (e.g., stimulus-driven/goal-directed), (b) the format of representations (e.g., verbal-like/image-like), (c) the types of operations that operate on the representations (e.g., associative/rule-based), (d) the conditions under which processes operate (e.g., automatic/nonautomatic), and (e) the routes for installing the representations and/or operations (e.g., hardwired/learned via overtrained operant conditioning/learned via moderate operant conditioning). As also pointed out by Zina, traditional dual (or multiple) process models tend to map the various dichotomies (or polychotomies) onto each other, so that they qualify as dual (or multiple) system models. I also agree that several of the dichotomies are not binary but gradual (e.g., automatic/nonautomatic) and that some dichotomies are even elusive (e.g., associative/rule-based). This complicates attempts to examine the empirical status of these mappings.

    @ Julia: Would you agree that when you characterize the Pavlovian system as hardwired, the habitual system as model-free, and the deliberative system as model-based, you fall prey to this alignment problem? And if so, what are the implications for your typology of weaknesses of will?

    Let me explain this in a bit more detail:
    1. The characterization of the habitual system as model-free is partly in line with the common mapping of the habitual/goal-directed dichotomy onto the model-free/model-based dichotomy, but recent voices argue against such a mapping. For instance, in a recent review, Miller, Ludvig, Pezzulo, and Shenhav (2019) mention several arguments for treating model-based and model-free as subclasses of a goal-directed system, separated from the habitual system. Note, however, that while Miller et al. (2019) try to avoid mapping some dichotomies (e.g., they separate habitual/goal-directed from model-based/model-free), they again fall prey to mapping others (e.g., they map habitual/goal-directed onto automatic/nonautomatic).

    2. The characterization of the deliberative system as model-based smells like the mapping of the non-automatic/automatic dichotomy (if deliberate is understood as consciously intentional) onto the model-free/model-based dichotomy. Only if we resist such an a priori mapping does it make sense to examine whether model-based processing requires the conscious intention to engage in this processing.

    3. The characterization of the Pavlovian system as hard-wired seems a bit strange to me, given that the term Pavlovian is generally considered to refer to a learning procedure. First, I would not speak of a Pavlovian ‘system’, to avoid a priori alignment of a Pavlovian learning procedure (in which a stimulus S1 that does not initially elicit a response is paired with a stimulus S2 that already elicits a response R2), leading to a Pavlovian conditioning effect (i.e., presentations of S1 alone lead to a response R1 that is similar, anticipatory, opponent to, or just different from R2), with a single underlying mechanism. Second, there is fair consensus that a Pavlovian learning procedure installs an [S1–S2] relation, but this relation is not in itself sufficient to cause R1, and there is no consensus about the mechanism that does cause R1 (see Moors, 2017b, for a list of options). In other words, the Pavlovian ‘system’ does not by itself provide a mechanism for behavior causation. It is often implicitly assumed (but not empirically verified) that S2 elicits R2 via an innate S-R link, but it could also be mediated by an S:R-O link. If S1 elicits R1 via its link with S2, R1 could thus also be elicited either by an S-R link or an S:R-O link.

    Keren, G., & Schul, Y. (2009). Two is not always better than one: A critical evaluation of two-system theories. Perspectives on Psychological Science, 4(6), 533-550.
    Melnikoff, D. E., & Bargh, J. A. (2018a). The insidious number two. Trends in Cognitive Sciences, 22(4), 668-669.
    Melnikoff, D. E., & Bargh, J. A. (2018b). The mythical number two. Trends in Cognitive Sciences, 22(4), 280-293.
    Miller, K. J., Ludvig, E. A., Pezzulo, G., & Shenhav, A. (2019). Re-aligning models of habitual and goal-directed decision-making. In R. Morris, A. Bornstein, & A. Shenhav (Eds.), Goal-directed decision making: Computations and neural circuits. Elsevier.
    Moors, A. (2017b). The integrated theory of emotional behavior follows a radically goal-directed approach. Psychological Inquiry, 28, 68-75.

    • Julia Haas

      @Agnes: A couple of things to say in response to the mapping problem issue.
      1) I agree with both you and @Zina that a multi-system view “is only as solid as the partitions it relies on.” There are also good reasons to worry that the partitions put forward by reinforcement learning approaches, i.e., the partitions I take on board in my view about weakness of will, will turn out not to ‘carve nature at her joints.’ I’m ok with that.
      2) That said, I think there is something to the following ideas: that there is a process of assigning value; that there are certain formal methods for assigning value that are epistemically and practically accurate and efficient; and that there is some reason to hope that the mind adopts, approximates or mimics some of these accurate, efficient formal methods. This leads me to think that, even if the current RL partitions turn out to be quite wrong, they are nonetheless on the right track, and will lead us to a better understanding of this aspect of the mind.
      3) I don’t think there is much in my view that is connected to automatic or non-automatic processing in the technical senses of those terms, so I don’t think I am subject to the mapping problem in the way that you characterize it in your papers (if I am understanding the papers correctly). Instead, as I mention in (2), each of the multiple systems is better described as an informal characterization of a formal algorithm, with a label attached for ease of use. However, I do agree that as soon as one attaches philosophical labels to things, one is open to possible confusion, including here, and that a lot of my terminology overlaps with other uses in the area.

  9. Another one for @Julia: Would you agree that Davidson’s classic example lends itself to an alternative interpretation? Why would getting out of bed to brush one’s teeth have to be driven by the habitual system? I would be more inclined to see it as a case of two competing goal-directed processes, one at the service of the short-term hedonic goal to stay snug in one’s bed and the other at the service of the long-term goal to avoid tooth decay. And in this case, the latter process wins, and the behavior is labeled as strong-willed.

    • Julia Haas

      @Agnes: I take this to be a terminological issue. To try to clarify, let me start with the terminology from reinforcement learning and work forward from there, i.e., let me start with the model-based and model-free systems. Model-based learning refers to a summative algorithm; it is sometimes described as goal-directed learning; and for philosophical audiences, I often call the model-based system the deliberative system to try and make things clear. *But this last move is actually a bit misleading on my part when it comes to ‘goal-directedness’,* because, as you point out, both model-based and model-free learning are instrumental, i.e., they pertain to some goal. What they mainly differ on is the representation of that goal.
      This means that I would say the following about the Davidson example: if Davidson gets out of bed, his action is certainly instrumental (and so perhaps ‘goal-directed’ in your sense), but not goal-directed in my sense, as I see this as likely a model-free action. Of course, it is possible that Davidson can explicitly represent the decision matrix and decide to get out of bed, but that doesn’t seem to fit with his description. So I think that in that case, getting up from bed has previously been cached as a positive state-action pair, and that is what Davidson does, even though, on balance, it’s not the best thing for him to do in that moment.
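
      To make the terminological contrast concrete, here is a toy sketch (my own illustration with invented values, not anything Julia or Davidson commits to): a model-free choice consults only cached state-action values, while a model-based choice explicitly represents the outcomes and their current values at decision time.

      ```python
      # Model-free: only cached worths of state-action pairs are consulted;
      # no outcome is represented at the moment of choice. All values invented.
      cached_q = {("in_bed", "get_up"): 0.7, ("in_bed", "stay"): 0.5}

      def model_free_choice(state, actions):
          return max(actions, key=lambda a: cached_q[(state, a)])

      # Model-based: a one-step model of outcomes and their current values is
      # explicitly rolled out at decision time.
      transition = {("in_bed", "get_up"): "teeth_brushed", ("in_bed", "stay"): "warm_in_bed"}
      outcome_value = {"teeth_brushed": 0.6, "warm_in_bed": 0.8}

      def model_based_choice(state, actions):
          return max(actions, key=lambda a: outcome_value[transition[(state, a)]])

      actions = ["get_up", "stay"]
      print("model-free:", model_free_choice("in_bed", actions))    # get_up (cached pair wins)
      print("model-based:", model_based_choice("in_bed", actions))  # stay (currently judged better)
      ```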

      • @Julia: By ‘goal-directed’, I also mean that the valued outcome has to be represented and activated (together with the response-outcome expectancy). So it is not enough that it was represented or activated in the past. This means that I do not side with the terminology (used by some) to consider both goal-directed and habitual or stimulus-driven processes as instrumental. This being said, the representation of the valued outcome does not have to be conscious.
        If Davidson gets out of bed despite judging it is best to stay in bed, but duty gets the better of him, fulfilling one’s duty is a goal and can be represented – even if perhaps not consciously in this case.
        Thought experiments are tricky in that an author can claim to know what the content of a protagonist’s representations is. This luxury we don’t have in real-life cases and experiments …

        • Julia Haas

          @Agnes, yes, I think our disagreement is mostly terminological. I tend to go with modified versions of RL definitions, but I agree that this poses its own challenges!
