Let me start with a bit more detail on the structure of the Extended Theory of Rationality (ETR). Suppose I am intentionally baking a cake. According to ETR, this action is an end that I am pursuing, and thus the principle of instrumental reasoning enjoins me to pursue sufficient means. The pursuits of various means for the sake of the end of baking a cake are thus manifestations of my instrumental rational powers. As anyone who has ever baked a cake can tell you, baking a cake is an action that takes time. The means to the end of baking a cake are also further extended actions. But baking a cake is also what I call a “gappy action”; not everything I do in the entire interval is a means for baking the cake. I might turn on the oven and then stop to do something else, then whip the eggs, stop to listen to the radio for a minute (assume throughout that my stand mixer, necessary for making the cake, drowns out the sound of the radio), and then measure the flour. The diagram of my baking the cake might look something like this:
[Turn on the oven] → [Check the cat *(shaded)*] → [Whip the eggs] → [Listen to the radio *(shaded)*] → [Measure the flour]
Of course, the actions I take as means are themselves extended and they themselves could be gappy. Underneath our “Whipping eggs” cell, we could have “grasp the whisk”, “whisk the eggs”, “check the cat again” (shaded), “return to whisking”, and so forth.
The shaded cells represent the actions that are performed while I am baking the cake but not for the sake of baking the cake. However, they are also partially “controlled” by the end of baking the cake. I act irrationally if I perform an action during the gaps that is incompatible with my baking the cake; that’s why if you call me and ask me to help you move, I’ll say “sorry, I can’t; I am baking a cake”. For the same reason, I cannot listen to the radio for too long; if I do, the whipped eggs will turn to mush, or I will need to leave to go to work, or I’ll eventually die of old age. So how long can I listen to the radio for? Well, my end of baking the cake is indeterminate in many ways: for instance, it is left undetermined how tasty it needs to be, or how soon it needs to be ready. It seems plausible that there is no exact moment such that both (i) if I continue listening to the radio for even one more millisecond there’ll be no acceptable completion of the cake, and (ii) if I stop at that moment, I’ll be able to realize my end properly. In fact, if I enjoy listening to the radio, it might be that at any particular moment I prefer to keep listening to the radio rather than continue the baking of my cake. Given that going back to baking a millisecond later will make no difference to my cake, it seems that I prefer to listen to the radio for a millisecond more. Yet, if I keep this pattern going, I’ll end up not baking my cake. So, when should I stop listening to the radio?
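The structure can be caricatured in a few lines of code. This is only a toy sketch, and every number in it (the acceptable window, the cap) is invented for illustration: at each moment the agent locally prefers one more millisecond of radio, and an agent who always acts on that local preference overshoots every acceptable stopping point.

```python
# Toy model of the "one more millisecond" pattern (all numbers invented).
# Stopping anywhere in the acceptable window realizes the end of baking
# an acceptable cake; there is no uniquely best point inside that window.
ACCEPTABLE_WINDOW = range(0, 10_000)  # moments at which stopping still leaves time to bake

def one_more_moment_preferred(t):
    # One more millisecond of radio never perceptibly harms the cake,
    # so at every moment it is the locally preferred option.
    return True

t = 0
while one_more_moment_preferred(t) and t <= 20_000:  # cap so the sketch halts
    t += 1

print(t in ACCEPTABLE_WINDOW)  # False: always taking the local best forfeits the end
```

The point of the sketch is purely structural: each iteration of the loop is individually unobjectionable, but the policy of always taking the locally preferred step guarantees that no permission to stop is ever exercised.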
This pattern is ubiquitous. Just to give another example, next time you are wasting time on Twitter (or some other website), planning to get back to work soon, ask yourself “will it make a difference to my professional life if I read just one more tweet?”. The answer is invariably “no”. Yet, as we know all too well, we can easily waste the day online if we keep going.
It is tempting to say: “there must be a moment at which it is ideal to stop; the moment at which I’ll have done the maximum amount of radio listening without compromising my cake”, but I argue in the book that this is an illusion; the theory of instrumental rationality cannot pick out an exact point. In a nutshell, there are various points at which I will clearly have left myself enough time to bake an acceptable cake and will clearly have done an acceptable amount of radio listening. If I stopped at any of these points I acted rationally, and if I stopped at any point at which I clearly could no longer bake an acceptable cake, or at which I cut off my radio listening clearly too soon, then I manifested irrationality. Since there is no such exact last moment, it would be a gratuitous demand of a theory of instrumental rationality to say that I must stop at a specific point. On the other hand, it would also be self-defeating if the theory said that I must keep listening to the radio as long as this is my most preferred alternative. Thus the principle of instrumental reasoning must issue:
- (a) permissions not to choose a most preferred alternative in order to pursue an indeterminate end.
- (b) requirements to exercise some of these permissions.
Anything more would be a demand to pursue something beyond the sufficient means to my end; anything less would make it impossible to pursue indeterminate ends. Once we notice this general structure of the rational pursuit of extended indeterminate ends, a number of consequences follow. First is the NONSUPERVENIENCE THESIS mentioned in the last post:
The rationality of an agent through a time interval t1 to tn does not supervene on the rationality of the agent at each moment between t1 and tn.
Since there is no “last moment” in which I can exercise a permission to stop listening (given the indeterminate nature of my end of baking a cake), I could always keep failing to exercise these permissions until it’s clearly too late to bake a cake. At each momentary snapshot in the interval, I would have acted rationally, and yet I would not have acted rationally throughout the interval.
Next, we get a vindication of “satisficing”. In pursuing multiple indeterminate ends, the agent often must be guided by the pursuit of “enough” of any given end (enough money, enough professional success, a good enough cake, enough fun). Satisficing is a rational ideal for us, not because of our limited cognitive capacities, but because given the structure of indeterminate ends, maximizing is literally impossible. In our cake-baking vignette, there is no best combination of baking and listening to the radio; I could always listen to the radio for one more millisecond.
Finally, future-directed intentions turn out to be dispensable. What philosophers such as Bratman take to be characteristic of our planning agency, turn out to be a much more general feature of the pursuit of any action extended through time (and thus of the pursuit of any action). The rational requirements that supposedly apply specifically to future-directed intentions are either spurious or an immediate consequence of the principle of instrumental reasoning applied to extended agency.
At this point you might be tempted to say that this is all wrong-headed: “there must be a last moment at which I can stop listening to the radio without compromising my baking, and decision theory gets it right that I maximize utility (and thus act rationally) only if I stop at this point”. In the book, I argue against this thought by focusing on a particularly sharp instance of this general structure: Quinn’s puzzle of the self-torturer. The self-torturer (ST) is given the following series of choices: for $100,000, a weird scientist will permanently attach a device to ST’s body that gives her electric shocks of varying degrees of intensity. The machine has many settings corresponding to increasingly more powerful shocks. The settings move very gradually (but irreversibly): adjacent settings are (nearly) indistinguishable to ST, but very high settings deliver extremely intense pain. ST is paid $100,000 every time she moves up a setting. Whichever setting she’s at, ST seems to have compelling reason to move on to the next one; after all, she can barely (if at all) notice any difference in pain level, but she pockets an extra $100,000. Yet it cannot be rational for her to keep moving up the settings indefinitely. After all, at the higher settings, she would be in agony and would gladly return all her earnings (and probably pay much extra) to have the device removed. When should ST stop? For decision theory, there must be a last setting sn such that stopping at sn is permissible, but stopping after this point is not. I argue that this is an extremely implausible conclusion.
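The self-torturer’s predicament has the same shape as the radio case, and it too can be caricatured numerically. All the numbers below are invented for illustration (Quinn’s puzzle does not fix them): pain rises per setting by an amount below ST’s perception threshold, so every individual move up looks like free money, yet ten thousand such moves add up to agony.

```python
# Toy rendering of the self-torturer's choice structure (all numbers invented).
PAIN_STEP = 0.009   # pain added per setting; below the perception threshold (assumption)
THRESHOLD = 0.01    # smallest pain difference ST can notice (assumption)
PAYMENT = 100_000   # dollars earned per setting
SETTINGS = 10_000   # number of settings on the device (assumption)
AGONY = 50.0        # pain level ST would pay anything to escape (assumption)

# Locally, every single move up is attractive: its pain cost is imperceptible.
each_move_imperceptible = all(PAIN_STEP < THRESHOLD for _ in range(SETTINGS))

# Globally, the end state is catastrophic despite the earnings.
final_pain = SETTINGS * PAIN_STEP
final_earnings = SETTINGS * PAYMENT  # money she would gladly return at the top setting

print(each_move_imperceptible)  # True: no single step is noticeable
print(final_pain > AGONY)       # True: the end state is agony anyway
```

The sketch shows why no appeal to a “last rational setting” falls out of the numbers: the per-step comparison always favors moving up, so any cutoff has to come from somewhere other than the pairwise preferences themselves.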
Although the argument is complex, the central problem is that decision theory cannot preserve a plausible constraint on any solution to the puzzle; what I call “nonsegmentation”. In a nutshell, nonsegmentation says that in a one-shot version of the puzzle, I must (or am at least permitted to) accept the money. Suppose that due to my back pain I am already at a pain level equivalent to sn. I am now offered $100,000 to be part of a study testing a cosmetic product that will move me to a pain level equivalent to sn+1. I cannot tell the difference between these two pain levels, and I was really looking forward to being able to afford a new kitchen renovation. It seems completely unwarranted to say that it would be irrational of me to accept the money, but this is what decision theory is committed to. On the other hand, ETR has no problem explaining why nonsegmentation holds. In the original puzzle, I can exercise the permission in (a) above because, to use the language of the book, my end of a relatively pain-free life is implicated in the series of choices; however, the pursuit of money in the one-shot case does not encroach on the pursuit of the better anesthetized life.
So far I have been accentuating the positive for ETR. But arguably not all is so rosy for the theory. ETR seems so far to have no place for comparative attitudes, and thus, arguably, no place for acting under risk. But decision theory and other contemporary conceptions of instrumental reasoning shine in exactly these contexts. It is time to move the discussion to their home field. We’ll do this in our next installment.
 I am not one of them; I don’t even like cakes all that much.
 Or if there’s such a moment, I have no way of knowing it.
 Wait, how did preferences get in? According to ETR, preferences cannot be the basic given attitudes. But my ends may generate preference orderings. How? Well, you’ll need to wait for the next installment (sneak preview: the relevant preferences here will be instances of what I call “Pareto preferences”).
 And of course, there might be borderline cases in which it is not determined (knowable) whether I stopped at an acceptable point.
 Of course, I agree with philosophers such as Bratman that an account of rational agency that looks only into our desires and preferences will not do justice to extended agency; one cannot overestimate the importance of Bratman’s (and others’) work in pointing this out. However, I do not agree that this problem can be fixed by adding to the theory another narrow mental state such as intention.
 Are they then different pain levels? I am assuming they are, but we could make the same point in a more longwinded manner, by just focusing on the changes to the physical causes of the pain.
 Assuming the same wealth level and no changes in preference ordering.
 If you think that I’ve been playing tricks with vague predicates, I should first assure you that I have not. But I also argue in the book that you can create a very similar structure by relying on repeated gambles instead.