Motivation

I develop a scheme for the explanation of rational action. I start from a scheme that may be attributed to Thomas Nagel in The Possibility of Altruism, and develop it step by step to arrive at a sharper and more accurate scheme. The development includes a progressive refinement of the notion of motivation. I end by explaining the role of reasoning within the scheme.


The Possibility of Altruism
Thomas Nagel's short book The Possibility of Altruism was a turning point in moral philosophy. It was influential in many ways. For one, it put the notion of a reason at the centre of its argument, and it made reasons a central topic for moral philosophers. Previously, they had principally been a topic within the philosophy of action, and Nagel brought into moral philosophy some of the concerns of philosophers of action. 1 Philosophers of action were particularly interested in the notion of acting for a reason, and we shall see that this notion guides Nagel too.
Nagel's book aims to explain how altruism is possible. Altruism is rationally doing something for someone else's sake. A sparrow works all day long to feed its chicks, but that is not altruism, at least as Nagel means it. A sparrow acts out of instinct rather than from any rational faculty. The book aims to explain how it is possible for a person to act rationally for someone else's sake.
This is a psychological question; it is about how we can act. Nevertheless, Nagel's method is philosophical. He conducts an a priori investigation of what a rational person's psychology must be like. His work is within philosophical psychology. You might wonder how an a priori investigation of rationality can tell us about empirical psychology, that is, about how people actually act. I shall return to this question in section 7.
1 For example, Davidson (1963).

THEORIA, 2009, 75, 79-99. doi:10.1111/j.1755-2567.2009

Since Nagel is concerned with altruism, which is a moral motive, his work also falls within ethics. He goes so far as to say "I conceive ethics as a branch of psychology" (Nagel, 1970, p. 3). In the course of this paper, I shall make a case for the autonomy of ethics, separating it from psychology.
But my main purpose is to amend the details of Nagel's philosophical psychology. I shall not discuss altruism; that is not the subject of this paper. I shall concentrate on the question of how rational acts in general should be explained. More accurately, I shall consider what rational process can give rise to an act. Strictly, I attribute rationality not to an act itself but to the process that leads to it; nevertheless, I shall continue to use the term "rational act" for an act that results from a rational process.
Nagel also aims to describe the rational process that can give rise to an act. His description is founded on reasons. He claims that reasons rationally motivate us to act. I shall argue that we should think of rational motivations as arising, not from reasons, but from judgements about what we ought to do. I shall refine the appropriate notion of motivation, and in section 8 I shall suggest that we can motivate ourselves by the activity of reasoning.

Some Explanatory Schemes
Naively, it seems that the paradigm of a rational act is one that is done for a reason. What apparently separates rational creatures from the rest of the world is that they have reasons, and they do things for reasons. They also believe things for reasons, despise things for reasons, enjoy things for reasons and so on. Nagel examines specifically practical rationality, which is rationality of action. Acting for a reason seems the paradigm of practical rationality. An altruistic act is one that is done for an altruistic reason: the reason that it will benefit someone else.
It is therefore natural for Nagel to conduct his argument in terms of reasons. When you act for a reason, the reason explains your act through a process that is rational. This gives us a very primitive scheme for explaining rational action:

Scheme A
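The original diagram is not reproduced in this text. From the description that the text gives of it, in which an act is explained by a reason and an arrow marks an explanatory connection, it can be sketched roughly as:

```text
reason ──→ act
```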
Scheme A is the first of a sequence of schemes in this paper. Each will develop out of the previous one. As far as scheme E, I shall try to follow Nagel's own thinking in The Possibility of Altruism. Doing so does involve some points of interpretation, which I may have got wrong. I shall also have to cut through some complications. So I may not succeed in representing Nagel very accurately. But that does not really matter, because I am not really trying to expound his argument.
Instead I am exploiting it. I shall use it to lead the way towards what I believe to be a better scheme for explaining rational action. Only the final schemes, G and H, are ones I find satisfactory.
It will become clear along the way that I am not trying to find a universal scheme. There are different sorts of rational action. Each is governed by a different requirement of rationality, and each is explained in a different way. In section 7 I shall briefly describe instrumental rationality, which is one sort. But this paper is principally aimed at the rationality of acting for a reason. By the end of the paper, this will have evolved into the rationality of intending to do what you believe you ought to do. I call this "enkratic rationality". It is very different from instrumental rationality.
Each of my schemes is a framework for explaining rational action. It sets out some steps in the explanation, and leaves room for details to be filled in. In my diagrams, I use an arrow to mark an explanatory connection. According to scheme A, an act is explained by a reason. Details need to be filled in about the process by which that happens. If this is to be a scheme for rational action, the process has to be in some way a rational one.
To start filling in details, we can break the process shown in scheme A into two steps, by using the concept of a motivation. For now, I shall adopt a very broad notion of motivation: a motivation of yours to do a particular act is any disposition of yours to do the act. Motivations understood this way include desires, intentions, and other sorts of dispositions to act; we shall come to others in section 3. They also include dispositions that are defeasible, so that, even if you are motivated to do something, you may not actually do it. Most authors employ a narrower notion of motivation than this. Later I shall adopt a narrower one myself.
We get the scheme:

Scheme B
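The original diagram is not reproduced in this text. Following the surrounding description, it can be sketched roughly, with "⊃" standing in for the hook (a necessary connection) and "→" for the arrow (a causal, defeasible connection):

```text
reason ─⊃ motivation ──→ act
```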
In this scheme, I have shown the connection between the reason and the motivation by a hook instead of an arrow. I use a hook to indicate that the connection is necessary. I put it there out of deference to Nagel, who embraces a view known as "internalism" about reasons. I shall take internalism to be the view that you cannot have a reason to do something unless you are motivated to do it. To put it another way: a reason entails a motivation. Though the connection between a reason and a motivation is necessary, it may nevertheless be explanatory. Indeed, Nagel thinks that, when you have a reason to do something, you are motivated to do it because of the reason (Nagel, 1970, p. 13). Still, the connection is different in type from the one between the motivation and the act. This latter connection is explanatory in a different way. It is causal and contingent, since your motivation may be defeated and you may not do what you are motivated to do. I use an arrow for it.
Next, an amendment. Scheme B does not correctly represent Nagel's opinion. Nagel does not think that a motivation to do something is entailed by a reason to do it; he thinks it is entailed by a judgement that you have a reason to do it. Nagel is what is these days called a "judgement-internalist". He thinks that you do not properly judge that you have a reason to do something unless you are motivated to do it. Other philosophers think that reasons themselves entail motivation, 2 but for Nagel it is judgements about reasons that do so. 3 Indeed, he often speaks of the "motivational content" of these judgements, which suggests that a motivation is part of their content. We get:

Scheme C
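The original diagram is not reproduced in this text. Following the surrounding description, with "⊃" standing in for the hook (a necessary connection) and "→" for an arrow (an explanatory connection), it can be sketched roughly as:

```text
reason ──→ reason-judgement ─⊃ motivation ──→ act
```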
The "reason-judgement" in this scheme is the judgement that you have a reason to do the act. To fit into the scheme, you do not need to make a judgement about what the reason is.
I have put an arrow between the reason and the reason-judgement. For the sake of the diagram, I assume that when you judge that you have a reason to do something, that is because you do have one. In practice, you sometimes judge wrongly: you have a reason-judgement when you have no reason. In a case like that, the reason would drop off the left of the diagram. But the rest of the scheme for rational action would stay in place. A false reason-judgement can explain an action just as well as a true one can, and in the same rational manner. Consequently, the connection on the left will later turn out to be redundant.

Opposing Motivations
For almost any potential action of yours, you will have reasons to do it and reasons not to do it. Probably you will judge that you have reasons to do it and reasons not to do it. In the diagrams, when you judge you have a reason to F, I shall call your judgement a "reason-judgement to F". All your reason-judgements to F and your reason-judgements not to F, if you are rational, come together to determine whether you F or do not F. That suggests the scheme:

Scheme D
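The original diagram is not reproduced in this text. Following the surrounding description, in which each reason-judgement entails a motivation (hook, "⊃") and the motivations together explain the act (arrows), it can be sketched roughly as:

```text
reason-judgements to F     ─⊃ motivations to F     ──┐
                                                     ├──→ F or not-F
reason-judgements not to F ─⊃ motivations not to F ──┘
```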
But how, precisely, does the coming together of these reason-judgements work? Each one entails a motivation. I have drawn the diagram with arrows coming from the motivations, so in the diagram it is the motivations that together explain your act of Fing. How would that work?
In earlier schemes it was natural to make the explanation of your act pass through your motivation. I assumed implicitly that you had just one motivation to do a particular act, and no contrary motivation. The one motivation is a disposition to do the act, which causes the act. We might think of it as an impulse pushing you towards the act. Since this impulse is not opposed, it causes you to do the act. Now we have conflicting motivations, we can still think of them as impulses, but now they push in opposite directions. We may suppose that each has a strength, and you will be moved to act one way or the other by whichever motivations have the greater aggregate strength. You do what you have a stronger motivation towards.
But this metaphor of impulses (which I take from David Hume, 1978, book 2, part 3, section 3) does not produce a plausible explanation of rational action. For rational action, a judgement is involved; when you judge that reasons are opposed, you need to make a judgement about what you ought to do. True, making this judgement may involve your weighing up opposing reasons. You may therefore take the reasons to have weights of a sort. But those are normative weights that help determine what you ought to do. They are not motivational strengths in the form of impulses that push you towards an act. In so far as scheme D suggests that motivations act as impulses, it is not a good scheme.
However, we do not have to think of motivations as impulses. I started by adopting the very broad definition of a motivation as any disposition to act. This allows for motivations that are not impulses. For instance, we might identify the motivation associated with a reason-judgement as the reason-judgement itself. A judgement that you have a reason to do something constitutes a sort of disposition to do that thing, at least if you are rational. When you judge that you have a reason to do something, if you are rational that judgement plays a part in determining your judgement about what you ought to do, which in turn determines what you do. That makes your reason-judgement a sort of disposition to act. It therefore fits my definition of a motivation. We might identify its motivational strength with what you judge the normative weight of the reason to be.

Indeed, I think this is close to Nagel's own intention. Nagel defines what he calls "motivational content" this way:

. . . I explained the sense in which . . . practical judgments possess motivational content; the acceptance of such a judgment is by itself sufficient to explain action or desire in accordance with it, although it is also compatible with the non-occurrence of such an action or desire. (Nagel, 1970, p. 109, original italics; see also p. 67)

The motivational content of a judgement is evidently not a desire, since it is the feature of a judgement in virtue of which it can explain a desire. 4 Hume thinks of a desire as an impulse, but Nagel does not think of motivational content that way. For him, motivational content is a feature of a "practical judgement", by which he means what I mean by a "reason-judgement". Motivational content is the feature of a reason-judgement in virtue of which it can explain an act or a desire.
Moreover, Nagel seems to think of motivational content as normative weight. He speaks of motivational content as providing "justification for doing or wanting" (ibid., p. 65). "Justify" is a normative word. Furthermore, he says:

. . . it should be possible to account for the motivational content of present tense judgments about prima facie reasons in terms of their capacity to support more conclusive judgements about sufficient reasons. (ibid., p. 66)

The connection between prima facie reasons (which these days we generally call "pro tanto" reasons) and sufficient reasons is a normative one. The way in which your judgements about prima facie reasons can support your more conclusive judgements about sufficient reasons is through your judgements of the normative weights of the prima facie reasons. So this quotation seems to describe motivational content as normative weight.
That is consistent with my initial definition of "motivation". But I am now going to adopt a narrower notion of motivation, so as to separate motivations from judgements of normative weight. A narrower notion is more useful.
For one thing, it is helpful in explaining irrational action. Suppose you are offered some cake. You believe it to be delicious, and you believe that constitutes a reason to eat it. So you judge you have a reason to eat the cake. You also believe that eating the cake will damage your health, and you believe that constitutes a reason not to eat it. So you judge you have a reason not to eat the cake. Suppose you judge that the second reason has greater normative weight than the first, so you judge you ought not to eat the cake. But suppose that, irrationally, you eat it all the same. A way of explaining your action is to say that you are more strongly motivated by your belief that the cake is delicious than by your belief that it will damage your health. This implies that the motivational strength of your beliefs does not match your judgement of their normative weights. This is a natural way of explaining your action, and it depends on separating motivation from judgements of normative weight.
A motivation to F of this narrower sort is not just any disposition to F, but one that is further along the road towards action than a reason-judgement is. In the cake example, the motivations work as impulses; there are opposing motivations and the stronger one wins. But motivations of this narrower sort are not necessarily impulses. They may determine your action through a rational process. For instance, a motivation to F may be an intention to F, and in section 7 I shall describe how intentions can determine action through rational processes. However, motivations of this sort are not normative judgements, and they do not determine your action through normative judgements.
In explaining rational action, the narrow notion of motivation gives us this more detailed, sharper scheme:

Scheme E
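The original diagram is not reproduced in this text. Following the surrounding description, it can be sketched roughly as below, with "⊃" standing in for the hook (a necessary connection) and "→" for an arrow. The motivations associated with the individual reason-judgements are omitted from the sketch, since, as the text explains, they lie off the explanatory path:

```text
reason-judgements to F     ──┐
                             ├──→ ought-judgement ─⊃ motivation ──→ act
reason-judgements not to F ──┘
```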
I said that rational action when reasons conflict calls for you to make a judgement about what you ought to do. This ought-judgement appears in scheme E. It entails a motivation, which in turn leads you to act. Your reason-judgements contribute to your ought-judgement, and their contribution is a matter of normative weight rather than of motivation. I have still assumed that motivations are associated with each of your individual reason-judgements, but those motivations are off the explanatory path of rational action.
I am now using "motivation" in a way that differs from Nagel's. Apart from that different terminology, scheme E differs from Nagel's views in only one significant respect. In the passage I quoted above, Nagel recognizes that reason-judgements (in his terms, judgements of prima facie reasons) contribute to rational action through a judgement of sufficient reason. In scheme E, they contribute through an ought-judgement. That is the difference.
There is a reason why an ought-judgement rather than a judgement of sufficient reason appears in scheme E. If there were a judgement of sufficient reason in that place, it would have to entail a motivation. But that would lead to a problem. To judge you have sufficient reason to F is to judge that it is permissible for you to F, in other words, that it is not the case that you ought not to F. On some occasions you may judge that you have sufficient reason to F and also that you have sufficient reason not to F. These judgements may be rational and indeed true: it may truly be permissible for you to do something and also permissible for you not to do it. But if the judgements entail motivations, you will have a motivation to F and a motivation not to F. You will have opposing motivations once more, and once more there is the problem of how they are resolved.
I do not say this is an insoluble problem, but to avoid having to solve it, I have chosen to deal only with cases where you form a stronger ought-judgement rather than a weaker judgement of sufficient reason. This means that scheme E does not apply to all cases of rational action. That is not a problem. I do not aim for a universal scheme of explanation, because I recognize there is more than one pattern of rational action.

Now that I use "motivation" more narrowly, you might reasonably doubt that the connections shown by hooks in scheme E are really necessary. I do not insist they are. I leave hooks in schemes E and F, but only because it is convenient to postpone a discussion of internalism until section 6.

Autonomous Normativity
Scheme E shows you making an ought-judgement, which motivates you. For the explanation of rational action, it does not matter how you come by this judgement. Just for the sake of the diagram, I assumed you acquire it from your judgements about reasons. I assumed you judge whether or not you ought to F by weighing up what you judge to be the weights of your reasons for and against Fing. But in my next scheme I drop this assumption. This next scheme is a big departure from Nagel. It separates the normative from the motivational, which he deliberately does not do. 5 It is:

Scheme F
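The original diagram is not reproduced in this text. Following the surrounding description, its two parts, which the text notes are deliberately left unconnected, can be sketched roughly as below, with "⊃" standing in for the hook and "→" for an arrow:

```text
(normative part)
reasons to F     ──┐
                   ├──→ you ought to F (or not to F)
reasons not to F ──┘

(psychological, explanatory part)
ought-judgement ─⊃ motivation ──→ act
```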
The top part of the diagram is the normative part; the lower is the explanatory, psychological part.
The top part is where it is determined what you ought to do and why you ought to do it. Here we can locate ethics as an autonomous discipline rather than a branch of psychology. More than just ethics goes into the top half too, because normativity is wider than ethics. For instance, self-interest is also included, because self-interest has an influence on what you ought to do. Indeed, what you ought to do for your own interest may be determined by the weighing of opposing reasons, in just the way my diagram shows. When you are offered a slice of cake, the damage it will do to your health is a reason against eating it, which weighs against its delicious taste, a reason for eating it. Both are reasons stemming from your own interest.
Since this part of the diagram is normative, the arrows that connect reasons to ought do not show causal connections. The connections are contingent and explanatory, even so. Your reasons to F and your reasons not to F together determine whether or not you ought to F. In the diagram, what you ought to do is determined by a balance of reasons, but I do not insist it is always determined that way. For instance, it might sometimes be determined directly by a deontic rule, such as the rule that you ought not to kill an innocent person. A rule like this would not be weighed against anything. How oughts are determined is not a subject for this paper; it is a question for substantive normative theory. The diagram simply illustrates one possible case.
The lower part of the diagram is the scheme for explaining rational action. This scheme starts from a judgement of yours about what you ought to do. I have deliberately not specified in the diagram where this judgement comes from. For one thing, I have not drawn any link to it from the top part of the diagram. You might arrive at your ought-judgement in various ways. For example, you might do it by deliberating in a way that mimics the normative determination of ought: you might weigh reasons in your own mind. For another example, you might be told by a friend what you ought to do. Your friend might not tell you why you ought to do it, but you might nevertheless believe what she tells you because you trust her. For the explanation of action, it does not matter how you arrive at your ought-judgement.
The judgement need not even be true. A false ought-judgement explains an action just as well as a true one does, and in just the same way. Indeed, the ought-judgement may not even be formed rationally. If it is not, you are not entirely rational. Nevertheless, the process that takes you from the judgement to an act may be rational. So the scheme is still a scheme for explaining rational action.

Acting for a Reason
Philosophers often make a contrast between normative reasons and motivating reasons. 6 The property of being a normative reason is the property of playing a particular role in determining what you ought to do. The particular role needs to be spelt out, but I shall not spell it out in this paper. 7 The reasons shown in the top part of scheme F are normative reasons, since they help to determine what you ought to do.

6 A leading example of a philosopher who insists on this distinction is Michael Smith. See Smith (1994).
The property of being a motivating reason is the property of playing a particular role in determining what you do. We might call it the role of motivating you. Since normative reasons appear in the top part of scheme F, you might expect motivating reasons to appear in the lower part. Why do they not?
Because they would add nothing useful. To be sure, motivating reasons exist. When you have a normative reason for doing an act, that reason may motivate you to do the act. If it does, the normative reason is also a motivating reason. You may also be motivated to do an act by something that is not a normative reason at all. This too may be a motivating reason. So we have uses for the term "motivating reason". In fact, we have too many uses. This term can refer to things of too many different sorts. Those things do not have enough in common to make the concept of a motivating reason useful analytically. It is too much of a ragbag. That is why it does not figure in my diagram.
The concept of a motivating reason arises from the concept of acting for a reason. When you act for a reason, that reason is a motivating reason. (There are other motivating reasons too. You may have a motivating reason even though you do not act for that reason, perhaps because it is overridden by an opposite one.) But unfortunately, our notion of acting for a reason is itself too broad and indefinite to be analytically useful in explaining rational action. The same therefore goes for the notion of a motivating reason.
Acting for a reason seems at first a paradigm of rational action. That is why I started out from it in scheme A. However, it is actually very far from a paradigm of rational action. When you act for a reason, you may not act rationally at all. Indeed the process by which you come to act may have little about it that is rational.
True, we would not say you act for a reason if you were not rational in a sense. You must have a rational faculty; you must be a rational creature. We normally attribute the property of acting for a reason only to rational creatures. However, it is not true that, necessarily, when you act for a reason, you act rationally.
For instance, suppose you eat a cake because you believe it to be delicious, although you also judge you ought not to eat it because of its effect on your health. You eat the cake for a reason, which is that you believe it to be delicious. That is a motivating reason for you to eat the cake, and it explains your eating it. But your act is not rational, because you eat the cake whilst judging you ought not to. In this case the motivating reason explains your act, but it does not explain rational action.
7 It is spelt out in Broome (2004).

Some philosophers would say that your motivating reason in this case is that the cake is delicious, rather than that you believe it to be delicious. 8 They take the reason to be the content of your belief rather than the belief itself. I shall not take sides on this issue. Either way, your act is not rational.
So, when you act for a reason, you may nevertheless act irrationally. It is commonly said that at least the explanation of why you act must involve your rational faculty. But I am not sure even that is so. Suppose your belief that the cake is delicious causes you to grab the cake and eat it absent-mindedly, without thinking. The same might happen in a dog. I think we might still say you ate the cake for a reason, but in this case your rational faculty is not involved.
The problem is that the concept of acting for a reason is unstable. It hovers between the normative sense of "a reason" and another sense that does not involve rationality at all. We use the latter sense when we say "The reason for the moon's red colour is that it is about to be eclipsed". Here "the reason for" means "the explanation of". Using the same sense, we might say "The reason for your eating the cake was that you believed it to be delicious", even when your rational faculty is not involved.
Philosophers try to avoid the instability by means of careful grammar. They distinguish between the reason for your eating the cake and the reason for which you eat the cake. To say "The reason for your eating the cake was your belief that it was delicious" does not imply that your rational faculty was involved. To say "The reason for which you ate the cake was your belief that it was delicious" does imply your rational faculty was involved. I am not sure the ordinary concept of acting for a reason registers this distinction very accurately. But with careful philosophical practice, perhaps we can maintain the principle that, when you act for a reason, the explanation of why you act involves your rational faculty.
In any case, it is plain that you can act for a reason and yet act irrationally. Indeed, at first sight it is puzzling how acting for a reason can ever be rational, except in rare cases. In rare cases, there is just one normative reason to do a particular act. If you do the act for that reason, you are acting rationally. But for nearly every act, there are normative reasons to do it, and normative reasons not to do it. To be rational in those cases, you need to act on what you judge to be the balance of reasons; all the reasons need to figure in the explanation of your act. For nearly every act, therefore, it seems that acting for just one reason cannot be rational.
However, it surely can be. We often act for a single reason, and we are surely not irrational every time. So there is a puzzle. I see a number of different solutions to it. I shall mention some but not all of them. Each reveals a different sort of motivating reason. Together they help to show what a ragbag we are dealing with.
One solution is that sometimes you judge you ought to do a particular act, and you believe that a particular normative reason explains why you ought, and you do the act for that reason. You act for just this one reason, and your act is rational.
It may be that you believe the reason entails that you ought to do the act. For instance, suppose you believe you ought to see Mr Reed, and you believe that is because he is the best dentist around and you ought to see the best dentist around. Then, you might see Mr Reed for the reason that he is the best dentist around and you ought to see the best dentist around. If you act for this one conjunctive reason, your act is rational.
Alternatively, you might believe the reason explains why you ought to do the act, even though it does not entail that you ought to do it. One fact may explain another, even though further facts are relevant and would figure in a fuller explanation. For instance, there is a good crop of whortleberries this year, and the explanation is the plentiful rainfall that fell during the summer. Other factors were also relevant: frosts stopped early in the spring, and there was just about the right amount of morning sunshine. A fuller explanation would include these factors, but we may still say that the crop is good because of the plentiful rainfall. In saying this, we need not make any sharp distinction between explanatory factors and enabling conditions (see Dancy, 2004, ch. 3); we may leave it to the context to determine what counts as the explanation.
Suppose it is going to rain. You believe it is going to rain, and you judge that you ought to take an umbrella for that reason. You believe other reasons are also relevant. You believe that the inconvenience of carrying an umbrella is a reason not to take one, and that the slight chance you will need to fight off a marauder is a reason to take one. But you have an established belief that these other reasons are insignificant when it rains. You consequently believe that the fact it is going to rain is enough to make it the case that you ought to take an umbrella. If you take an umbrella for that one reason, your act is rational.
The dentist and umbrella examples are cases where you judge you ought to do a particular act, so they fit my scheme F of explanation. They are cases where you act for a single motivating reason that is also a normative reason. I could have incorporated the motivating reason into the diagram, as part of the explanation of why you judge you ought to do the act. It would have appeared to the left of the ought-judgement.
But I am more interested in how the ought-judgement rationally explains an act, rather than in how the ought-judgement is explained. In other cases the ought-judgement will not be explained by a single reason. In order to keep the scheme general, I have therefore not incorporated a motivating reason here.
In other cases where you act rationally for a single reason, the reason for which you act is not a normative reason. Here is one type of case. Sometimes, when you act for a reason, the reason is that you judge you ought to do the act. Your judgement is a motivating reason for you to do the act. So this is an instance of rationally acting for a single reason. There is no need to add a motivating reason to my diagram in this case, since the reason is already there in the form of your ought-judgement.
Your ought-judgement -though a motivating reason -is not a normative reason for you to do the act. As I said at the beginning of this section, a normative reason for you to do an act is something that plays a particular role in determining whether or not you ought to do the act. Except perhaps in weird cases, your judgement that you ought to do an act does not play any role in determining whether or not you ought to do it. So in a case like this, your motivating reason is not a normative reason.
Here is a very different type of case. Often you do something in order to fulfil an intention of yours. You take what you believe to be a means to an end that you intend. For instance, you catch a bus in order to get to a film you intend to see. When you act like this, we commonly say you act for a reason. We implicitly treat your intention as a motivating reason. It explains your act of taking the means, and explains it by a rational process. So here is another case where acting for a single reason is rational.
However, an intention is not a normative reason for acting. When you intend to do something, your intention does not constitute a reason for you to do it. Indeed, you may have no reason to do what you intend to do, and no reason to take any means to it. (I realize this is a controversial point, and unfortunately I cannot defend it here. 9 ) Still, if you do take a means in order to fulfil an intention, we say you act for a reason. It is an unfortunate feature of our concept of acting for a reason that it applies in this case. It does no harm if we are careful with the distinction between normative and motivating reasons. But we are sometimes careless, and then it leads to confusion.
Acting rationally to fulfil an intention is genuine rational action. It is an exercise of instrumental rationality, which I shall return to in section 7. I shall explain that it is very different from the rationality of doing something because you judge you ought to. It is governed by a quite different rational principle. It can be fitted into my diagram, but in a different place. This paper concentrates particularly on the rationality of doing something because you judge you ought to.
It only creates confusion to try to treat these two sorts of rationality together. They both fall under our concept of acting for a reason. This is not a useful concept, because it includes such different things.

Intentions
I shall next tighten up the scheme of explanation for rational action by organizing it around a specific sort of motivation: an intention. I narrowed the notion of motivation in section 3; now I am narrowing it further. I shall not try to specify precisely what distinguishes an intention from other sorts of motivation. A quick way of describing the distinction is to say that an intention to do something involves a commitment to do it, whereas a motivation in general does not: you may be motivated to do something without being committed to doing it. To make this distinction more precise, I would need to specify just what sort of commitment is involved in an intention. I shall not try to do that here. 10 I arrive at:

Scheme G
Besides the change to an intention, scheme G differs from scheme F in three other ways. First, I have deleted the normative part of the scheme. Having made a case for an autonomous domain of normativity, I leave it aside.
Second, I have changed the hook between the ought-judgement and the motivation (now an intention) to an arrow. That is to say, I have dropped the internalist assumption. I assume it is possible for you to believe you ought to do something without intending to do it. When I was dealing with a motivation of a very general sort (any sort of disposition to do an act) there was a case for keeping the hook there. There is a case for saying that, if you believe you ought to do something, you must have some sort of disposition, perhaps a defeasible one, to do it. 11 On the other hand, common sense suggests that you might believe you ought to do something without intending to do it. This is the condition of an akratic, and common sense suggests akrasia is possible. I assume the connection between an ought-belief and an intention is contingent; I shall say more about it in section 8.
True, common sense is not decisive. Some metaphysical views entail there should be a hook there. According to Allan Gibbard's version of noncognitivism, a belief that you ought to F is nothing other than a sort of intention to F (Gibbard, 2003). I am setting these metaphysical views aside.
Third, I have changed "judgement" to "belief". "Judgement" is Nagel's term. I think he uses it because he thinks of a belief as "merely classificatory", by which he means that it cannot by itself entail a motivation (Nagel, 1970, p. 109). So as long as I kept a hook in the diagram, I kept the term "judgement" there out of deference to Nagel. But my arrow does not signify entailment, so I am free to use "belief", which I prefer.
Organizing the explanatory scheme around an intention has an important advantage in understanding rational action: it allows me to say more precisely where and how rationality is involved. Scheme G marks two steps on the way to doing an act: an ought-belief explains an intention, which in turn explains an act. Each of these steps may involve rationality. The first involves what I shall call "enkratic rationality", and it is regulated by a requirement of rationality that I call "Enkrasia". The second step usually (though not always) involves instrumental rationality, and it is regulated by what I shall call the "Instrumental Requirement" of rationality. Section 7 examines instrumental rationality and Section 8 enkratic rationality.

Instrumental Rationality and Reasoning
The commitment involved in an intention has a rational aspect. It is complicated to specify just what this amounts to. It is not simply that rationality requires you to do what you intend to do, because it does not require that. Often you change your mind and give up an intention before you carry it out, and in doing so you may be rational. So what exactly does rationality require of the process that goes from an intention to an act? In this section, I shall not try to answer the whole of that question. I shall concentrate on instrumental rationality, which is just one part of it.
This means I shall concentrate on cases where there is a call for instrumental rationality. These are cases where you believe that achieving the end you intend requires you to take some means to the end. Sometimes you can achieve an end that you intend without taking a means. When you intend to raise your arm, you may just raise it without using any means. I shall concentrate on cases that you believe are not like that.
This requirement of rationality applies to those cases:

Instrumental Requirement. Rationality requires of you that, if you intend to E and you believe your Ming is a means implied by your Eing, you intend to M.

This is only a rough formulation of the requirement; I have put an accurate one in a note. 12 When I say "you believe your Ming is a means implied by your Eing", I mean you believe, first, that your Ming is a means to your Eing, and second, that if you were not to take this means, you would not E.

The Instrumental Requirement explains why a rational person intends to do what she believes is a means implied by an end she intends: she would not count as rational if she did not. But it does nothing to explain how a person can come to be rational in that respect. In section 1 I raised the question of how an a priori investigation of rationality can tell us about empirical psychology. The answer is that it cannot. By an a priori investigation we can work out what rationality requires of us. But then we separately need to explain how a person comes to satisfy those requirements. That task remains.

12 Instrumental Requirement. Rationality requires of N that, if (1) N intends at t that e, and (2) N believes at t that, if m were not so, because of that e would not be so, and (3) N believes at t that, if she herself were not then to intend m, because of that m would not be so, then (4) N intends at t that m.
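The accurate formulation in the note has a distinctive logical shape, which can be displayed schematically. The notation below is my own illustrative shorthand, not notation from the text: I write $I_t(\cdot)$ for what N intends at t, $B_t(\cdot)$ for what N believes at t, and $\Rightarrow$ abbreviates the explanatory conditional "if ... were not so, because of that ... would not be so".

```latex
% Schematic gloss of the Instrumental Requirement (my notation, not the text's):
\text{Rationality requires of } N \text{ that:}\quad
\bigl(\, I_t(e) \;\wedge\; B_t(\neg m \Rightarrow \neg e) \;\wedge\; B_t(\neg I_t(m) \Rightarrow \neg m) \,\bigr)
\;\longrightarrow\; I_t(m)
```

Displayed this way, a natural reading is that the requirement takes wide scope: rationality governs the whole conditional, so you fail the requirement only by having all three antecedent attitudes while lacking the intention that m.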
We can start to fulfil it by saying that a rational person has a disposition to satisfy the Instrumental Requirement. This remark has a little bit of explanatory force. It does not say merely that a rational person is such that she satisfies the Instrumental Requirement; that would go no way towards explaining how she does so. It says that the explanation of why a rational person satisfies the Instrumental Requirement is located within the person's own constitution. A rational person is constituted in such a way that some sort of causal process within her commonly brings her to intend what she believes is a means implied by an end she intends. That is the beginning of an explanation.
But it leaves a lot still to be explained. What is the process by which the disposition works? Often it is through automatic processes that we do not control and are not conscious of. Much of our rationality is achieved that way. When you form the intention of getting coffee, you automatically find yourself intending to walk along to the coffee-room. You are disposed to intend what you believe is a means implied by an end that you intend, and in this case your disposition works through an automatic, unconscious process.
However, sometimes automatic processes fail to deliver the result. We are not automatically rational in all respects. Sometimes our automatic processes leave gaps in our rationality. Sometimes they leave us intending an end but not intending a means that we believe is implied by it.
In cases like this, we have a self-help device that can help us repair some of the gaps left by our automatic processing. We can improve our rationality by our own activity. This activity is conscious reasoning. In particular, we can bring ourselves to satisfy the Instrumental Requirement by a process of conscious instrumental reasoning.
In this paper, by "reasoning" I shall always refer to conscious reasoning. I cannot give a full account of it in this paper, nor try to justify my claim that it is an activity of ours -something we do. Instead, I shall briefly give an example of theoretical reasoning, followed by one of instrumental reasoning, in the hope that they appear plausible.
Suppose you are on a cruise. You wake up one morning to the sound of gulls, so you believe there are gulls about. You know that gulls are never far from land, so you know that if there are gulls about, land is nearby. However, since you are still groggy from sleep, you have not yet come to believe land is nearby. You have not yet brought your general knowledge to bear on your particular belief. I assume it matters to you whether land is nearby. Given that, you are at present not fully rational. Rationality requires you to believe what follows by modus ponens from things you believe, if it matters to you. But you do not. Automatic processes have let you down.
However, you can improve your rationality by your own activity. You can say to yourself: "There are gulls about; if there are gulls about, land is nearby; so land is nearby". Here you call to mind the contents of your premise-beliefs. You put these contents together in a way that brings you to draw a conclusion from them, and you come to believe the conclusion. The last of your three sentences expresses a new belief that you acquire in the course of your reasoning. Once you have this belief, you satisfy the requirement of rationality to believe what follows by modus ponens from things you believe, if it matters to you.
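Schematically, the gull reasoning is an instance of modus ponens carried out on the contents of your beliefs. Writing $G$ for "there are gulls about" and $L$ for "land is nearby" (the letters are my own labels), the step is:

```latex
% Theoretical reasoning: from two premise-beliefs to a new belief.
\frac{\;B(G) \qquad B(G \rightarrow L)\;}{B(L)}
\qquad \textit{(you come to believe what follows by modus ponens)}
```

The horizontal line marks the transition your reasoning effects: you start with the premise-beliefs above it and acquire the conclusion-belief below it.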
Now an example of instrumental reasoning. Suppose you intend to visit Venice and, as the date approaches, your travel agent reminds you that you have not bought a ticket. You believe you will not visit Venice unless you buy a ticket. However, automatic processes have not given you the intention of buying a ticket. You may say to yourself: "I am going to visit Venice; I shall not visit Venice unless I buy a ticket; so I shall buy a ticket". The first of these sentences expresses your intention of visiting Venice. The second expresses your belief that you will not do so unless you buy a ticket. The third expresses an intention to buy a ticket. You did not previously have this intention, but you acquire it in the course of your reasoning.
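The Venice reasoning has a parallel shape, except that its first premise and its conclusion are intentions rather than beliefs. Writing $V$ for "you visit Venice", $T$ for "you buy a ticket", $I(\cdot)$ for an intention and $B(\cdot)$ for a belief (again, the lettering is my own gloss), the step is:

```latex
% Instrumental reasoning: from an intended end and a means-belief
% to an intended means.
\frac{\;I(V) \qquad B(\neg T \rightarrow \neg V)\;}{I(T)}
\qquad \textit{(from intending the end to intending the means)}
```

The second premise renders "I shall not visit Venice unless I buy a ticket" as the belief that not buying a ticket would mean not visiting Venice.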
It is important to recognize that no normative belief enters this reasoning. Nor does the Instrumental Requirement of rationality mention any normative belief. You do not derive your intention to take the means from a normative belief that you ought to take it. You derive it from your intention to achieve the end.
Indeed, you may not have a belief that you ought to take the means. For one thing, you may not believe you ought to achieve the end. You may have formed your intention to achieve the end on a whim. You may even have formed it akratically: you may have decided to go for this end while believing you ought not to. Even so, once you intend the end, the Instrumental Requirement applies to you. If you do not intend what you believe is a means implied by it, you are not rational.
I can add some instrumental rationality into my scheme for explaining rational action. I get:

Scheme H

Instrumental rationality appears towards the right of this scheme, downstream from your intention to achieve the end. Instrumental rationality is called for in action only once you have an intention. It leads from intending an end to intending a means.
In contrast, Nagel puts instrumental rationality into the domain of reasons. I believe he is thinking of normative reasons. This puts it at the left of the scheme, upstream from the ought-belief. In scheme F, it puts it in the top, normative part. Nagel's account of instrumental rationality is that reasons for action "transmit their influence over the relation between ends and means" (Nagel, 1970, p. 34). He means that, if you have a reason to achieve an end, that gives you a reason to take a means to the end. That is supposed to explain how rationality requires you to take a means to your end.
It is not necessarily true that, if you have a normative reason to achieve an end, that gives you a normative reason to take a means to it, at least not any means (see Broome, 2005). But in any case, the transmission of reasons from an end to a means cannot explain instrumental rationality. You may intend an end that you have no reason to intend, and that you believe you have no reason to intend. The transmission of reasons from ends to means cannot give you any reason to intend a means to that end. Nevertheless, instrumental rationality requires you to intend a means to it.
Since instrumental rationality does not stem from a normative belief, it is very different from enkratic rationality, which I am coming to in the next section. There is more than one sort of rational action, and more than one scheme for explaining it. In this paper, apart from this section, I have been considering rational action that stems from a normative belief. I now revert to that subject, leaving instrumental rationality aside.

Enkrasia and Reasoning
Consider now the left-hand explanatory step in scheme G, where an ought-belief explains an intention. This is the motivating connection; it is where you are motivated to act. How does that happen? I said in section 6 that the connection between an ought-belief and an intention is contingent: you can believe you ought to F without intending to F. However, the connection holds necessarily in a fully rational person. Necessarily, if you believe you ought to do something and do not intend to do it, you are not fully rational. Indeed:

Enkrasia. Rationality requires of you that you intend to F if you believe you ought to F.

This is only a rough statement of Enkrasia; I have put an accurate one in a note. 13 Here, I cannot defend Enkrasia by argument. I shall appeal to tradition instead. An akratic person is one who does not intend to do what she believes she ought to do. Akrasia has traditionally been regarded as a sort of irrationality. I am following that tradition.
Nagel explicitly denies the existence of a principle such as Enkrasia. To understand his meaning, remember that Nagel calls an ordinary belief "merely classificatory". About ought-beliefs he says:

If they were merely classificatory then a conclusion about what one should do would by itself have no bearing on a conclusion about what to do. The latter would have to be derived from the former, if at all, only with the aid of a further principle, about the reasonableness of doing what one should do. But then the original judgement about what one should do . . . would have turned out not to be a practical judgement at all, but merely a classification belonging among the premises of a genuine practical judgement. (Nagel, 1970, pp. 109-110; see the similar remark on p. 65)

In contrast with Nagel, here is how I see things.
A conclusion about what one should do is just a belief; it is merely classificatory if you want to put it that way. But it does have a bearing on a conclusion about what to do. Its bearing is that rationality requires you to do what you believe you should do. This is Enkrasia, which Nagel would see as a "further principle" of rationality. The original judgement is practical in that it is a judgement about what one should do. I shall explain that it can lead to an intention, which is a directly practical attitude.
Enkrasia explains why a rational person intends to do what she believes she ought to do: she would not count as rational if she did not. It remains to be explained how she can come to be rational in that respect. Section 7 provides a prototype explanation, which we can follow here.
First, we may say that a rational person has a disposition to satisfy Enkrasia. That is the beginning of an explanation, but only the beginning. We need to know the process through which the disposition works. Often this process is automatic. I said in section 7 that much of our rationality is achieved automatically, and this includes satisfying Enkrasia. When, after noticing the time, you come to believe you ought to go home soon, you may automatically come to intend to go home soon.
13 Enkrasia. Rationality requires of N that, if (1) N believes at t that she herself ought that p, and (2) N believes at t that, if she herself were then to intend p, because of that, p would be so, and (3) N believes at t that, if she herself were not then to intend p, because of that, p would not be so, then N intends at t that p. This formula contains some technical devices, which are needed to make it accurate. "She herself" is a reflexive pronoun, and the ungrammatical "ought that" is needed as a way of stating that the ought is owned by N.
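The accurate statement of Enkrasia in this note can also be glossed schematically. The notation is mine, not the text's: $B_t(\cdot)$ for what N believes at t, $I_t(\cdot)$ for what N intends at t, $O(p)$ for N's owned "ought that p", and $\Rightarrow$ for the "because of that" conditional.

```latex
% Schematic gloss of Enkrasia (my notation, not the text's):
\text{Rationality requires of } N \text{ that:}\quad
\bigl(\, B_t(O(p)) \;\wedge\; B_t(I_t(p) \Rightarrow p) \;\wedge\; B_t(\neg I_t(p) \Rightarrow \neg p) \,\bigr)
\;\longrightarrow\; I_t(p)
```

Set out this way, the structural parallel with the Instrumental Requirement is visible: both are conditionals within the scope of "rationality requires", and in both the consequent is an intention.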