

Utilitarianism and Evaluative Conflict between Actions

Makoto Suzuki

Abstract

As commonly understood, utilitarianism, also called consequentialism, often implies a conflict between the prescribed action and the preliminary actions (the prerequisites) of that very action. Suppose an action would have good consequences: for example, sending a check to a charity. Utilitarianism implies that, in this case, you ought actually to send a check to that charity. But now consider two prerequisites of that action: writing the check, or even bringing the check to the post office. These actions, taken by themselves, may not have the best consequences. For example, out of weakness of will you might fail to post the check. In that case, it would not be morally obligatory to write the check or to post it. Thus an action can be right while its parts (or prerequisites) are not. One would then be obliged to perform a given action while not being obliged to perform the preliminary actions it involves, because each of these actions has different consequences. This article examines several solutions to this dilemma: (1) the prerequisites are not genuinely actions that call for a moral choice; (2) the deontic status of the final action is determined by its parts (or prerequisites); (3) the deontic status of the action determines the status of its parts (or prerequisites). After setting out the difficulties attending each of these options, the article shows how the difficulties facing the third can be overcome.


Acknowledgments

I thank Justin D'Arms and Don Hubin for their penetrating comments. Thanks also to those who offered questions or comments on the presentation of this paper at the "Two Centuries of Utilitarianism" conference held on 4-5 June 2009 at the University of Rennes II, France.

Full text

Introduction


Utilitarianism, also known as Consequentialism,1 is commonly understood as the account according to which actions (and other evaluands) are right if and only if, and because, they have the best consequences.2 It is criticized for allowing evaluative conflict between motive and action: a right motive might lead an agent to perform a wrong act. This might be unproblematic,3 but Utilitarianism allows another evaluative conflict, one between actions. As Bergström and Castaneda first showed, taken literally the common account of Utilitarianism implies that, usually, the prerequisite or proper part of what you ought to do is not itself something you ought to do.4 Suppose your sending a check to a certain charity, say Oxfam, would have the best consequences if performed. Then, according to Utilitarianism, you ought to send a check to Oxfam. Now consider whether writing a check, or even bringing the check to a post office, would have the best consequences if performed. Perhaps it would not: you might fail to post the check to Oxfam out of, say, weakness of will. Then you ought not to write the check or bring it to the post office. Thus it is possible that you ought to perform an action while you ought not to perform its part or prerequisite, because the two can have different consequences. This is problematic because one cannot perform an action without performing its parts and prerequisites, and evaluative conflict between an action and its part or prerequisite leaves us wondering what to do.5

This paper considers several responses to this evaluative conflict between actions: (1) the action you might fail to perform is not an option, i.e., not something one can judge whether to perform; (2) the deontic status of an action is determined by its parts and prerequisites; and (3) the deontic status of an action determines the status of its parts and prerequisites. After pointing out the problems with each response, I present a fix for response (3) as the best solution.

1. Utilitarianism and Evaluative Conflict between Actions: the Analysis of the Problem

Suppose that, walking along the beach, you find a drowning child. You can succeed in picking up the child now, and your doing so will have the best consequences. Then, according to Utilitarianism, you ought to pick up the drowning child now. However, you might fail to pick up the child now. You might be an imperfect swimmer; the child might not be cooperative; or perhaps the indeterministic world just does not love you much. Thus, entering the sea, which is a part or prerequisite of picking up the child, might not have the best consequences. Then, while you ought to pick up the drowning child, you ought not to enter the sea.

This "Drowning-Child" case and the above "Donation" example illustrate the possibility that even if an action has the best consequences, performing merely its part or prerequisite does not. In such a case, according to Utilitarianism, while you ought to perform the action, you ought not to perform the part or prerequisite. This is problematic because you cannot perform an action without performing its parts and prerequisites.

A(P1, P2, P3, …, Pn) → Ca
but
Pk → Cpk, which is different from Ca
A: action; P1, …, Pn: parts or prerequisites of A; Ca: consequences of A; Cpk: consequences of Pk


Fred Feldman has made this point by noting that an action and its part or prerequisite have different causal capacities.6 This is misleading, because if we adopt the normal account of the consequences of an action (what would happen if the action were performed), an action and its parts or prerequisites can have the same best consequences even if they have different causal capacities. That is, if people carry through the action with the best consequences (the optimal action) whenever they carry out a part or prerequisite of it, then their performing a part or prerequisite of the optimal action will have the best consequences too. For, given this supposition, if you perform a part or prerequisite of the optimal action, you will also perform all its other parts and prerequisites, which leads to the same best consequences. So if people necessarily carried through the action with the best consequences whenever they carried out a part or prerequisite of it, the problem would not arise.

Actually, however, people routinely fail to carry through an action when they perform a part or prerequisite of it. For one thing, they might not have the intention to carry through the whole action. In the "Donation" example, even if people have the intention to go out, which is a prerequisite or part of sending a check to Oxfam, they might not have the intention to take the further steps required to mail the check to Oxfam. Then, even if these people go out, they might not send the check to Oxfam. For another thing, even if people intend to carry through an action, they might fail. For example, even if you intend to type "philosophy", you might instead type "phildophey". Or even if you intend to put contact lenses in your eyes, you might instead drop them in the sink. People sometimes fail even to move their limbs in the intended way. Omission is no different from action in this respect. For example, people often intend to stop jiggling their legs but fail. Such a failure to carry through an intended action (in the broad sense of "action" in which an omission can be a type of action) occurs because of environmental factors, because of psychological problems such as weakness of will, or perhaps because of the indeterminacy of causation in general.

Castaneda's semi-formal presentation of the problem (ibid.) takes the form of an inconsistency charge against Utilitarianism. He supposes the converse of the agglomeration principle, i.e., that obligatoriness distributes over logical conjunction: (DC) if S ought to do A and B, then S ought to do A, and S ought to do B. Utilitarianism holds that S ought to do A if and only if the net-value of A exceeds that of each alternative. Suppose that S ought to perform an act with two parts, P and Q, which we can call "P & Q". Given (DC), S ought to perform P and S ought to perform Q. If Utilitarianism is true, then the net-value of P & Q exceeds that of any alternative, including that of P and that of Q. Further, since S ought to do P, and S ought to do Q, it follows from Utilitarianism that the net-value of P (or of Q) exceeds that of any alternative, including that of P & Q. This is a clear contradiction.
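The derivation can be set out schematically. The following reconstruction is mine, not Castaneda's own notation: O abbreviates "S ought to do", v gives the net-value of an option, and each of P, Q, and P & Q is assumed to be an alternative to the others.

\begin{align*}
\text{(DC)}\quad & O(P \,\&\, Q) \rightarrow O(P) \wedge O(Q) &\\
\text{(U)}\quad & O(X) \leftrightarrow v(X) > v(Y) \ \text{for every alternative } Y \text{ to } X &\\
1.\quad & O(P \,\&\, Q) & \text{supposition}\\
2.\quad & O(P) \ \text{and} \ O(Q) & \text{from 1 by (DC)}\\
3.\quad & v(P \,\&\, Q) > v(P) & \text{from 1 by (U)}\\
4.\quad & v(P) > v(P \,\&\, Q) & \text{from 2 by (U)}\\
5.\quad & \text{contradiction} & \text{from 3 and 4}
\end{align*}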

However, this original presentation does not get to the crux of the matter. For it can be taken to show not that Utilitarianism violates (DC), but just that Utilitarianism in Castaneda's sense, combined with (DC), implies that it is never the case that someone ought to perform an act with two (or more) parts. Utilitarians can still maintain that it is right for someone to perform a certain act with two (or more) parts, for they think that rightness requires not the uniquely optimal consequences but merely one of the best consequences. And there also seems to be a quick fix to the above problem about obligatoriness. You might claim that Utilitarianism does not hold that S ought to do A if and only if the net-value of A exceeds that of each alternative. Strictly speaking, it holds that S ought to do A if and only if the net-value of A exceeds that of each alternative other than the actions to which A stands in a part-whole relationship or together with which A constitutes a larger action, and the net-value of A is equal to the net-value of each such action (that is, of each action to which A stands in a part-whole relationship or together with which A constitutes a larger action). If Utilitarianism is true and S ought to do P & Q, then, given (DC), S ought to do P and S ought to do Q. However, given the above formulation of Utilitarianism, the net-value of P & Q might not exceed that of P or that of Q; moreover, the net-value of P might not exceed that of P & Q and that of Q, and the net-value of Q might not exceed that of P & Q and that of P, either. Because the net-value of P & Q can be equal to that of P and that of Q, the inconsistency does not follow from (DC) and thus-formulated Utilitarianism alone.
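Semi-formally (the notation is mine: v gives net-value, Alt(A) the set of A's alternatives, and R(A, B) holds when A is a part of B, B a part of A, or A and B are parts of one larger action), the revised formulation reads:

\[
O(A) \;\leftrightarrow\; \forall B \in \mathrm{Alt}(A)\ \big[\, (\neg R(A,B) \rightarrow v(A) > v(B)) \ \wedge\ (R(A,B) \rightarrow v(A) = v(B)) \,\big]
\]

On this reading, O(P & Q), O(P), and O(Q) can hold together provided v(P & Q) = v(P) = v(Q), so (DC) alone no longer generates the contradiction.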

This manoeuvre, however, ultimately fails to save Utilitarianism. As I have pointed out, because people often fail to carry through the whole action after finishing off its initial part(s), it is often the case that the net-value of P & Q differs from that of P or that of Q. Given this assumption, together with (DC) and Utilitarianism as formulated above, the contradiction still follows, or so it seems. Under this assumption, together with Utilitarianism and the distributivity of rightness over logical conjunction, it is not even right for someone to perform a certain act with two (or more) parts. Suppose P & Q is right. Then, by the distributivity of rightness over conjunction, P is right and Q is right. According to Utilitarianism, the net-value of P & Q is optimal. For the same reason, that of P and that of Q are optimal. However, given the assumption that the net-value of P & Q differs from that of P or that of Q, this cannot be the case. In this way, the most serious challenge to Utilitarianism comes from people's performance failures and the resultant differences in consequences between an action and its parts or prerequisites.


Thus, apparently, utilitarians have to reject the distributivity of obligatoriness (and rightness) over conjunction, but this result is not easily acceptable.7 After all, the distributivity of obligatoriness is a theorem of standard deontic logic.8 One possible way out is to restrict the range of options.9 Strictly speaking, Utilitarianism holds that S ought to do A only if A is an option for S. And because the source of the problem is people's failure to complete the whole action, one might say that if people will definitely fail to perform an action (by which I mean that the probability of its performance is absolutely zero), then it will not be an option for them and Utilitarianism will not tell them to perform it. Whatever good consequences the action would have, it would not be what they ought to do. So if you will definitely fail to carry through an action, e.g., sending a check to Oxfam or picking up a drowning child, then evaluative conflict will not arise between the action and its part or prerequisite. In Castaneda's terms, if P & Q is not an option because S will definitely fail to perform it, then the trouble will not arise.


One problem with this reply10 is that, in general, neither your failure nor your success is certain. You might or might not fail to carry through actions such as sending a check to Oxfam or picking up a drowning child. Environmental factors, psychological problems, and the indeterminacy of causation often make the agent's accomplishment precarious but not completely unreachable: the success or failure of a performance is often a matter of higher or lower probability. This is why we cannot easily avoid evaluative conflict between actions: an agent ought to perform an action with the best consequences, but she might or might not carry through the action, so, given the risk of failure, performing a part or prerequisite of the action can fail to have the best consequences and hence fail to be the right action to perform.


To make the problem more vivid, let me expand on the cost-benefit structure of the "Drowning-Child" case. You can succeed in picking up the drowning child now, and if you do so, you have a 90% chance of saving the life of the child. If you do not pick up the drowning child, the chance of saving the child's life is zero, whatever else you attempt. Since your picking up the child largely depends on your returning safely, the risk to your health and life from picking up the child is slight: your success in picking up the child virtually guarantees your safety. In this case, it is plausible that your picking up the drowning child has the best consequences (what would happen if you picked up the drowning child is best), and Utilitarianism tells you to perform the action. Now, your entering the sea, a part or prerequisite of picking up the child, would have the same consequences if things went well and it became a successful step toward picking up the child. And things might go well. However, you might of course enter the sea but fail to pick up the child. If you enter the sea but fail to pick up the child, you will have no chance of saving the child's life, and your health and life will be threatened. Thus, because entering the sea might not lead to picking up the child, your entering the sea does not have the same consequences as picking up the drowning child.11 So, even though you ought to pick up the drowning child, you ought not to enter the sea. You ought to perform an action, but you ought not to perform a part or prerequisite of that action.
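A minimal numerical sketch of this gap follows. Only the structure of the case comes from the example (a 90% chance of saving the child given a successful pick-up, no chance otherwise, safety virtually guaranteed by success, danger after a failed attempt); the utility figures and the completion probability are illustrative assumptions of mine.

# Sketch of the expected-value gap between the whole action and its prerequisite.
# The utilities and the completion probability are assumed, not from the paper.

P_SAVE_GIVEN_PICKUP = 0.9      # stated in the example
U_CHILD_SAVED = 100.0          # assumed utility of saving the child
U_CHILD_LOST = 0.0             # assumed utility if the child drowns
U_RESCUER_HARM = -30.0         # assumed cost of the rescuer's being endangered

def value_of_picking_up():
    # Consequences of the whole action: success at picking up virtually
    # guarantees the rescuer's safety, so only the child's fate is uncertain.
    return P_SAVE_GIVEN_PICKUP * U_CHILD_SAVED + (1 - P_SAVE_GIVEN_PICKUP) * U_CHILD_LOST

def value_of_entering_sea(p_complete):
    # Consequences of the prerequisite: with probability p_complete the agent
    # goes on to pick up the child; otherwise the child is lost and the
    # rescuer is endangered.
    return (p_complete * value_of_picking_up()
            + (1 - p_complete) * (U_CHILD_LOST + U_RESCUER_HARM))

print(value_of_picking_up())       # 90.0
print(value_of_entering_sea(0.6))  # about 42.0: far worse than the whole action

On these assumed numbers the prerequisite is worth much less than the whole action, which is all the argument needs: the two evaluands can come apart in value.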

I will consider several possible responses below.

2. Solution (1): the Action You Might Fail to Perform Is Not Your Option

One initially tempting response to the above problem is to claim that the action you might fail to perform is not your option. In the "Drowning-Child" example, picking up the drowning child is something you might fail to do, so it is not your option. Thus, whatever the consequences, it is not the case that you ought to pick up the child.


If we elaborate this view, it will be something like this: "We can distinguish the actions whose performance the agent's will guarantees from the actions whose performance requires the cooperation of the world, external and internal to the agent. Sure, we often talk as if many actions (for example, picking up a drowning child) belong to the former category, but our considered view is that such an action actually does not belong to that category, because we sometimes fail to perform the action even if we intend to do so. Rather, the more basic actions, e.g., entering the water, are the actions whose performance the agent's will guarantees, regardless of how the world works. Normative theories, Utilitarianism in particular, should regard only these actions as options. There is nothing wrong in holding that only some actions are really options and in determining what to do given the consequences of successfully performing these actions."12


According to this view, it is not the case that you ought to pick up a drowning child, or that you ought to perform anything that one might fail to perform even if one intends to do so. But if we can still say that you ought to enter the sea (when you can surely save the child), perhaps this result is bearable. In fact, however, even these actions are not insulated from the influence of the world. You, the agent, might stumble over a slippery rock and fail to enter the water. You might be so weak-willed or afraid that you fail to put in the effort necessary even for trying to enter the water.13 If we take as options only the actions whose performance the agent's will guarantees, we wind up restricting options to the decisions or intentions of an agent.


And there are several reasons why we do not want to limit options to the decisions of an agent. First, intuitively, we ought to perform many actions (for example, picking up a drowning child) which are not themselves intentions. However, on the current view, besides making certain decisions, there is no action one ought to perform. Second, the current move is an attempt to change the subject of deliberation in a way that cannot succeed in practice. In ordinary conversations and deliberations, people consider what one ought to do, not what one ought to intend. This habit will not change so easily. And even if we stop telling people what they ought to do, others will still tell us, including you, what we ought to do, so it is practically important to have a proper way of assessing claims about what one ought to do. Third, there is a reason for the practice of saying what one ought to do rather than what one ought to intend. We can observe people's actions but not the intentions behind them (or at least, their actions come far closer to our observation).14 Thus, we can generally check whether people comply with what they ought to do, but not directly whether they comply with what they ought to intend. The possibility of such a check, and of appraisal based on it, is an important merit of having a normative discourse. Thus, people have a reason to resist entirely replacing discourse about what one ought to do with discourse about what one ought to intend.

In addition to these problems, this solution does not completely avoid the problem of evaluative conflict between actions. Remember, one of the reasons that an action and its part or prerequisite have different consequences is that, even if the agent intends to perform the part or prerequisite, she might not have the intention to take the further steps needed to carry through the whole action. Even if we limit the options for an agent to her possible intentions, this source of the problem does not vanish. Suppose that your intending to send a check to Oxfam has the best consequences. Intending to write a check is a part or prerequisite of having that intention. Now, if you do intend to write a check, will you necessarily intend to finish sending a check to Oxfam? You might not. So, given this risk, your intending to write a check and your intending to send a check to Oxfam can have different consequences. Evaluative conflict between actions appears again.

3. Solution (2): the Deontic Status of Action Is Determined by Its Parts and Prerequisites

Another possible solution is to make the deontic status of an action depend on its parts and prerequisites. Even if an action has the best consequences, it is not the right thing to do if some of its prerequisites or parts do not have the best consequences. In the "Drowning-Child" case, because your entering the sea does not have the best consequences, you ought not to perform that action. According to this view, then, you ought not to pick up the child either, since doing so has entering the sea as a part or prerequisite.

While this view precludes evaluative conflict, it has a huge cost. Whenever the optimal action has a prerequisite or part that does not have the best consequences, there is no right action to perform. And the optimal action will usually have such a prerequisite or part. As we saw above, actions are generally something the agent might fail to perform. This creates a gap between the consequences of an action and the consequences of some of its parts or prerequisites. So, in general, even if the consequences of an action are the best, the consequences of some part or prerequisite will not be; according to the above view, this implies that there will be no right action to choose in most contexts.

4. Solution (3): the Deontic Status of Action Determines the Status of its Parts and Prerequisites


You are perhaps tempted to reverse the direction of determination and hold that the deontic status of an action determines the status of its parts and prerequisites. If an action has the best consequences, you ought to perform not only that action but also its parts and prerequisites.15


This precludes evaluative conflict between actions, and I think this solution is a step toward the right one. However, as it stands, it has a counter-intuitive consequence. Suppose, in the "Drowning-Child" case, the probability of successfully picking up the child is only 5%. Further suppose that you will almost certainly die if you enter the sea but fail to pick up the child. Still, you can pick up the child, and the consequences of successfully doing so are the best. According to the above view, this makes it the case that you ought not only to pick up the child but also to enter the sea, perhaps fully realizing that both you and the child will almost surely die. However, even if we might still be somehow tempted to say that you ought to pick up the child, we would really hesitate to claim that you ought to enter the sea. As this example illustrates, the above view often requires you to perform a part or prerequisite of the optimal action even though doing so would have awful consequences overall.16
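The point can be made numerically with hypothetical utilities. Only the 5% success rate and the near-certain deaths after a failed entry come from the example; the figures 100 and -200 are illustrative assumptions of mine:

\[
E[\text{entering the sea}] \;\approx\; 0.05 \times 100 \;+\; 0.95 \times (-200) \;=\; -185,
\]

whereas the consequences by which solution (3) assesses the whole action are those of the successful rescue, valued here at 100. Solution (3) therefore still directs you to enter the sea.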

5. Proposal: Taking into Account the Risk of Failure

To address the problem with solution (3), we need to take into account not only the consequences of an action but also the risk of failing to perform it. If we determine what we ought to do just by the consequences actions have, we might end up recommending a part or prerequisite of an action even though performing it would have awful consequences. In addition, even if an action itself has the best consequences, we might not want to recommend it if the agent does not have much chance of succeeding in carrying it out. So we need to take into account the risk of failing to perform an action. Once we realize that whether or not you will manage to perform an action is a matter of probability, this might seem a natural step to take.

However, it will not do to stipulate a qualitative distinction between options and non-options somewhere on the continuum of probability. What I am warning against is the following line of thought. Certain things, such as lifting your right arm, you usually succeed in doing if you so intend; they are taken to be options. Other things, such as winning a chess match against the world champion, you often fail to do even if you intend to do them; they are taken to be non-options. Because, in the above example, the probability of successfully picking up the child is only 5%, it is not an option, so it is not the case that we ought to pick up the child, or to enter the sea, for that matter; the counter-intuitive consequences are avoided. This view faces an obvious problem. There is no agreement on where to put the line on the continuum, and people draw it arbitrarily. In fact, because the difference between "what you will manage to do" and "what you will not manage to do" is really a matter of probability, it is impossible to avoid arbitrariness in drawing the line.


Further, this arbitrariness is a vicious one. Drawing a line runs the risk of ignoring important probabilistic closeness. Suppose you decide to count actions with a successful performance rate of at least 50% as options and all other actions as non-options. If going on a diet has a 50% successful performance rate (if its performance has a 50% chance), it will be an option, but if it has a 49.9% successful performance rate, it will not. Suppose you have estimated that going on a diet has a 50% successful performance rate. It has consequences better than those of any alternative option, so it seems that you ought to perform the action. Now, upon further investigation, it turns out that the successful performance rate is 49.9% rather than 50%. Should you abruptly quit going on a diet because it turns out that going on a diet is not an option? This reaction seems unreasonable, but if the line between options and non-options is drawn in the above way, this is what you ought to do. Notice that this problem of ignoring probabilistic closeness emerges wherever you draw a line on the continuum. For example, if someone picks a 90% successful performance rate as the border, you can reasonably ask whether you really ought to refrain from performing an action with an 89.9% success rate whatever the consequences of its successful performance would be.17
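A small sketch of the resulting discontinuity, under the assumption (mine, for illustration) that a threshold rule simply classifies actions by their estimated success rate:

def is_option(success_rate, threshold=0.5):
    # A qualitative cut-off between options and non-options.
    return success_rate >= threshold

for rate in (0.500, 0.499):
    status = "an option" if is_option(rate) else "not an option"
    print(f"success rate {rate:.3f}: {status}")
# success rate 0.500: an option      -> the diet can still be what you ought to do
# success rate 0.499: not an option  -> the diet drops out of deliberation entirely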


Another problem with the current view is that it focuses on the probability of successful performance and ignores how serious the cost of failure is. Suppose two actions, say climbing a steep cliff and hitting a golf ball with a driver, have the same successful performance rate and, when successfully performed, even have the same consequences, say the same amount of enjoyment. Then, according to the above account, both or neither are commendable. However, because the cost of failing to climb a steep cliff is far more serious than the cost of failing to hit a golf ball with a driver, intuitively it can be the case that one ought to perform the one (hitting a golf ball with a driver) but not the other (climbing a steep cliff).18


Considering this, we need to take into account not only the probability of failure but also its costs (and benefits). How should we do so? This is not so easy. Utilitarians19 traditionally think that the right thing to do is determined by the overall goodness of the consequences of an action, G(C of A), times the probability of those consequences given the action, P(C/A): that is, G(C of A) × P(C/A). To take into account the risk of failing to perform the action, we might multiply this by P(A/I), the probability of the action given a certain intention, typically the intention to perform the action. However, G(C of A) × P(C/A) × P(A/I), or even G(C of I & A) × P(C/A) × P(A/I), does not take into account the consequences of failing to do A given I, that is, G(C of not-A given I).
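Written out as a single expectation, the candidate formula and the term it leaves out can be displayed as follows; the layout and the final summand are my way of exhibiting the omission, not the paper's own proposal:

\[
\underbrace{G(C \text{ of } A) \times P(C/A) \times P(A/I)}_{\text{success branch: what the candidate formula counts}}
\;+\;
\underbrace{G(C \text{ of } \neg A \text{ given } I) \times P(\neg A/I)}_{\text{failure branch: what it omits}}
\]

Considering the consequences of the intention itself, as the next paragraph does, is one way of bringing both branches into view at once.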


One possible way to take the consequences of failure into account is to make the deontic status of an action depend on the consequences of the intention to perform it. According to this view, in the "Drowning-Child" example, the rightness or wrongness of picking up the child depends on the consequences of intending to pick up the child. Because these consequences include those of failing to pick up the child as well as those of succeeding, this view takes the consequences of failure into account. However, as Kavka's toxin puzzle illustrates,20 the consequences of an intention might be great while the intended action has awful consequences. Intending to drink the toxin might bring you lots of money and have the best consequences (given that you manage not to drink it), but actually drinking the toxin will make you violently ill, which is a really bad consequence. And in such cases, many of us are hesitant to claim that we ought to perform the intended action, i.e., drinking the toxin.


Here, then, is the basic idea of my two-step proposal, whose details must be explored on another occasion. Consider which intention, among all intentions, has the best consequences. Further, consider whether the intended action has consequences better than, or at least as good as, those of any other action (where the set of actions does not include intentions). If the answer is affirmative, then the intended action is the right thing to do, and every part and prerequisite of that action is also a thing to do (even if it, by itself, does not have the best consequences).21


Now, in some cases the answer is negative, that is, the intended action does not have the best consequences. We do not want to say that there is nothing for the agent to do. So I say that, in these cases, the right thing to do is having that very intention, along with every part and prerequisite of having it.22 All other actions are not required; they are not even permitted. Rather, they are not actions about which, at the time of evaluation, one can say whether they are to be done.
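As a rough sketch of the two-step procedure (the function names, the inputs, and the value function are assumptions of mine; ties among best intentions or best actions, handled in notes 21 and 22, are left out):

def things_to_do(intentions, actions, value, intended_action, parts_of):
    """A sketch of the two-step proposal.

    intentions         -- candidate intentions
    actions            -- candidate actions (intentions are not included here)
    value(x)           -- net value of the consequences of x
    intended_action(i) -- the action that intention i is an intention to perform
    parts_of(x)        -- the parts and prerequisites of x
    """
    # Step 1: find the intention with the best consequences.
    best_intention = max(intentions, key=value)
    act = intended_action(best_intention)

    # Step 2: is the intended action at least as good as every other action?
    if all(value(act) >= value(a) for a in actions):
        # The intended action and all its parts and prerequisites are the
        # things to do, even if some part, by itself, is not optimal.
        return {act, *parts_of(act)}

    # Otherwise, the thing to do is having that very intention, together with
    # every part and prerequisite of having it; other actions are, at the time
    # of evaluation, not assessed as to be done or not.
    return {best_intention, *parts_of(best_intention)}

Whatever the procedure selects passes its deontic status down to its parts and prerequisites, which is what blocks the evaluative conflict.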

This solves the evaluative conflict between actions without calling actions with awful consequences "the things to do." If it is highly probable that you will die in vain by entering the sea to save the drowning child, you will not be required to save the child, or to enter the sea. The traditional utilitarian account determines the deontic status of each action in terms of whether it has the best consequences. This allows evaluative conflict between actions to arise, for actions and their parts or prerequisites can have different consequences. My two-step proposal makes the deontic status of an action (or intention) determine the deontic status of its parts and prerequisites. This is basically a version of solution (3). However, as we saw in examining solution (3), if the action with the best consequences is simply the right thing to do and makes its parts and prerequisites the things to do, we might be required to perform parts or prerequisites that have no good consequences. So the current proposal takes into account the probability and the disadvantage of failure by considering the consequences of intending actions, which include the consequences of so intending but failing to carry the action through. In this way, we are free from evaluative conflict and from unreasonable requirements.

Concluding Remarks

This paper has reviewed and analyzed the problem, for Utilitarianism, also known as Consequentialism, of evaluative conflict between actions. People might fail to complete the whole action while finishing off its prerequisite(s) or part(s), and then the former and the latter might have different consequences. As a result, the common understanding of Utilitarianism violates the distributivity of obligatoriness over logical conjunction. To avoid this unattractive result, I considered three alternatives: (1) the action you might fail to perform is not an option, i.e., not something one can judge whether to perform; (2) the deontic status of an action is determined by its parts and prerequisites; and (3) the deontic status of an action determines the status of its parts and prerequisites. After pointing out the problems with each response, I presented a fix for response (3) as the best solution. According to (3), the action with the best consequences is the right thing to do and makes its parts and prerequisites the things to do; but then we might be required to perform parts or prerequisites that have no good consequences. So my proposal takes into account the probability and the disadvantage of failure by considering the consequences of intending actions, which include the consequences of so intending but failing to carry the action through. In this way, Utilitarians can not only maintain the distributivity of obligatoriness over logical conjunction but also remain free from unreasonable requirements.


Bibliography

Bergström, L., The Alternatives and Consequences of Actions. Stockholm: Almqvist & Wiksell, 1966

Castaneda, Hector-Neri, “A Problem for Utilitarianism.” Analysis 28(4), 1968, pp.141-142

Carlson, Erik, “Utilitarianism, Alternatives, and Actualism.” Philosophical Studies 96, 1999, pp.253-268

Feldman, Fred, Doing the Best We Can: An Essay in Informal Deontic Logic, Dordrecht and Boston: Reidel Publishing Company, 1986

Jackson, Frank and Pargetter, Robert, “Oughts, Options, and Actualism.” The Philosophical Review 95(2), 1986, pp.233-255

Kavka, Gregory S., “The Toxin Puzzle”, Analysis 43(1), 1983, pp.33-36

Louise, Jennie, “Right Motive, Wrong Action: Direct Utilitarianism and Evaluative Conflict.” Ethical Theory and Moral Practice 9, 2006, pp.65-85

McNamara, Paul, “Deontic Logic”, The Stanford Encyclopedia of Philosophy (Fall 2010 Edition), Edward N. Zalta (ed.), URL: <http://plato.stanford.edu/archives/fall2010/entries/logic-deontic/>

Vessel, Jean-Paul, “Defending a Possibilist Insight in Consequentialist Thought.” Philosophical Studies 142, 2009, pp.183-195

Zimmerman, Michael J., “The Relevance of Risk to Wrongdoing”, Chapter 9 (pp.152-170) of The Good, the Right, Life and Death: Essays in Honor of Fred Feldman, McDaniel, K. et al. (eds.), Aldershot: Ashgate, 2006


Notes

1  In this paper I will use “utilitarianism” and “consequentialism” interchangeably. Some people define utilitarianism more narrowly, making it imply a particular theory of value, i.e., welfarism: the view that the only thing that ultimately matters is the well-being of individuals. This narrower notion of Utilitarianism still faces the same problem that this paper discusses, because the problem arises from the basic structure of Utilitarianism/Consequentialism, and welfarism does not solve or alleviate this problem.

2  As this characterization of Utilitarianism/Consequentialism indicates, this paper focuses on direct and act Utilitarianism, and not on indirect Utilitarianism, for example, rule Utilitarianism and motive Utilitarianism. Further, by making best consequences a necessary condition for rightness, I exclude Satisficing Utilitarianism from the ensuing considerations. This is not because I am sure that these views escape the problem (of evaluative conflict) this paper addresses, but because the space of this paper is limited and I take maximizing act/direct Utilitarianism to be the most promising.

3  Louise, Jennie, “Right Motive, Wrong Action: Direct Utilitarianism and Evaluative Conflict.” Ethical Theory and Moral Practice 9, 2006, pp.65-85.

4  Bergström, L., The Alternatives and Consequences of Actions. Stockholm: Almqvist & Wiksell, 1966; Castaneda, Hector-Neri, “A Problem for Utilitarianism.” Analysis 28(4), 1968, pp.141-142.

5  Some parts or prerequisites of an action might not be actions. So strictly speaking, I should say that when the part or prerequisite of an action in question is also an action, there will be evaluative conflict between actions. For simplicity’s sake, I will hereafter ignore this complexity.

6  Feldman, Fred, Doing the Best We Can: An Essay in Informal Deontic Logic, Dordrecht and Boston: Reidel Publishing Company, 1986, pp.5-7.

7 Nonetheless, some people accept this result. See, for example, Jackson, Frank and Pargetter, Robert, “Oughts, Options, and Actualism.” The Philosophical Review 95(2), 1986, pp.233-255, esp. section 6, and Carlson, Erik, “Utilitarianism, Alternatives, and Actualism.” Philosophical Studies 96, 1999, pp.255-256.

8  See Theorem (OB-M) in McNamara, Paul, “Deontic Logic”, The Stanford Encyclopedia of Philosophy (Fall 2010 Edition), Edward N. Zalta (ed.), URL: <http://plato.stanford.edu/archives/fall2010/entries/logic-deontic/>, section 2.1.

9  A similar possible way out is to deny that an action and its parts (or prerequisites) can be alternatives to each other: that is, to deny that they belong to the same set of options whose consequences are compared with each other. Bergström demands that alternatives be pairwise incompatible so as to get this result (The Alternatives and Consequences of Actions, p.29). As Carlson (“Utilitarianism, Alternatives, and Actualism”, pp.254-255) points out, this requirement implies that there are usually many different sets of jointly exhaustive alternatives for a given agent in a given situation. Applying Utilitarianism to different sets, for the same agent in the same situation, can yield mutually incompatible prescriptions. Setting a proper criterion for selecting the set to which Utilitarianism should be applied is difficult (see Bergström, The Alternatives and Consequences of Actions, chapter 2).

10  Another possible concern is that certain defective agents will avoid the obligations that normal agents incur. Suppose that P & Q has the best consequences though P has much worse outcomes if Q is not performed. Normal agents can perform the pair, but, because of their laziness or viciousness, some agents will definitely fail to perform Q if they perform P (Jackson and Pargetter’s famous example of Prof. Procrastinate (ibid.) can be interpreted this way). According to the above account, P & Q is not an option for these defective agents and hence not obligatory for them, even though it is for normal agents. This implication might seem counterintuitive. However, on reflection, this is not really a big problem, I think. After all, by hypothesis, these defective agents will definitely fail to perform P & Q (the probability of their performance is absolutely zero), so what is the point of obliging them to perform P & Q?

11  In making this statement, I suppose that objective probable consequence or expected consequence is the relevant sense of consequence. On such a view, actions, the optimal action and its part or prerequisite in particular, have different consequences whenever they might (objectively or expectedly) lead to different results. If you take actual consequence to be the relevant sense of consequence, then, so long as the action and its part or prerequisite actually lead to the same results, they have the same consequences, and the above statement is not quite right. However, because what would actually happen if the action were performed and what would actually happen if its part or prerequisite were performed can differ, the action and its part or prerequisite can still have different consequences. In the above example, if what would actually happen if you picked up the child differs from what would happen if you entered the sea, they have different actual consequences, which leads to evaluative conflict. So whatever account of relevant consequences you accept, the problem of evaluative conflict between actions remains.

12  Zimmerman’s account includes the proposal to the effect that options be limited to what the agent can intentionally perform (Zimmerman, Michael J., “The Relevance of Risk to Wrongdoing”, Chapter 9 (pp.152-170) of The Good, the Right, Life and Death: Essays in Honor of Fred Feldman, McDaniel, K. et al. (eds.), Aldershot: Ashgate, 2006, p.162f). Can one intentionally pick up the drowning child? If she can, then this proposal itself does not eliminate evaluative conflict between actions. The more basic problem is that, because the success or failure of (intentional) performance is a matter of higher or lower probability, it is unreasonable to distinguish qualitatively between what one can intentionally perform and what one cannot (see section five on this point). I will discuss the other, more basic aspect of Zimmerman’s account in section four.

13  I take trying to do A to be more than intending to do A. That is why you might fail to try to do A while intending to do A. So limiting options to trying still leaves the possibility of performance failure and hence the problem of evaluative conflict for Utilitarianism. If trying to do A is identical with intending to do A, you cannot fail to try to do A while intending to do A. However, below I present objections to limiting options to the decisions or intentions of an agent, which will then be applicable to limiting options to trying, too.

14  Don Hubin points out to me that we cannot observe an action when the action involves a certain intention. For example, it seems that lying entails a certain intention ―something like the intention to make the addressee believe what the utterer takes to be false. Because we cannot observe such an intention directly, we cannot directly observe the whole action of lying.

15  You can get the same result if you adopt possibilism, the view that the deontic status of an action depends not on whether it would lead to the best consequences but on whether it could do so (possibilists include, for example, Feldman and Zimmerman). This is because, when an action could lead to the best consequences, its part or prerequisite could also lead to the same consequences (so long as the whole action is possible).

16  Possibilism has the same problem, too. In the above situation, not only could picking up the child lead to the best consequences, but entering the sea could do so as well. Thus, you ought to enter the sea, perhaps fully realizing that both you and the child will almost surely die. Jean-Paul Vessel recognizes this problem and suggests an alternative to (pure) possibilism, but that proposal addresses only the cases where the agent could act optimally but will act suboptimally through his own fault, and it requires a nonstandard construal of the relevant counterfactuals, which appears ad hoc. See Vessel, Jean-Paul, “Defending a Possibilist Insight in Consequentialist Thought.” Philosophical Studies 142, 2009, pp.192-194.

17  I am not criticizing setting an arbitrary threshold for the successful performance rate as part of a decision procedure. Suppose, for example, a commander finds out that the enemy is about to make a surprise attack. Then it might be a good idea for him to build such a threshold into decision making so that the army quickly takes some action, perhaps not the best but a decent one, to be prepared. I am only criticizing the theory of justification (the theory that tells us what makes a deontic statement correct) that posits an arbitrary threshold for successful performance. For the reasons stated in the text, it is implausible to think that such a threshold partly determines what one ought to do.

18  Arbitrariness and ignoring the cost of failure are problems even for other more sophisticated proposals. For example, consider the proposal, inspired by Justin D’Arms, that the threshold of successful performance becomes lower as the consequences of the action in question become more important. There is no non-arbitrary way to determine the rate of the inverse proportion. Further, this proposal still fails to count the risk and cost of failing to perform the action.

19  I have in mind those who take relevant consequences to be objective probable consequences or expected consequences.

20  See Kavka, Gregory S., “The Toxin Puzzle”, Analysis 43(1), 1983, pp.33-36. The situation is as follows. There is a toxin which, upon consumption, will make you violently ill for a few hours. You will receive a sizeable sum of money if you are able to intend, at present, to drink this toxin tomorrow. If the intention is truly and successfully formed, you will be awarded the money without ever having drunk the toxin. Whether you fulfil your original intention will not change your reward.

21  If two or more pairs of action and intention satisfy these conditions, all of those actions (as well as their parts and prerequisites) are merely right, not obligatory. If only one pair satisfies these conditions, the action is obligatory, as are its parts and prerequisites.

22  Again, if two or more intentions have the best consequences, all of them (as well as their parts and prerequisites) are merely right, not obligatory.


How to cite this article

Electronic reference

Makoto Suzuki, « Utilitarianism and Evaluative Conflict between Actions », Revue d’études benthamiennes [Online], 9 | 2011, online since 15 September 2011, accessed 14 October 2024. URL: http://journals.openedition.org/etudes-benthamiennes/438; DOI: https://doi.org/10.4000/etudes-benthamiennes.438


Author

Makoto Suzuki

Research Fellow, Nanzan University Institute for Social Ethics


Copyright

The text and other elements (illustrations, imported annex files) are “All rights reserved”, unless otherwise stated.
