Utilitarianism can be traced, according to some1, to Plato’s Protagoras. It is a moral theory based upon the attainment of a specific end, normally happiness, pleasure or the absence of pain. Utilitarianism is based upon a Utility Principle, which defines what makes an action right or wrong.

J. S. Mill in his essay “Utilitarianism” sets up the principle of utility thus: “Utility…holds that actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness”2. In this statement he is explicitly referring not just to the happiness of the individual actor, but to the aggregate sum of happiness.

Immediately, it seems reasonable to ask what motivates this principle as the basis of our moral code. Mill points out that “questions of ultimate ends do not admit of proof”3. Happiness is uncontroversially admitted as a good, desirable state. As all mankind desires happiness, the best way of achieving it is for everyone to strive to maximise happiness wherever possible.

How would this principle be applied to decision-making? Most actions will have an effect on global4 happiness. Say an action A will increase global happiness by 50 units, B will increase it by 100 units, C will have no effect, and D will decrease happiness by 10 units. Action B is the best, the most right, action to perform. While A would not be wrong, it would be less right than B, but still more right than C or D.

For simplicity’s sake, we will consider the ‘most right’ action to be the right action in any given situation. The formulation of our utility principle will be something like this:

UP: An action is right just in case it is the action that causes the largest possible increase in global happiness from those actions available.
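The UP, applied to the earlier example, amounts to a simple maximisation over the available actions. As a sketch only (the action names and happiness values are the hypothetical ones from the example above, and the “units of happiness” are of course exactly what the interpersonal-comparison objection below calls into question):

```python
# Hypothetical effects of each available action on global happiness,
# taken from the worked example above.
effects = {"A": 50, "B": 100, "C": 0, "D": -10}

def right_action(effects):
    # UP: the right action is whichever available action causes
    # the largest possible increase in global happiness.
    return max(effects, key=effects.get)

print(right_action(effects))  # B
```

Note that on this formulation B is right and A, C and D are all wrong, which is why the essay adopts the ‘most right’ simplification.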

A brief interjection is necessary here to explain what is meant by happiness. Mill sees happiness, in this sense, as “pleasure, and the absence of pain”5, so both causing pleasure and limiting pain are the ‘right’ things to do.

A major problem with the Utility Principle is that its application is only possible if states of pleasure and pain can be compared from one person to another. Without entering into a full discussion of the merits of various theories of mind, it seems at least possible that pleasure and pain are fundamentally non-comparable between different people. While it is certainly coherent to say that my left hand hurts more than my right hand, it may not mean anything to claim that my broken arm hurts more than your broken leg. If this is true, the Utility Principle would be incoherent.

If it is granted that one person can be experiencing more pleasure (or pain) than another, the next problem to raise its head is the epistemological difficulty the calculation brings with it. How can someone possibly determine the relative pleasures gained from, say, playing loud music for a party of ten people on a Saturday night?

This objection does not render the UP useless. It does, however, limit its effective execution when faced with a difficult case. Faced with the choice of putting out my cigar in an ashtray or on my hand, ceteris paribus, UP tells me to avoid my own pain. If the choice is between someone else’s hand and an ashtray, UP again suggests the ashtray. If my only available stubbing options, however, were my hand or someone else’s hand, it would be much more of a problem. I would have to assess which of us would feel most pain at this action, which would be practically impossible, although I may decide that the pain of guilt from hurting someone else would swing it, and burn myself. If instead of myself I had to choose between two other people to burn, UP is even less useful.

But perhaps this is a very minor problem. In the above case, our own intuitions about what is the right action are unclear too. The UP seems to be generating the answers that we apply in everyday life, including the unclear cases.

There are, however, cases where the UP seems to generate answers that intuition would say are wrong. The most obvious example is murder. There seems to be nothing wrong with murdering unhappy people, provided it upsets nobody else; in fact, it may be a moral requirement to murder those homeless people who are unhappy and have no family, so would not be missed. This certainly seems problematic.

Other cases are easy to generate. For example, public torture for amusement, like the Roman practice of feeding people to beasts in the circus, would also be a moral obligation, provided the spectators’ pleasure outweighed the victims’ pain.

The Utilitarian may say at this point that the principle would be of little use unless it sometimes generated unexpected results; that the above cases really are happiness-maximising situations, and the verdicts really are true. It is our expectations that are wrong. But allowing these cases into our moral code seems so counter-intuitive that the objection still seems valid.

There are deeper epistemological problems, relating to the formulation of the UP. As it stands, the above UP only refers to what sorts of actions are right, rather than how people should act. Every action has a multitude of consequences heading out into the future, and it is therefore impossible to determine the amount of happiness an action will cause indirectly for thousands of years. In other words, there is no way of knowing how much happiness an action caused until the end of time. At that point, it would then be necessary to evaluate all the counterfactual actions that could have been performed and determine whether they would have produced more happiness. This is epistemologically impossible, so the moralist will never know what to do to maximise happiness.

It may be possible to develop a case where the right thing to do can be definitely determined; for example, if a person Z has two choices:

1) extinguish all life by releasing a flesh-eating virus that guarantees a painful death for everyone

2) extinguish all life by releasing a pleasure virus that guarantees an orgasm and then sudden painless death for everyone

Then action 2 appears to be the happiness-maximising action. But this is a special case because both actions are ‘endgame’ scenarios, where after the action and its known consequences, there can be no more happiness-affecting events. If action 2, instead of releasing a pleasure virus, was

2) have a beer

Then it is by no means clear which action is happiness-maximising. Maybe if Z has his beer, then in ten years’ time someone will release a more painful flesh-eater that wipes out the entire (and larger) population in a worse way. So for every real case, the epistemological problem remains.

A Utilitarian response to both the murder and torture examples and the epistemological problem is Rule Utilitarianism (RU), as opposed to the Act-Utilitarianism (AU) we have been discussing. RU is sometimes seen as the older of the two6, and requires a Rule Utility Principle:

RUP: An action is right just in case it conforms to a set of rules which, universally followed, will tend to generate more happiness than any other set of rules.

This principle will tell us not to murder tramps, because if nobody murders, that is happiness-maximising. It will say similar things about torture, prosecuting the innocent, keeping promises, and so on. It also means that the utilitarian does not have to sit in confusion trying in vain to find the action that is certainly right; rather, he needs only to learn the rules.

But if an action appears to be happiness-maximising (like tramp-killing or public torture), then the rule preventing these things is working against the general principle of Utility. J. J. C. Smart notes that in these cases, “to refuse to break a generally beneficial rule in those cases in which it is not most beneficial to obey it seems irrational and to be a case of rule-worship”7.

Stronger still is the problem of divisions: how fine-grained should the rules be? If there is a specific rule for every situation, then our RU has collapsed back into AU. Smart suggests that the only plausible rule in an RU system is “maximise probable benefit”8, the basic Act-Utilitarian position.

Rule Utilitarianism also fails to rescue Utilitarianism from its epistemological barriers. In order to formulate any kind of general rule about happiness-maximisation, the same consequence and counterfactual assessments will need to be made as those made by the Act-Utilitarian.

One further claim against utilitarianism is that it appears to fail to account for responsibility. If, when trying to cause large-scale unhappiness (like adding cholera to London’s water supply), a person accidentally causes happiness (like adding Vitamin C to the water by mistake), then she has done the right thing as much as someone who set out to protect London from scurvy. Both would be equally praiseworthy, as their actions were identical.

On closer inspection this objection may be misplaced. All the UP tells us is what the right action is. What it does not include is the added premise “people should perform the right action”. Perhaps the action-premise should be “people should try to perform the right action”, or “people should do the thing that appears to them to be the right action”. This would allow blame and praise to be attached to people, and puts the emphasis on the effort to conform to the UP, rather than on merely conforming by chance or accident.

Now, this admission creates a new principle above the Utility Principle; it sets intent as more important than results in evaluating a person’s actions. Arguably, this new combination is incoherent, or at least totally uninformative; it is equivalent to saying:

“The right thing for a person to do is to try to do the right thing.”

Of course, the problem of praise and blame is not unique to utilitarianism; it is equally valid in any ends-based morality. It is not a fatal blow to any of these theories, merely a counter-intuitive observation. Rather than biting the bullet, it seems best to side-step the whole issue of intent.

Posing now the question “Is it reasonable to settle moral questions by reference to the Utility Principle?”, two assumptions that underlie the principle itself need to be made clear. First, that happiness is the ultimate goal of humanity. Second, that it is metaphysically possible to compare pleasure and pain states between different people.

Granting those two assumptions, there are the major epistemological problems that come with any practical application, which are so massive as to render any attempt to find the right action no more than a random guess. And even ignoring these problems, there are still the uncomfortable counter-cases sanctioning murder of unhappy people, and anything that enough people enjoy.

A Rule-based Utility Principle can skip the epistemological problems, but they are merely shifted from the moral actor to the rule-maker. And the rules of the RUP can ban murder etc. on the grounds that such actions are normally happiness-diminishing. But if the purpose of the system is to maximise happiness, it seems ludicrous to follow a rule that prevents just that. And there is the tendency for rule systems that are too general or too specific to collapse back into simple Act-Utilitarianism.

So it appears to be difficult to say that the Utility Principle is a reasonable way to settle moral questions. Even if it is reasonable to use any form of UP, it is certainly impractical and – in all but the most bizarre cases – impossible.


1 Mill, J. S., “Utilitarianism”, p. 1
2 ibid., p. 6
3 ibid., p. 32
4 In the sense of total, rather than anything to do with the Planet Earth
5 Mill, J. S., “Utilitarianism”, p. 6
6 Brandt, R., “The Real and Alleged Problems of Utilitarianism”, p. 373
7 Smart, J. J. C., excerpt from Utilitarianism: For and Against, under the title “Act-Utilitarianism and Rule-Utilitarianism”, p. 371
8 ibid., p. 372

Written for my philosophy degree, this essay was my first "First" mark.