The principle of utilitarianism, succinctly summarised, is that one
should always aim to bring about the
greatest happiness for the
greatest number, with the minimum amount of pain or general
unpleasantness. This is the theory argued by
Socrates in Plato's dialogue
'Protagoras', and defended by
John Stuart Mill in his essay
'Utilitarianism', as the ultimate moral principle.
I
shall be examining the problems with utilitarianism, and questioning
whether Mill, in spite of his spirited defence of the principle,
genuinely did 'regard utility as the ultimate appeal on all ethical
questions'.
So. Utilitarianism: great moral theory or bunch of arse?
I shall be arguing for the latter.
The
basic problems with utilitarianism: Is happiness really the only good?
How can we weigh one person's happiness against another? And how does
killing fit into all this?
So how can we weigh one
person's happiness against another? What if there's someone who is
capable of feeling really, really, really intense happiness,
just by kicking small children, not even very hard? It would seem that
utilitarianism would support his right to kick such children, since the
amount of displeasure he causes is easily outweighed by his own
happiness. Is that right?
Let us imagine a machine which
stimulates the 'pleasure centre' of your brain, producing a feeling of
constant mind-blowing pleasure, and removing the need for exertion of
any sort by feeding a person through drips. If such a machine is in
fact possible then, in pursuit of the greatest-happiness principle,
oughtn't we attach everyone to it, forcibly if necessary? It might be
contended that even if there was a machine which stimulated the
pleasure centre of our brains, it could never bring about 'true
happiness'. Here we are making a distinction between pleasure and
'true happiness'. What are the grounds for such a distinction?
Boringness seems a prime contender: after a while an experience
approximate to a permanent non-climaxing orgasm encompassing one's
whole body could get boring. Perhaps pleasure just stops being so enjoyable after a
while. But eliminating boredom might not be such a difficult task. It
seems quite probable that with sufficient dulling of one's
consciousness, using the right drugs and/or surgical interventions, it
would be possible to eliminate boredom entirely, in much the same way
a goldfish does, and keep one in a state of tranquil bliss. Is
it possible for a mind in such a dulled state to be 'truly happy'? And
if not, then why not, and what gives us the right to place the
happiness of the goldfish beneath our own?
It
seems already that formulating the principle solely in terms of pleasure
and pain is unsatisfactory, unless we allow that a goldfish, a
rabbit, a human, or a whale each has as much entitlement to its own
pleasure as the next. If we talk of "higher pleasures", then what are
those pleasures and who says what constitutes a "higher pleasure"? Is
it not rather presumptuous, even arrogant, to talk of one pleasure as
"higher" than any other?
Rather than straying into the
murky territory of "higher pleasures", it is perhaps preferable to
admit the existence of values other than happiness... but what the fuck??? Freedom? Intelligence? Should we use some kind of cross-multiplication system -
value = happiness × intelligence? Or what?
If
we could control people's desires, dictate what makes them happy in
order that we can be sure to make them so, this would plainly serve the
purpose of utilitarianism; perhaps, as in Aldous Huxley's 'Brave New World',
it would be possible to eliminate unhappiness from society almost
entirely, at what might be seen as the cost of sacrificing our freedom,
but then are we really any more free just because our desires are
determined blindly by our surroundings and genetics than we would
be if they were consciously designed by other people for the
furtherance of our own happiness? What does it mean to be "free" anyway?
Mill
argues that "to desire anything, except in proportion as the idea of it
is pleasant, is a physical and metaphysical impossibility." However
this is not the same thing as saying that it is impossible to desire
anything except in proportion to how pleasant one believes it will be.
The question has been asked many times, "would you rather be a happy
pig or a miserable Socrates," that is, would you trade your
intelligence for happiness? Some people would, and some people would
not. Repugnance is relevant here; not everyone finds the idea of a
tremendously pleasant life as a human vegetable a pleasant idea. But do
they necessarily know best?
In his essay 'Considerations on Representative Government'
Mill says that he believes a benevolent despotism to be among the worst
possible forms of government, in that it inherently reduces the need for
people to think for themselves, but it is easy to imagine a benevolent
dictatorship, as in 'Brave New World', which serves
utilitarianism better than a representative government ever seems
likely to. It would appear that Mill is allowing for values
unconnected with happiness, despite his defence elsewhere of
utilitarianism. He also argues in 'On Liberty' that society is
only ever justified in interfering in an individual's life to prevent
harm to others. This suggests that society should never interfere to
prevent an individual from causing harm to his or her self, however
much pain could be prevented if it did; this appears to run headlong
against the principle of utility, unless one believes that it is
impossible to make someone happier by interfering with their freedom; I
do not believe that it is.
And what about this: times
when the interests of two (or more) parties cancel out, so that either
way someone's going to be pissed off? Who wins? I reckon I win, but
someone else might disagree. Hopefully, I'm bigger than them anyway so
it doesn't much matter who's actually right.
If the
remainder of someone's life looks like it's going to be miserable,
should a utilitarian kill that person? And what about if thousands of
people would be ecstatic at the discovery of a certain person's death,
and hardly anyone would be pissed off? Does that make it right to kill
that person? Maybe it does.
This said, is utilitarianism
preferable to selfishness? I would have to say yes, except for in my
case, where selfishness is plainly the better option.
In conclusion: utilitarianism, while better than a poke in the eye with a live tuna, is nonetheless essentially arse.
In
case anyone is wondering, this was sort of a first draft of a philosophy essay I wrote about eleven years ago, in my
first year at university. I don't know what happened to the final draft, but I like this one better anyway.