What does ‘good’ mean? An overview of some meta-ethical approaches

The above question should be very easy for moral philosophers to answer. After all, ‘good’ is one of the most fundamental terms used in ethical discussion, and it is generally thought to have a very clear meaning. Unfortunately, this is not the case (at least according to the academics). Serious philosophical investigation into what the term means only really began at the start of the twentieth century, with Moore’s Principia Ethica; until then, many people had tried to define which things were good, but few had actually dealt with what ‘good’ itself was. After Moore, however, the discussion became central to moral philosophy, and the branch known as meta-ethics stems largely from Moore’s attempt to define ‘good’ and from other philosophers’ disagreement with him.

As it started the entire debate, it seems reasonable to examine Principia Ethica first. In it, Moore claims that ‘good’ is a ‘simple, non-natural property’, and is therefore indefinable. This is quite a radical claim to make, as it suggests not only that there is no way of objectively showing what we mean by the term ‘good’, but also that the property has somehow been invented by human beings. Reconciling these two views seems a hard task. Indeed, the praise with which Moore’s work was received can at least partly be attributed to the originality of his argument. Of course, not everyone agreed that Moore had really solved the major problem, and MacIntyre is typically short with him: ‘more unwarranted and unwarrantable assertions are perhaps made in Principia Ethica than in any other single book of moral philosophy’ (MacIntyre, 250). This is an extreme comment, whether justified or not, and even MacIntyre acknowledges that the central issues of meta-ethics would not have been raised were it not for Moore.

Moore begins to explain his definition of ‘good’ with the open question argument. He uses this to show that any naturalistic definition of what is good cannot be the same thing as a definition of what good actually is. Any naturalist theory will identify ‘good’ with some other natural property, common examples being pleasure, happiness, or wealth. For Moore’s argument, it does not matter which of these definitions is given, but Moore concentrates on the view that all good things are those which increase the amount of pleasure in the world, a broadly hedonistic view. He points out that if ‘good’ simply meant pleasure, then asking ‘Is pleasure good?’ would make no sense: it would be equivalent to asking ‘Is good good?’. Yet the question remains a meaningful, open one, which suggests that the proposed definition has failed. As Moore puts it, ‘there is no meaning in saying that pleasure is good unless good is something different from pleasure’ (Principia Ethica, 14).

This brings us to what Moore termed the naturalistic fallacy. Simply speaking, the fallacy is that good can be defined in terms of some other natural property. No matter what property is substituted for ‘pleasure’ in the above example, the result will be the same. We are now some way to seeing why, in Moore’s definition of good, he insisted that it be non-natural. Non-naturalism, then, is the belief that ‘moral terms mean, i.e., refer to, non-natural properties which can only be apprehended by moral intuition’ (Hudson, 66). Further discussion of moral intuition will be dealt with shortly, but first it remains to see why Moore considered ‘good’ to be not just non-natural, but also indefinable.

Moore identified three types of definition in Principia Ethica. The first is stipulative: an arbitrary definition given by a particular individual. The second is definitive: a definition given by a dictionary or other objective reference, consulted by individuals to understand how a word is used. The third type comprises definitions that ‘describe the real nature of the object or notion denoted by a word, and… do not merely tell us what the word is used to mean’ (Principia Ethica, 7). It is this third type of definition that Moore believed we could not give of the term ‘good’. His reasoning is that to describe the ‘real nature’ of an object, it must be broken down into its constituent parts. Only complex objects can therefore be given this third sort of definition; simple concepts or objects, such as good or yellow, cannot.

It should now be clear how Moore arrived at his view of the term ‘good’. Needless to say, many philosophers found fault in his reasoning, or in the very questions he was asking. The first apparent fault in Moore’s argument is its lack of flexibility. The term ‘good’ is used with amazing frequency, and in all sorts of situations and applications. Can a word that has so many uses really describe one ‘simple’ concept? This is not as much of a problem as it first appears. As Nowell-Smith points out, ‘it is possible to understand what “good X” means without knowing what the criteria are’ (Nowell-Smith, 86), presumably because ‘good’ means the same thing regardless of the context. It is worth considering, though, that ‘good’ also has some context-sensitive meaning. To say ‘this is a good knife’ would usually imply something about the sharpness of its blade, but there are situations, such as in a museum, where sharpness would be irrelevant and the ‘goodness’ of the knife would be due to its typical design or some aspect of its heritage. There is also a problem for Moore when there is disagreement over an object’s goodness, which brings us to his intuitionism.

As Moore claims that good is ‘indefinable’, and yet accepts that it is part of our language and that everyone can use it coherently, he goes on to claim that we recognise the good by intuition. This claim does seem rather unfounded, and quite a jump from his earlier arguments. As MacIntyre plainly puts it: ‘how, then, do we recognise the intrinsically good? The only answer Moore offers is that we just do’ (MacIntyre, 252). If this is the case, and our upbringing plays no part in our conception of good, surely there should not be so much disparity between different cultures’ conceptions of the word? Ignoring the obvious problems of treating any concept as an innate idea, Moore must either give some explanation of why there are so many disagreements concerning what is good, or retreat to saying that every individual has a different intuitive notion of what good is. Unfortunately, the latter would leave the concept no longer ‘simple’, as it would have a different content for different people, which could be compared and analysed like a complex concept.

Another problem with the intuitionist view is that it treats language as a ‘coinage of permanently fixed values’ (MacIntyre, 254). The truth is that language simply does not work in this manner: terms constantly change their meaning over time. Moore must then claim either that the ‘simple, non-natural property’ is constantly changing, or that it remains constant and we constantly move nearer towards an understanding of it. If either of these explanations is correct, complete pluralism appears to be the only possible option, as even those who consider murder to be intuitively good may, in fact, be right. The intuitionist theory does seem, then, counter-intuitive (for want of a better word). In fact, if it were true, Principia Ethica should surely not have become as famous as it now is, as the intuitionists are, ‘on their own account, telling us only about what we all know already’ (MacIntyre, 254).

Aside from its intuitionist outlook, the most serious objection to Moore’s views is that they provide no incentive to action. It seems, intuitively almost, that when we call something ‘good’ we are recommending it as a course of action, either to ourselves or to others: ‘Attributions of goodness appear to have a conceptual link with the guidance of action’ (Fin de Siècle Ethics, 117). As Moore completely neglected this feature of the term ‘good’, many of those who admired what he had attempted to do then began work on ethical theories that included action-guidance as part of the concept of good. Two of the most important of these were the emotivist and universal prescriptivist theories.

Emotivism, primarily associated with Stevenson, attempts to make up for the lack of action-guiding significance in Moore by claiming that all moral judgements are expressions of our own emotions. So, when we say ‘X is good’, we are really saying something like ‘X makes me happy’ or ‘hurrah for X’. However, this reformulation of the original statement still does not seem to move others to action. It certainly obliges the speaker to pursue X, given that the speaker wishes to be happy, or at least to do those things that make them feel pleasant emotions. The problem remains, though, of why others should be moved to action by an individual’s claim that X is ‘good’.

There are also other problems for emotivism. It does not offer any account of how we form our moral opinions in the first place, before we express them to others. Intuitionism is a possible solution, but, as we have already seen, it faces several problems of its own. Another problem lies, again, in the multiple uses to which the word ‘good’ is put. As MacIntyre points out, ‘Emotivism… does not attend sufficiently to the distinction between the meaning of a statement which remains constant between different uses, and the variety of uses to which one and the same statement can be put’ (MacIntyre, 259). Clearly, although emotivism does explain some features of the word ‘good’ that Moore had missed, it still does not adequately explain the concept.

Hare gives what is probably the most modern account of the meaning of ethical terms, with his universal prescriptivism. His theory is based on the concept of choice, and the idea that when we commend something as ‘good’ we are suggesting to others that they perform that action, as we believe everybody should. Hence, it is a ‘reissue of the view that behind my moral evaluations there is not and cannot be any greater authority than that of my own choices’ (MacIntyre, 262). The emphasis on choices, as opposed to emotions, makes the theory seem more tangible, as people are generally happier dealing with their choices than with their emotions. The theory also adequately explains how the word ‘good’ is used to compel others into action. It does, however, lead to quite an individualistic morality: although each person chooses according to a Kantian universalisability principle, there is no reason to believe that everyone will make the same choices. This is beside the point, though, as far as defining ‘good’ is concerned. Even if everyone is using ‘good’ to recommend different courses of action, they are at least using the term in the same way.

Having looked at a few theories concerning the meaning of ‘good’, it becomes clear that each of them has its own strengths and weaknesses. Moore’s account has several positive aspects, and using it certainly goes a long way to simplifying artistic appreciation, but the gaps within his theory, most notably the lack of action-guiding significance and the lack of a full explanation of how intuitionism works, leave it incomplete. Moore is still the most important figure in the debate, though, simply because he started it. Since Moore, there have been many twists and turns to try to accommodate all the uses and subtleties of the word ‘good’, but to my mind none of these approaches has adequately explained the complete concept. Perhaps this is because the word ‘good’ is an ‘ordinary, non-technical word and it is a consequence of this that the logic of its use reflects empirical truths that hold only for the most part and admit of exceptions’ (Nowell-Smith, 88). An intuitionist account is certainly tempting, as in our everyday lives we really do seem to ‘just know’ what the term means. But unless some further explanation is given of how intuitionism works, and why it is that people’s intuitions vary, the theory cannot really be justified except on grounds of simplicity.

Perhaps all of the meta-ethical theories are, to some extent, correct. Perhaps the concept of ‘good’ is so deeply ingrained in us that we cannot give a fully rational explanation of its meaning. This should not, however, cause any panic, as language is not a precise or rational tool. It is entirely possible that the term ‘good’ has developed in such a way as to contain many contradictions among its various uses, but as communication is still possible, the contradictions must be such that we can get by with them still in place.