One of those few things in C that is elegant because it's simple.

You use enum to describe a special user-defined type which can be equal to one of a certain number of predefined things; for example, you have a structure describing an item in an inventory system, and one of the things this structure must describe is the item's status, which you know will always be equal to "in stock", "out of stock", or "back ordered". You could describe the status of the item with a character string, but that would be inefficient both to store and to check up on. So instead what you say is:

enum partstatus {IN_STOCK,OUT_OF_STOCK,BACK_ORDERED};
And suddenly you have defined a type called "enum partstatus", which you can set equal to one of those three things by saying things like enum partstatus mystatus=IN_STOCK;. (In C++, the "enum" in this line would be optional; C++ would recognize partstatus as being short for enum partstatus.) Enums have scope, so if you create an enum type inside a block it ceases to exist outside the block.

Confused yet? If so, these will probably make things worse: First off, you can declare variables of the enum'd type at the same time you define the enum itself, by putting your variable names (separated by commas) after the {} in the enum definition. Second, you can have nameless enums-- enum {A, B, C} oneshot; is legal, although oneshot will be the only variable of that type you will ever be able to define. This particular quirk is mainly useful because it means you can get around the requirement (the one that C++ removed) of prefixing your variable name with enum. Just use typedef enum {IN_STOCK,OUT_OF_STOCK,BACK_ORDERED} partstatus; for your declaration, and you will be able to use partstatus alone as a type.

Enums are handled entirely by the compiler, not (as is sometimes claimed) by the preprocessor. Below the surface, what is going on here is that the compiler treats "enum partstatus" as an integer type, assigning the first item in your possible values list to 0, and then assigning each possible value after that as the value of the previous item in the possible values list plus one. Each member of the list then becomes, within its scope, a constant with its assigned integer value. Got that? So if we were using the example above, you could say something like (OUT_OF_STOCK==1) and it would return true. C++ tightens this up: in the C++ spec, each enum is a distinct type, not just an integer. An enum value still converts to an integer implicitly, but going the other way -- assigning a plain integer to an enum variable -- requires a cast, and C++ compilers really do enforce this. C compilers, meanwhile, just go ahead and treat enums as integers in both directions.

You can actually assign specific integer values to the items in the possible values list-- the best way I can explain this is that:

enum ordinalnumber {one=1, two, three, seven=7,eight,nine,ten, zero=0, six=6, four=4, five};
would, within the enclosing scope, correctly define the ordinal numbers one through ten as constants equal to their integer values.

(enum, by the way, is a good way to make your APIs very much more clear. For example, if you read through Apple's header files, you'll see they use enum extensively to pass such values; Microsoft's header files, meanwhile, tend to take the route of using #defines and then passing the value as an integer. So if you're looking at a function in an Apple header file, setPenType(enum kPenType value);, you can easily search and discover enum kPenType {PEN_TYPE_SOLID=5000, PEN_TYPE_DOTTED, PEN_TYPE_INVIS}; at the beginning of the file. Meanwhile if you're in an MS header file and you see setPenType(int value);, you are at a dead end. If you looked in the documentation you would see that PEN_TYPE_SOLID, PEN_TYPE_DOTTED, and PEN_TYPE_INVIS are your legal values, but if you didn't have the documentation, there is no way you would be able to figure out by yourself that #define PEN_TYPE_SOLID 5000, #define PEN_TYPE_DOTTED 5001, etc., are sitting in a different header file along with about 200 other unorganized #defines.)

MySQL also has an enum type. You can assign a field's type to be something like ENUM("in stock", "out of stock", "back ordered").

Enumerations, also called enumerated types or just enums, are a computer programming concept that occurs in many procedural programming languages.

Enums have made it into most object-oriented languages that evolved out of these procedural languages. C and C++ have them. C# has them. Java does not (update: enums are being introduced into Java 1.5).

In my opinion, in Pascal and its object-oriented offshoot Delphi, enums reach their full potential. The language has some flaws and shortcomings, but the usage of enums is not equalled elsewhere in the C family of languages.

An enum is a user-defined type that allows selection from a small number of listed options. When defining the type, you must enumerate the possible values, in Webster's sense of to mention one by one.

Compared with using a raw integer value, an enumerated type provides type safety, range checking, readability and abstraction.

Confused by that? Let's look at some code, starting with Java. Java's theory is that if a non-object-oriented type isn't a built-in, atomic type, it is just another complexity that you don't need, and it should be done via an object type. This theory works, mostly. Enums are the single biggest flaw in it.

The designers of Java decided that enums could be subsumed by their object model. You could simulate an enum with a verbose, expensive kludge of a class, but nobody does that in practice. In true C programmer style they use simple, efficient integer constants. This kind of code is common in the Java class library:

public static final int DATE_PART_MONTH = 3;
public static final int DATE_PART_YEAR = 4;
.. and elsewhere ...
public static final int COLOR_BLUE = 11;
public static final int COLOR_GREEN = 12;

This falls short on type safety: You can do
int myColour = DATE_PART_MONTH; when myColour is not to do with dates.

This falls short on range checking: You can do
int foo = DATE_PART_MONTH + DATE_PART_YEAR; when 7 is not a valid date part.

This falls short on readability. You can do
SomeObject.SetColor(5); when it’s not obvious what colour 5 means.

This falls short on abstraction: Colors are not integers. Sure they are implemented in terms of integers, but this does not encourage a distance between the problem space and the solution space.

Let me show you how enums can work in Pascal, and any other language that has fully learned what Pascal has to offer in this regard (this excludes Java, which doesn't have enums at all, and C, which can’t tell them from ints).

Type
 TColour = (eRed, eGreen, eBlue, eWhite, eBlack, eOrange, ePurple, eBrown);
This defines an enumerated type with 8 possible values corresponding to eight colours.

Var
 MyColour: TColour;

This declares a variable of that type.

An enumerated type is not an integer; if it were, you could cause errors by putting in an out-of-range value, or a value from the wrong enumeration.

An enumerated type is not an integer, though it is ordinal. Ordinal? This means simply that like the type integer or char, and unlike the type real or float, it has a finite number of values, and that you can order them, with a first one, a last one, and each value has the notion of a predecessor and successor value (not defined for the lowest and highest value respectively). It is an ordered set.

Thus eRed is not the same as zero, and eGreen is not the same as 1, but Ord(eRed) = 0 and Ord(eGreen) = 1.

eRed + 1 is not allowed, but Succ(eRed) = eGreen.

This gives you type safety. You cannot do MyColour := 1; or MyColour := eDatePartMonth; (assuming that date parts will have their own enumerated type).

Yeah, this is implemented in terms of simple, efficient integers, but so are lots of valuable abstractions. The compiler catches the errors, and gives you efficient machine code from source code in the problem domain, not at the level of integers.

With this implementation detail in mind, we can put some limits on enums: if you want the enum variable to fit into one byte, it cannot have more than 256 values. However, using a 16- or 32-bit word is more common for performance reasons. This allows an enum to have as many values as you could want. Once you get so many different values that you can't name them all individually, then perhaps an enum is not the best design choice.

Hard, unsafe type casting is occasionally necessary. 1 is not equal to eGreen, but TColour(1) is equal to eGreen.

There is no need to ever assign numeric values to enums except to interoperate with broken languages that, lacking sets over enums, abuse them as bitmasks.

For instance, some C code is written as enum modes { MODE_READ = 1, MODE_WRITE = 2, MODE_EXEC = 4 };

A variable would then contain a bitwise OR of these modes, each bit either on or off. This enum is not a list of possible values anymore. It is using your knowledge of integers and bits to shorthand a bitmask.

Let's look at how that would go in Pascal, if interoperating with an OS weren't an issue.

Type
 TMode = (MODE_READ, MODE_WRITE, MODE_EXEC);
 TModeSet = set of TMode;

Var
 MyMode: TModeSet;
Begin
 MyMode := [MODE_READ, MODE_EXEC] ;
 If MODE_EXEC in MyMode then
  ...

It may be true that a set is really just a bitmask with one bit for each possible value of the ordinal type over which it is defined, and thus must be implemented in terms of ints and bitmasks anyway. The machine code will be the same as the C-style stuff. True, but set operators are still easier to read, and the compiler catches the easy bugs for you. Set membership may be implemented in terms of a bitwise AND, but that doesn’t mean that the two are the same thing.

With that implementation detail in mind though, you can see that a set type will be one bit wide for each value in the base type. E.g. a variable of a set type over an enum with 8 elements needs eight distinct flags, and will just fit into a byte. For instance, Delphi specifies that when making a set from an ordinal type, the base type can have no more than 256 elements, and all of these must have ord from zero to 255.

In ANSI C, you cannot rely on a variable of an enum type being the same as an int. The C standard is not publicly available, but you can find some information about this on the web(1): "An enumeration type is compatible with some integral type". This does not necessarily mean int, although some compilers might define it that way. Nor does it mean that all enum types are the same size.

Imagine if all enum types were ints. If ints are 32 bit, then it is impossible to define an enumeration with 64-bit values, for example. There is also the question of signedness - if an enum is an int, is it a signed or unsigned int? If it is signed, you cannot have an enumeration value equal to UINT_MAX. If an unsigned int, you cannot have negative values. Any restrictions like these would limit the usefulness of enumerated types. It therefore makes sense that this is an implementation-defined behavior.

Of course, it is the case that in some compilers, enums are stored as ints. You can't rely on this being the case everywhere, however.

References
(1) http://www.open-std.org/jtc1/sc22/wg14/www/docs/tc1.htm
