A verbose history of the origin of the use of != to mean "not equal"

Our story begins in the early 1960's.1 At the very beginning, at least, it's the story of two different and competing character sets: ASCII and EBCDIC. You see, in 1963 the American Standards Association came up with a little something called the American Standard Code for Information Interchange. It would be mildly naive to simply attribute the code to a committee at the ASA because, you see, they had plenty of input from industry when they conceived it. One of the companies involved was AT&T, which needed a character set and had the financial resources to ensure that whatever they came up with would become the de facto standard for a large portion of the industry. Another company of note involved in this process was IBM. Now, for whatever reason, IBM released2 the first version of EBCDIC (Extended Binary Coded Decimal Interchange Code) in 1964. Then, in 1965, the ASA released a new version of ASCII, complete with a few revisions that SHARE (an IBM user group) insisted on. Added to ASCII-1965 were lowercase letters, ~, ^, _ and @.

Now, we get to a particularly contentious character: '¬' (also known as the "hooked overline", the "turnstile", or plain old "not"). If you ask most logicians or mathematicians, they'll tell you that this is the symbol for negation. If you ask most computer scientists what '¬' is, they'll agree that it's a symbol for negation. Here's where it gets fun. If you reverse the question and ask a programmer what the character for negation is, they'll most likely answer '!'.3 How did that come about?

Back to the story. This is the part where we get tied up in the development of Multics at Bell Labs.4 Multics was written in PL/I, which was created by IBM. ASCII was used for Multics, and whichever version of EBCDIC was handy was used for PL/I (because...well...IBM could). A decision needed to be made. ASCII didn't have a '¬', but ASCII-1965 had added '^'. And besides, if you rotate '^' a little bit, it looks vaguely like '¬', right? That was the compromise made on Multics. To put it in a way that's more confusing, '¬=' == '^='.

Now, the machine that was used to develop Multics was a GE 600, which brings in even more complications on top of the EBCDIC vs. ASCII thing. Here, we've wandered into the realm of punched cards.5 On the GE 600, the character '^' is represented by holes punched in rows 11, 2 and 8 of the appropriate column (11-2-8 for the punched card savvy). In order for a programmer to enter a '^' into the machine, they had to feed it a punched card. But what was used to actually punch cards in the late 1960's? Why, the IBM model 029 keypunch, of course. The 029 keypunch would punch you a column that would be read just fine by the GE 600 as a '^' character...but only if you pressed '!'. Are we a bit closer?

This is the bit where I tie BCPL into this whole mess. When the specification for BCPL was written, it was done on a typewriter. The representation of "not equals" was done by typing an '=' and then overstriking it with a '/' (this will become important in a minute). This meant that anyone implementing the language on an actual machine had a great deal of freedom in choosing how to represent "not equal" in an actual character encoding scheme. In Martin Richards' original implementation of BCPL, done on an IBM 7094 with 6-bit characters and no '!' at all, "not equal" was represented by the reserved word "ne". Later on, BCPL allowed "~=" and, in some cases, even "/=".6 The version of BCPL (derived from Martin Richards' compiler) in use at Bell Labs in 1968 used '!='. This modification, made at Bell Labs, must have happened some time during 1967 or 1968. Now we're getting somewhere.

When it was decided to use "^=" in PL/I, it was because '^' was relatively close to '¬'. With BCPL, the character being imitated was '=' overstruck with '/'. It's possible that "!=" was chosen because:

  1. There wasn't an existing use for '!'.
  2. It kinda, vaguely, not quite but maybe a little looks like a '=' overstruck with a '/'.
  3. B included "=+" and other symbols that compound an assignment with an arithmetic operation (generalized assignment operators). One of those was "=/", so any character that already represented an operation would have been confusing to use in a "not equal" token.
I should note that this is the theory Dennis Ritchie offered.7 It's also possible that the 029 keypunch to GE 600 translation factored into the decision at the time.

Regardless, the use of "!=" in AT&T's BCPL implementation stuck when B was created. And from B it carried on to C. And once it was in C, well, the rest is history.

First, I owe many thanks to Doug Jones, Dennis Ritchie and Martin Richards for being invaluable while I researched and wrote this.

1. Much of the information near the beginning, on character sets, comes from a handful of webpages:
2. The word "released" has connotations of making it available to the public. I am assured that it was released in a way that would have made Douglas Adams proud, complete with a panther. It should additionally be noted that this was merely the first EBCDIC; there were many more, often incompatible with each other.
3. Yeah, yeah, there are plenty who wouldn't. But I'm making generalizations here.
4. GE and MIT were also involved here, but my information is specific to Bell Labs.
5. All of the information on punched cards is from http://www.cs.uiowa.edu/~jones/cards/codes.html
6. Martin Richards, the designer of BCPL, claimed in an email that even if he had had '!' at the time, he would have used it for "monadic and dyadic indirection since it is the character that most closely represents a down arrow to indicate that its right hand operand is a subscript." I should also note that Martin Richards was wonderfully nice about answering my questions.
7. I sent Dennis a polite email and he filled me in on a few spots, like "^=" and his guess as to where the ! in BCPL and B came from. He was also wonderfully nice about answering my questions.