Double articulation is a fascinating linguistic phenomenon: it explains how language can describe everything in the world, and it can be applied to other fields of knowledge as well.

The double articulation of language

According to linguist André Martinet (1), language can be broken down into smaller elements on two levels:

  1. First, a sentence can be broken down into minimal meaningful units, which Martinet calls monemes (roughly what other linguists call morphemes). Monemes are usually words, or parts of words. For example, 'bigger' contains two monemes: one for 'big', and one for 'more'.
  2. Second, a moneme can be further divided into minimal phonological units, which have no meaning of their own. These are called phonemes, and in English they often correspond to letters, but not always. The moneme 'letter' contains 6 letters but only 4 phonemes, because 'tt' is pronounced as a single t and 'er' is usually pronounced as one sound (2). Conversely, 'axis' contains 4 letters and 5 phonemes. A small sketch of this two-level structure follows the list.
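
As a rough illustration (my own sketch, not Martinet's), the two levels can be modelled as nested data: a word is a sequence of monemes, and each moneme is a sequence of meaningless phonemes. The ASCII phoneme symbols below are informal stand-ins for proper IPA notation:

    #include <stdio.h>

    int main(void) {
        /* First articulation: 'bigger' decomposes into two meaningful
         * monemes, 'big' and the comparative '-er'. */
        const char *monemes[] = { "big", "-er" };
        /* Second articulation: each moneme decomposes into phonemes,
         * which carry no meaning of their own. */
        const char *big_phonemes[] = { "b", "i", "g" };
        const char *er_phonemes[]  = { "@" };  /* one sound, per footnote (2) */

        printf("moneme '%s' = /%s %s %s/\n", monemes[0],
               big_phonemes[0], big_phonemes[1], big_phonemes[2]);
        printf("moneme '%s' = /%s/\n", monemes[1], er_phonemes[0]);
        return 0;
    }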

What is powerful here is that this two-level system allows every imaginable nuance and meaning to be expressed using a very small set of sounds (English has about 40 or 50 phonemes). The second articulation combines these sounds to form thousands of different words, which have a meaning but produce nothing: if you say 'chicken' to someone, they will learn nothing and wait for you to say something more. The first articulation combines the meanings of these words to build an infinite number of sentences, which do have an effect: if you say 'Why did the chicken cross the road?', your interlocutor will understand you and give you an answer.
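
To make the arithmetic behind this concrete, here is a minimal C sketch (mine, not from the original text) that counts how many distinct phoneme strings a 40-phoneme inventory allows at each length; even short strings vastly outnumber the words any language actually uses:

    #include <stdio.h>

    int main(void) {
        /* With roughly 40 phonemes, count the possible phoneme strings
         * of each length from 1 to 6.  This is illustrative arithmetic
         * only, not a claim about which strings are legal words. */
        long long count = 1;
        for (int len = 1; len <= 6; len++) {
            count *= 40;
            printf("possible strings of %d phonemes: %lld\n", len, count);
        }
        return 0;
    }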

Compare this to the code of road signs. Here, one sign has exactly one meaning, and cannot be articulated into smaller or bigger units. This code is very easy to learn and read, but its power is extremely limited. This is a good thing, because I don't want a road sign to look like it's been written by James Joyce.

Computer languages

Semioticians have tried to apply the theory of double articulation to other kinds of communication, such as movies or painting, with little success. What I find surprising is that few people have tried to apply it to computer languages. Consider the following statement:

    printf("Hello, world");

Humans and computers will, consciously or not, read it in a two-level procedure:

  1. Combine letters to form words, or tokens (second articulation):
      printf   (   "   Hello, world   "   )   ;
    This is the lexical analysis. Compilers usually perform this step with a lexer, classically generated by a tool such as 'lex'. The letters belong to a fixed set (ASCII), and the tokens have a well-defined meaning ('printf' is documented as a formatting function, the double-quote token is a string delimiter, etc.), but they have no effect by themselves: a computer will do nothing if you just say 'printf' to it.
  2. Then the tokens are combined into statements or expressions (first articulation). This is the syntactical analysis, classically performed by a parser generated with 'yacc'. The resulting statement will be understood by the computer and will produce something (the message will probably be printed on the screen), just like a human sentence. A toy sketch of both steps follows this list.
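
To make the parallel concrete, here is a toy sketch in C (my own illustration; real 'lex' and 'yacc' generate far more general code from grammar specifications). Phase one turns meaningless characters into tokens, and phase two checks that the tokens combine into a valid statement of the shape ident ( "string" ) ; :

    #include <stdio.h>
    #include <ctype.h>

    enum kind { IDENT, LPAREN, STRING, RPAREN, SEMI, END };

    struct token { enum kind k; char text[64]; };

    /* Phase 1: lexical analysis -- characters to tokens, no effect yet */
    static int lex(const char *src, struct token *out, int max) {
        int n = 0;
        while (*src && n < max) {
            if (isspace((unsigned char)*src)) { src++; continue; }
            if (isalpha((unsigned char)*src)) {          /* identifier */
                int i = 0;
                while (isalnum((unsigned char)*src)) out[n].text[i++] = *src++;
                out[n].text[i] = '\0';
                out[n++].k = IDENT;
            } else if (*src == '"') {                    /* string literal */
                int i = 0;
                src++;                                   /* opening delimiter */
                while (*src && *src != '"') out[n].text[i++] = *src++;
                out[n].text[i] = '\0';
                if (*src == '"') src++;                  /* closing delimiter */
                out[n++].k = STRING;
            } else {                                     /* punctuation */
                out[n].text[0] = *src; out[n].text[1] = '\0';
                out[n].k = (*src == '(') ? LPAREN :
                           (*src == ')') ? RPAREN : SEMI;
                src++; n++;
            }
        }
        out[n].k = END;
        return n;
    }

    /* Phase 2: syntactical analysis -- do the tokens form a statement? */
    static int parse(const struct token *t) {
        return t[0].k == IDENT  && t[1].k == LPAREN &&
               t[2].k == STRING && t[3].k == RPAREN && t[4].k == SEMI;
    }

    int main(void) {
        struct token toks[8];
        lex("printf(\"Hello, world\");", toks, 7);
        if (parse(toks))
            printf("valid call to %s with argument \"%s\"\n",
                   toks[0].text, toks[2].text);
        return 0;
    }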

So computer language designers spontaneously created double-articulated languages. What is more, although they probably had no knowledge of Martinet's theories, they invented compilers that follow exactly the two-step procedure of double articulation.

They were so successful that nobody has found anything better since: despite all the research done on computer programming languages in the last 40 years, double articulation is still the basis of the most recent programming languages, such as Java or C#.

(1) André Martinet, Éléments de linguistique générale, Armand Colin, Paris, 1960.

(2) Thanks to Cletus the Foetus for pointing out an error here. I thought 'er' was two phonemes because I am not a native English speaker, and I have not mastered all the mysteries of English pronunciation...
