History

The translation of human languages by computer has been one of the holy grails of computing since the machines themselves still had valves, and has been a staple of science fiction writing for just as long (although to my knowledge only Philip K. Dick got close to the reality of it in his Galactic Pot-Healer). The earliest serious research in the West was spurred on by the needs of the intelligence services during the Cold War, with (unsurprisingly) an emphasis on Russian to English. This research was primarily carried out by computer scientists, who massively underestimated the complexity of the task, assuming that it would be enough to add a few tweaks to a word-for-word lookup routine. By the mid-1950s research was going on in several major universities on both sides of the Iron Curtain, but by 1960 it was already clear to one of the pioneers that the aim of "Fully Automatic High-Quality Translation" (FAHQT) was not likely to be feasible in the foreseeable future, and that research should instead target improvements in human/machine interaction.

In 1966 the Automatic Language Processing Advisory Committee (ALPAC) published an influential report which concluded that MT was slower and less accurate than human translation, and twice as expensive; American military funding was cut back and research retargeted on more modest aims. Elsewhere work continued with different objectives. Bilingual Canada saw the first genuinely practical implementation of an MT system in a very restricted sublanguage: the Météo system was used to translate weather forecasts from French to English and vice versa, with more or less 100% accuracy (hindered mainly by typos in the source texts, and probably more accurate than human translators, who found this repetitive work utterly tedious). Meanwhile the European Commission, with its (then) six working languages, installed an experimental system in three language pairs run by SYSTRAN - the commercial offspring of the earliest US word-substitution experiments - for (limited) internal use, and in the late 1970s set up an ambitious research project, the Eurotra programme.

By this stage the interdisciplinary nature of the problem was more clearly recognised and experts in linguistics were fully involved alongside the computer scientists. The problems of knowledge representation also involved some crossover with Artificial Intelligence issues. As well as the purely academic research, companies like Siemens and Philips were becoming involved, and meanwhile the increasing power of personal computers allowed the (expensive and unimpressive) Weidner system to be released for PCs running MS-DOS in the late 1980s. Since then commercial products of variable quality (and quite a wide range of prices) have become increasingly widely available, ranging from specialist products to free web-based translation services like Babelfish (which uses the good old SYSTRAN engine). There have been no major breakthroughs in the translation process itself, but general refinements such as cheaper storage and faster processors have allowed larger dictionaries, while developments in other areas have made things like voice interfaces more of a realistic possibility: the US army has reputedly started investing in a US$75 000-a-go device which will supposedly allow GIs some form of spoken communication with non-anglophones in the field; it remains to be seen whether this is one of the wiser pieces of defence procurement.

How does it work?

As noted, the earliest approaches took translation as a simple exercise in dictionary lookup and basically operated at the word level, an approach which more or less guarantees the generation of gibberish. The next step was to try some basic word reordering - to get adjectives before the noun instead of after it, as would be the case in French, for example. For the last couple of decades systems have generally operated at clause and sentence level, attempting to analyse input sentences into data structures such as those used in transformational grammar. These structures can in theory then either be adapted directly into the forms used in the target language, with the lexical items (words and set phrases) looked up and replaced and the output generated directly (the "transfer" approach), or be rendered into a language-neutral form (an "interlingua") from which output can be generated. The interlingua method is attractive in that it makes the system more modular and appears to facilitate the development of multiple language pairs, but in practice specific routines for each language pair are almost always necessary. The massive increases in computing power of recent years have also increased the attractiveness of corpus-based methods; it is (faintly) conceivable that some combination of a massive corpus of existing translations and neural networking may provide some kind of breakthrough eventually.
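By way of illustration, here is a deliberately naive sketch (in Python, with an invented four-word dictionary and part-of-speech tags) of the difference between the word-for-word lookup of the earliest systems and a single transfer-style reordering rule; real transfer systems work on full parse trees rather than flat word lists, so treat this as a caricature rather than a description of any actual product.

```python
# A toy contrast between 1950s-style word-for-word substitution and a single
# "transfer" reordering rule (French noun-adjective -> English adjective-noun).
# The lexicon and tags below are invented purely for this example.

LEXICON = {
    "le":   ("the",    "DET"),
    "chat": ("cat",    "NOUN"),
    "noir": ("black",  "ADJ"),
    "dort": ("sleeps", "VERB"),
}

def word_for_word(tokens):
    """Replace each word, keep the source order: guaranteed gibberish."""
    return [LEXICON.get(t, (t, "UNK"))[0] for t in tokens]

def transfer(tokens):
    """Lookup plus one structural rule: swap NOUN + ADJ pairs."""
    tagged = [LEXICON.get(t, (t, "UNK")) for t in tokens]
    out, i = [], 0
    while i < len(tagged):
        if i + 1 < len(tagged) and tagged[i][1] == "NOUN" and tagged[i + 1][1] == "ADJ":
            out.extend([tagged[i + 1][0], tagged[i][0]])  # adjective first in English
            i += 2
        else:
            out.append(tagged[i][0])
            i += 1
    return out

source = "le chat noir dort".split()
print(" ".join(word_for_word(source)))  # the cat black sleeps
print(" ".join(transfer(source)))       # the black cat sleeps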

So far, so good. So where are the tricky bits? The fundamental problems mainly come down to resolving ambiguity of one form or another. Human language is full of traps for an innocent machine; much human communication relies on assumed real-world knowledge and awareness of context, which will define whether, say, "pen" means a writing implement, a feather or a thing for keeping sheep in (lexical ambiguity), help us locate the verb in "time flies like an arrow"1 (structural ambiguity), or tell us what "it" refers to in "John gave me a good book. It was by Philip K. Dick." (anaphora) - all of which may be necessary for the selection of target language structures and lexical items. It rapidly becomes clear that these problems - in conjunction with the plentiful scope for errors of all kinds in source texts - are non-trivial.
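A toy sketch of why lexical ambiguity alone is troublesome: with no context or world knowledge to hand, a system can do little better than enumerate the possible sense combinations, which multiply quickly. The word senses below are invented for the example; real dictionaries list far more, which is exactly the problem.

```python
# Enumerating candidate readings of a sentence when some words are ambiguous.
# The sense inventory here is made up for illustration only.

from itertools import product

SENSES = {
    "pen": ["writing implement", "feather/quill", "animal enclosure"],
    "in":  ["in (location)", "in (time)"],
}

def candidate_readings(tokens):
    """Every combination of word senses is, a priori, a possible reading."""
    options = [SENSES.get(t, [t]) for t in tokens]
    return list(product(*options))

readings = candidate_readings(["the", "sheep", "are", "in", "the", "pen"])
print(len(readings), "candidate readings")  # 1*1*1*2*1*3 = 6
for r in readings[:3]:
    print(r)
```

With half a dozen ambiguous words in a sentence the combinations run into the hundreds, and choosing among them is where the real-world knowledge (which the machine lacks) comes in.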

Does it work?

Well, it depends what you mean. In terms of FAHQT, or being able to replicate the work of a half-decent human translator, no. Clearly, literary texts of any sort are a non-starter. It would be highly unwise to use any of the currently available products to generate any text which is to be (a) authoritative or binding in any sense or (b) read by anyone you want to impress, like potential customers. It is a really stupid idea to run your website through some €50 MT package into a language you cannot read yourself and put the result on the web as it comes out (although sadly this does not discourage the naive and overenthusiastic from doing so).

The situations where MT is useful are those where limited understanding is enough: gisting, to get a vague idea of whether a text in an unfamiliar language is relevant or not before deciding whether to get it translated properly, or where speed is of the essence (a human translator will probably be able to cope with 3000-4000 words per day at most, and demand exceeds supply in many language combinations). It should be noted that MT does not generally have a great tolerance for errors in the source material: the worst I have seen involved an attempt to use MT on an apparently uncorrected scanned-in fax. GIGO is guaranteed, although its converse is not. Users also need to realise that the types of errors made by human translators and machines differ; many MT errors are glaringly obvious to a human reader (although some - a missing negation, for example - may not be), while simple typos which would be trivial to a human translator can throw an MT routine completely off the scent.

Lastly, a small caveat against an over-used testing method for MT: submitting something in your own language for translation into a target language and then back-translating with the same system. The structures generated in the first pass are likely to be close to those of the original source language, and will thus retranslate into that language unnaturally well (even if the first translation read like gibberish to a target reader), while lexical items which cannot be found in the dictionary are usually left as is, and so will come back looking correct. A more reliable technique is to find a text in the unfamiliar language on a subject that you know something about and translate it into your own language - the sports pages of on-line newspapers are often a good source (babelfishing Italian cycle racing coverage into English produces some real gems).

Performance of MT software can also be improved by human intervention. "Human-aided machine translation" - HAMT - makes use of pre- and post-editing procedures. A human translator can be assigned to clean up a raw machine translation (although this is not a very popular job and the time savings are often minimal in the end) or, more successfully, texts for input can be written or rewritten in controlled language, a carefully defined subset of human language designed to be readily parsed and devoid of lexical ambiguities. This is feasible for texts in tightly limited subject areas such as technical instructions and data sheets. The converse - machine-aided human translation, or MAHT - is, apart from the obvious stuff like dictionaries on CD-ROM, best exemplified by the increasingly widespread use of translation memory software in the translating profession.
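For the curious, here is a minimal sketch of the translation memory idea, using nothing beyond Python's standard difflib for fuzzy matching. The stored segments and the 75% threshold are invented for illustration; commercial TM tools are considerably more sophisticated about matching, formatting and placeholders.

```python
# A bare-bones translation memory: store previously translated segments and,
# for a new segment, propose the stored translation of the closest match
# above a similarity threshold. Example data is invented.

from difflib import SequenceMatcher

MEMORY = {
    "Tighten the bolt to 25 Nm.": "Serrez le boulon à 25 Nm.",
    "Replace the filter every 5000 km.": "Remplacez le filtre tous les 5000 km.",
}

def tm_lookup(segment, threshold=0.75):
    """Return (best stored source, its translation, similarity) or None."""
    best = max(
        ((src, tgt, SequenceMatcher(None, segment, src).ratio())
         for src, tgt in MEMORY.items()),
        key=lambda item: item[2],
    )
    return best if best[2] >= threshold else None

match = tm_lookup("Tighten the bolt to 30 Nm.")
if match:
    src, tgt, score = match
    print(f"{score:.0%} fuzzy match: '{src}' -> '{tgt}'")
```

The appeal to the human translator is obvious: the repetitive bits of a manual only ever get translated once, and the fuzzy matches flag up exactly what has changed since last time.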


1. As this is absolutely the canonical example of structural ambiguity in English, it can safely be assumed that any available MT system will have a hard-coded translation for it.

Sources:
W.J. Hutchins and H. Somers, An introduction to Machine Translation
Fading memories of a Masters course I once started at UMIST


ADDENDUM: 2016-01-21

The writeup above is now all but fourteen years old, old enough to at least make a vaguely plausible attempt to get served at a bar in most jurisdictions. Stuff has changed, not least fourteen years' worth of Moore's Law. However, gratifyingly, especially given that translation is still how I earn my daily bread, MT has still not made the grand breakthrough. FAHQT is still a chimera. However, there is a new game in town. Google Translate and some competitors' imitations have adopted a new approach, moving away from linguistic analysis mechanisms to a purely statistical one, taking advantage of the availability of vast and expanding quantities of data, cheap storage and cheaper processing. It is furnishing results that are, if not High Quality, usable in a number of real-world situations, particularly in pairs of major languages <> English. The mechanisms are proprietary and in constant evolution. But you still don't want to rely on it for your website, still less for that million-euro supply contract.
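Very roughly, the classical statistical approach amounts to scoring candidate outputs by how faithful they are to the source (a translation model) and how plausible they are as target-language text (a language model), then picking the highest-scoring combination. The sketch below uses invented toy probabilities and a hand-listed candidate set; real systems estimate these scores from enormous parallel corpora and search a vast space of candidates built from phrase tables, and whatever Google actually runs is proprietary.

```python
# A back-of-envelope illustration of noisy-channel scoring for statistical MT:
# choose the target sentence e maximising P(e) * P(f|e). All numbers and
# candidates are invented toy values for this example.

CANDIDATES = {
    # candidate English output: (language model P(e), translation model P(f|e))
    "the black cat sleeps": (0.020, 0.30),
    "the cat black sleeps": (0.0001, 0.35),  # faithful but unidiomatic
    "a dark cat is asleep": (0.015, 0.05),   # fluent but less faithful
}

def best_translation(candidates):
    """Pick the candidate with the highest combined score."""
    return max(candidates.items(), key=lambda kv: kv[1][0] * kv[1][1])

winner, (p_lm, p_tm) = best_translation(CANDIDATES)
print(winner, p_lm * p_tm)  # the black cat sleeps 0.006
```

The point of the toy numbers is that the fluent-but-faithful candidate wins without the system "understanding" a single word, which is both the strength and the limit of the approach.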