Double translation is a method of language learning, interpretation practice, and translation proofreading. It has proven effective over centuries of human use, and it is increasingly important to machine learning and to online translation software such as Google Translate.

The method of double translation is simple, albeit time-consuming. First, the translator breaks a sample text into individual words and translates them one word at a time from the source language into the target language. Then the translator breaks down the translated version in the same manner, translating it back from the target language into the source language, one word at a time. If this twice-translated result is verbatim identical to the original sample text in its source language (or is at least an excellent paraphrase of it), then the target-language translation is considered an adequate literal translation.
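The word-level procedure above can be sketched in a few lines of Python. The tiny English-to-Spanish word lists here are purely illustrative, not a real lexicon:

```python
# Toy forward dictionary for word-by-word translation (illustrative only).
FORWARD = {"the": "el", "cat": "gato", "sleeps": "duerme"}
# The reverse dictionary translates each word back to the source language.
BACKWARD = {target: source for source, target in FORWARD.items()}

def translate(words, table):
    """Translate a list of words one at a time using a lookup table."""
    return [table[word] for word in words]

source = "the cat sleeps".split()
target = translate(source, FORWARD)       # word by word into the target language
round_trip = translate(target, BACKWARD)  # and back again into the source language

# The target-language version is judged an adequate literal translation
# if the round trip reproduces the original text.
adequate = round_trip == source
```

With a richer dictionary the round trip can diverge from the original, which is exactly the failure the method is designed to catch.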

This process is sometimes repeated an additional time at the phrase and sentence levels, not just the single-word level. If the resulting text is as adequate a translation (or paraphrase) as it was at the word level, then the result is considered an adequate idiomatic translation.

Literal and idiomatic translations of the same text can be quite different at face value; the RSV and NIV translations of the Holy Bible are good examples of double-translated literal and idiomatic English translations, respectively, from the same original Greek and Hebrew source texts.

Double translation is especially relevant in recent times to translation software like Google Translate and Babelfish, both of which are notoriously poor at remaining self-consistent under double translation. If a piece of Latin text is entered into Google Translate, and its English translation is reentered and re-translated back into Latin, the result will typically bear no resemblance to the original Latin text, and it will often be complete gibberish. The adequacy of translation software is increasingly judged by how well it accomplishes double translation.
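That judgment can be made quantitative by scoring how closely the twice-translated text matches the original. A minimal sketch using Python's standard-library `difflib` (no real translation API is called here; the round-trip outputs are supplied by hand):

```python
from difflib import SequenceMatcher

def round_trip_score(original, twice_translated):
    """Score the self-consistency of a round trip from 0.0 to 1.0,
    where 1.0 means the twice-translated text is identical to the original."""
    return SequenceMatcher(None, original, twice_translated).ratio()

# A perfectly self-consistent round trip scores 1.0:
perfect = round_trip_score("gallia est omnis divisa in partes tres",
                           "gallia est omnis divisa in partes tres")

# A garbled round trip scores much lower:
garbled = round_trip_score("gallia est omnis divisa in partes tres",
                           "all of gaul three part division")
```

`SequenceMatcher.ratio()` is a crude character-level measure; production systems use sentence-level metrics, but the principle of grading software by round-trip fidelity is the same.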

Currently, a variant of double translation built on parallel corpora is used to teach machines how to translate a given language, through the use of at least two other languages which the machine already "knows." The machine is given one source text in the unknown language, along with double-translation-accurate copies of the same text in two "known" languages. The machine learns to cross-reference data from the known languages to determine what each individual word in the new language means. This is a way to work around homonyms and words with multiple meanings, such as "bank" and "hit."
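The cross-referencing idea can be illustrated with the "bank" example. In this sketch, the ambiguous English word maps to different words in two known languages depending on its sense, and agreement between those languages picks the sense; the mini-lexicon is invented for illustration:

```python
# Invented sense table: each (Spanish, German) pair of aligned translations
# of English "bank" points to one sense of the word.
SENSES = {
    ("banco", "Bank"): "financial institution",
    ("orilla", "Ufer"): "edge of a river",
}

def disambiguate(spanish_word, german_word):
    """Pick the sense of English 'bank' that both known languages agree on,
    by cross-referencing the aligned words from the two parallel texts."""
    return SENSES.get((spanish_word, german_word), "unknown")

# If the aligned Spanish text reads "orilla" and the German "Ufer",
# the machine can conclude this occurrence of "bank" is the riverside sense.
sense = disambiguate("orilla", "Ufer")
```

Real systems learn these alignments statistically from millions of sentence pairs rather than from a hand-written table, but the underlying trick is the same: two known languages rarely share the same ambiguity, so together they pin down the meaning.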

Iron Noder 2017, 3/30
