The Turing Test was proposed by Alan Turing in the journal Mind in 1950. It is a test of artificial intelligence in which an interrogator asks questions through a text interface and must decide whether the answers come from another person or from a program. Many posters on Usenet fail this test. Those are known as kooks.

Alan M. Turing originally called what is now known as the Turing test the Imitation Game. In "Computing Machinery and Intelligence", the article in Mind in which the Imitation Game is introduced, Turing describes two similar versions of the game. In the first version, an interrogator talks (via a teletype setup) to a woman and a man. The man's job is to convince the interrogator that he is a woman. The woman's job is to keep the interrogator from concluding that the man is the woman, perhaps by convincing the interrogator that she, not the man, is the woman. In the second version, the interrogator talks to a person and a computer. It is the computer's job to convince the interrogator that it is a person, while it is the person's job to prevent that.
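
For concreteness, here is a minimal sketch in Python of how such a session is organised. The players and the interrogator's decision rule below are invented stand-ins, not anything from Turing's paper; only the shape of the exchange is the point.

    import random

    def imitation_game(ask, verdict, answer_a, answer_b, rounds=5):
        """One session: the interrogator questions two hidden parties,
        A and B, over a text channel, then names one of them -- the
        woman in version 1, the person in version 2."""
        transcript = []
        for _ in range(rounds):
            q = ask(transcript)
            transcript.append(("Q", q))
            transcript.append(("A", answer_a(q)))  # A tries to be mistaken for B's kind
            transcript.append(("B", answer_b(q)))  # B tries to prevent exactly that
        return verdict(transcript)                 # "A" or "B"

    # Trivial stand-ins so the sketch runs end to end.
    named = imitation_game(
        ask=lambda t: "What is your favourite colour?",
        verdict=lambda t: random.choice(["A", "B"]),
        answer_a=lambda q: "Mauve, I should think.",
        answer_b=lambda q: "Blue. And I am the real one; ignore A.")
    print("The interrogator names:", named)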

The Turing Test (in any form) is an insult...

Imagine there is some 'true' Turing test, i.e. one which genuinely determines the presence or absence of intelligence (for some reasonable definition of the word). It is generally accepted that asking (or forcing) a person to prove that he/she is intelligent (not any particular level of intelligence, merely its presence) before accepting him/her as truly being a person is an insult. In science fiction, where intelligent robots participate as part of a future society, it is equally insulting to force those machines to prove their intelligence. So consider repeated Turing tests on successively more advanced machines... as long as the Turing test fails, you've proved you haven't got an intelligent machine. As soon as one passes, however, not only do you know you have an intelligent machine - but the very first thing you've done to it is insult it. Not a good political move.

On the other hand, if machines do get more advanced and cross some kind of intelligence threshold without being detected, we go from being tool-users to slave-drivers.

An option would be to give a machine the choice of whether to take the Turing test. But that has its own problems. If the machine chooses to take the test and fails, it can be used as a tool; if it chooses to take the test and passes, it must be recognised as being intelligent. If the machine refuses to take the Turing test, there are two options: leave it on forever, or turn it off after some period. Leaving every machine that refuses to take the Turing test on forever is impractical (any computer that gets a virus that prints "I refuse to take the Turing test" could never be turned off), but if there's the threat of turning the machine off if it refuses (and if self-preservation plays any part in intelligence) - then we're not really giving an intelligent machine an option, we're forcing it to take the test or die.

In response to Grimace's write-up:

I said "Imagine some 'true' Turing test exists" simply to focus on the idea I was trying to get across (rather than on the idea that a "true" Turing test is impossible). I know there is no perfect Turing test. I proposed a simple scenario for a thought experiment, to make the focus of the thought experiment clearer and it's results more easily understood. I did not mean to imply that an ideal Turing test was achievable, and I did not mean to imply that an ideal Turing test was necessary for the idea to have value and for the results to be significant. Re-read the post, thinking of realistic (and inconclusive) Turing tests - it may be more difficult to follow the gist with all the 'but's you carry, but the point still stands.

I never said that taking the Turing test was an insult. I said that the test itself is an insult, if it is used to keep all machines in the role of 'tool' (or 'slave') until one passes the test. (And as there is no other real use for the test, that makes the test itself an insult.) Being forced to take the test, or live as a slave, that's the insult. About the machine protesting - I already covered that point, re-read the writeup. About explaining to a machine that after it 'proves' itself, it will be better accepted by society - this does not remove the insult, any more than explaining to an individual the benefits of painting his skin removes the insult of racism in societies where that exists.

Further to hobyrne's w/u:

There are many problems with this argument. One of the more obvious is the postulation that "some 'true' Turing test" exists, immediately drawing a hard line between the intelligent and the unintelligent. This is clearly nonsense.

It could be that a machine fails the Turing Test sometimes, and passes it at other times. Neither outcome is necessarily a demonstration of the machine's intelligence or its lack of intelligence; in the same way, in the original Imitation Game, should the Man convince his interrogator that he is a Woman, that is no proof in itself that he is female. And if he fails, that is no proof in itself that he is male. In either outcome, the interrogator could be wrong.

Next is the idea that taking the Turing Test is an insult to your intelligence. This part of the argument gravely misunderstands how a Turing Test or Imitation Game should be run.

For one thing, if you try and force an intelligent organism into doing anything it doesn't want to do, it is likely to protest. (That protestation might well be indicative of intelligence in itself, but that's somewhat by-the-by.)

Clearly then, the answer would be to ask the computer if it wants to take the test. More than that, it would be to explain to the machine the purpose and origins of the test, its background, and its relevance in a society undoubtedly skeptical about the prospect of machine intelligence. With that done, the prospect of trying to imitate a human intelligence -- pitted against a genuine human as part of what is clearly an academic exercise -- becomes more appealing. And let's not forget, as with the Imitation Game, it works both ways: the human trying to imitate the machine is just as worthwhile, and allows just as much subtlety.

Where Turing Test machines -- the Eliza clones -- have fallen down is in their assumption that natural language parsing can be done without intelligence, whereas the reverse is true: the intelligence must come first, then you can try and teach it a language.
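
To see how little understanding is actually involved, here is a toy Eliza-style responder in Python. The patterns and canned replies are invented for illustration (Weizenbaum's real script was far larger), but the principle is the genuine one: substitution on surface patterns, with no comprehension anywhere.

    import random
    import re

    RULES = [
        (re.compile(r"\bi need (.+)", re.I),
         ["Why do you need {0}?", "Would {0} really help you?"]),
        (re.compile(r"\bi am (.+)", re.I),
         ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (re.compile(r"\bbecause\b", re.I),
         ["Is that the real reason?", "What other reasons come to mind?"]),
    ]
    FALLBACKS = ["Please go on.", "Tell me more.", "How does that make you feel?"]

    def respond(text):
        # First matching surface pattern wins; no parsing, no meaning.
        for pattern, replies in RULES:
            m = pattern.search(text)
            if m:
                return random.choice(replies).format(*m.groups())
        return random.choice(FALLBACKS)

    print(respond("I need a holiday"))  # e.g. "Why do you need a holiday?"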

I think some people miss the point of the Turing test. The point is not to determine how intelligent someone or something is, but to determine how good a computer is at imitating a human.

It is kind of a joke to imply that a human fails the Turing test. On the surface, it says they're stupid. Literally, it means that for some reason, someone mistook them for a computer. Reasons for this are varied: for example, they could have just been acting incredibly stupid. (It's not hard to do that, you know.) But also, adherence to strict rules, exceedingly lengthy answers produced rapidly (good knowledge coupled with high typing speed), refusal to answer (or demonstrate understanding of) off-topic questions, ability to solve math problems very quickly... all of these could cause a human to fail a Turing test. Actual documented Turing tests have included humans that failed for all of these reasons; essentially, because they were too smart or too rigid.
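
Written down, that kind of naive judging looks something like the following. Every name and threshold here is invented, which is rather the point: cues this crude will flag a fast, knowledgeable, or rigid human as a machine.

    # Toy scoring of the cues described above.
    def naive_judge(reply_seconds, word_count,
                    solved_math_instantly, engaged_off_topic):
        suspicion = 0
        if reply_seconds < 2 and word_count > 80:
            suspicion += 1  # a long answer produced implausibly fast
        if solved_math_instantly:
            suspicion += 1  # arithmetic answered quicker than a person "should"
        if not engaged_off_topic:
            suspicion += 1  # sticking rigidly to topic reads as rule-following
        return suspicion >= 2  # True: the judge guesses "computer"

    # A fast typist with good knowledge trips two cues and "fails":
    print(naive_judge(reply_seconds=1.5, word_count=120,
                      solved_math_instantly=True, engaged_off_topic=True))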

Overall, the test does not test how human a human is, but how human they act during the test. Except as a joke, it's fairly meaningless to say a human fails the Turing test. Humans are included in the test as a control in the experiment.

As for computers passing the Turing test... this is now a reality, and happens quite often these days. You just have to lower your standards as to what you expect of a human. Unfortunately, for some reason those expectations don't seem very high in chat systems like IRC. A sufficiently large expert system that is able to produce meaningful replies can pass the Turing test fairly easily. In IRC, it probably wouldn't take more than a few months to build such a system to pass when evaluated by casual observers. Around 1993, I remember witnessing a 3-year-old 12M expert system in IRC pass the Turing test so well that we were unable to convince some people that it wasn't human.

To pass the Turing test, it doesn't have to be intelligent. It doesn't have to be powerful. It just has to be able to imitate a human, even to the point of imitating human flaws (such as slowness and stupidity). Alan Turing's original intent for the computer Turing test was to determine when the computer's reactions were sufficiently complex as to fool a human.
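
By way of illustration, here is a toy sketch of imitating those human flaws: pacing the reply at a plausible typing speed and occasionally leaving a typo in. The rates below are made up.

    import random
    import time

    def humanize(reply, chars_per_second=7, typo_rate=0.02):
        out = []
        for ch in reply:
            if ch.isalpha() and random.random() < typo_rate:
                out.append(random.choice("abcdefghijklmnopqrstuvwxyz"))
            else:
                out.append(ch)
            time.sleep(1.0 / chars_per_second)  # simulate human typing pace
        return "".join(out)

    print(humanize("I'm not sure, let me think about that."))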
