Further to hobyrne's w/u:
There are many problems with this argument. One of the more obvious is the postulation that "some 'true' Turing test" exists, one that immediately draws a hard line between the intelligent and the unintelligent. This is clearly nonsense.
It could be that a machine fails the Turing Test sometimes, and passes it at other times. Neither outcome is necessarily a demonstration of the machine's intelligence or its lack of intelligence. In the same way, in the original Imitation Game, should the Man convince his interrogator that he is a Woman, that is no proof in itself that he is female. And if he fails, that is no proof in itself that he is male. In either outcome, the interrogator could be wrong.
Next is the idea that taking the Turing Test is an insult to your intelligence. This part of the argument gravely misunderstands how a Turing Test or Imitation Game should be run.
For one thing, if you try to force an intelligent organism into doing anything it doesn't want to do, it is likely to protest. (That protestation might well be indicative of intelligence in itself, but that's somewhat by-the-by.)
Clearly then, the answer would be to ask the computer if it wants to take the test. More than that, it would be to explain to the machine the purpose and origins of the test, its background, and its relevance in a society undoubtedly skeptical about the prospect of machine intelligence. With that done, the prospect of trying to imitate a human intelligence -- pitted against a genuine human as part of what is clearly an academic exercise -- becomes more appealing. And let's not forget, as with the Imitation Game, it works both ways: the human trying to imitate the machine is just as worthwhile, and allows just as much subtlety.
Where Turing Test machines -- the Eliza clones -- have fallen down is in their assumption that natural language parsing can be done without intelligence, whereas the reverse is true: the intelligence must come first, then you can try to teach it a language.
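To see why that shortcut fails, it helps to look at what an Eliza clone actually does. The sketch below is illustrative (these rules are made up for the example, not Weizenbaum's originals): the program has no understanding whatsoever, only surface pattern matching and canned reflection of the user's own words.

```python
import re

# Made-up ELIZA-style rules: match a surface pattern, echo back a
# fragment of the user's input inside a canned template. There is no
# model of meaning anywhere -- just text substitution.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    # No rule fired: fall back to a content-free prompt, as ELIZA did.
    return "Please tell me more."

print(respond("I am worried about the test"))
print(respond("The weather is nice today"))
```

The second input shows the trick collapsing: anything outside the rule list gets the same stock deflection, because there is no intelligence behind the parsing to fall back on.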