A Machine Intelligence that satisfies Alan Turing's criteria for artificial intelligence according to the Turing Test: i.e. indistinguishable from a human to a remote observer. Is a Turing Intelligence truly intelligent? If it can, for example, hold a conversation, can it be said to be self-aware? Is self-awareness a measure of intelligence? If a machine is so complex that it can fool a "remote observer", is self-awareness a symptom of such complexity?

Is "Is a Turing Intelligence truly intelligent?" an intelligent question?

In Turing's time, many a scientist would have answered: 'no'. We don't know what we mean by intelligence, and it cannot be observed directly, so it's not a scientifically meaningful term. Let's look at observable behaviour first. This is basically the viewpoint of behaviorism, and it is Turing's own approach to the problem (namely, to *replace* the question 'Can machines think?' with the question 'What would it take for machines to appear to think?').

Blandly asking, "Yes, but can these machines *really* think?" defies the whole approach! You'd better come up with something more constructive.

(Just my opinion, of course.)

Alan Turing postulated that a machine could be created that would be indistinguishable from a human when tested in a game now known as the Turing test. However, a better definition of 'machine' is necessary.

The machine Turing imagines is a digital computer. A digital computer is one that can store information and carry out specific tasks regulated by a control element. The machines and analytical engines of Turing's time and earlier were mainly mechanical: the computation was embodied in the various positions that the wheels and gears comprising the machine could be in. Such computers are said to have discrete states, or varying positions. Therefore a digital computer could emulate a discrete state machine. A discrete state machine with infinite memory could have infinitely many states, and with enough states the discrete state machine could imitate humans. Allow me to digress.

Imagine an adding machine and a separate subtracting machine, multiplying machine, and dividing machine. Each separate machine is essentially a digital computer: it would store input, and its control element would regulate the appropriate arithmetic action. Now imagine a discrete state machine that had memory for four discrete states: those of an adder, subtracter, multiplier, and divider. Supposing the discrete state machine had enough memory available, it could take on more and more discrete states and therefore mimic more machines, and possibly humans.
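To make the digression concrete, here is a minimal sketch in Python (the class and state names are my own illustrative choices, not anything from Turing's paper) of a machine with exactly those four discrete states; giving it "more memory" for more states amounts to adding entries to the table.

```python
# A toy discrete state machine: its "memory" is a table of states,
# and its "control element" selects which state (operation) is active.

class DiscreteStateMachine:
    def __init__(self):
        # Four discrete states: adder, subtracter, multiplier, divider.
        self.states = {
            "add":      lambda a, b: a + b,
            "subtract": lambda a, b: a - b,
            "multiply": lambda a, b: a * b,
            "divide":   lambda a, b: a / b,
        }
        self.state = "add"  # the current discrete state

    def set_state(self, name):
        # The control element switches the machine into another state.
        self.state = name

    def run(self, a, b):
        # Carry out the task associated with the current state.
        return self.states[self.state](a, b)

machine = DiscreteStateMachine()
machine.set_state("multiply")
print(machine.run(6, 7))  # 42 -- one machine imitating four separate ones
```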

At the end of his essay "Computing Machinery and Intelligence", Turing concludes that intelligent machines are possible, provided enough memory is available. He also states that more experimentation is necessary to discover an intelligent machine.

The entire idea of the Turing Test is that once a machine is created that is indistinguishable from a human being (at least through a text-based interface), an observer would have no rational basis for denying that machine the status of "intelligent being". All other reasons for denying it (e.g. religious doctrine, "common sense", etc.) would necessarily be irrational.

This is owing to the fact that, in the absence of an irrational presupposition, our only means of judging a being's 'intelligence' or 'consciousness' is to compare what we can perceive through our senses with examples from past experience. This is because 'consciousness' and 'intelligence' are not externally observable qualities like, say, 'redness' or 'relative speed'. They can only be perceived through the secondary traits we associate with them, like 'sense of humor'.

Note that this is true regardless of your views on mind/body dualism and subjective vs. objective reality. It does presume that you believe 'consciousness' to be an extant state; if you do not believe this, however, the entire matter is irrelevant, as you do not believe humans are intelligent either. You're also probably either a sociopath or a hardcore nihilist.

The fact that a human built the machine and understands every detail of how it works is of no importance. It is quite possible that consciousness is an emergent trait rather than an intrinsic one; it may simply be the result of a particular sort of complexity. This would mean that understanding every detail of a creature's construction would not necessarily equate to having a complete understanding of that creature. Also, consider this: a being could conceivably exist that understands every detail of human behavior, both how and why people are as they are. This being could also have created us. The existence of such a being would certainly not negate human intelligence; indeed, there is no reason this creature couldn't be a human.

Look for this node to become relevant in your day-to-day life about twenty-five years from now.
