A lot of people harbor misconceptions about the Church-Turing Thesis. While it is a very significant statement about mathematics, its scope is not as broad as some would make it.

The Church-Turing thesis (developed by Alonzo Church and Alan Turing out of their work on the Entscheidungsproblem) says that any effective method can be reproduced by a Turing machine; hence any problem which can be solved by an effective method can be solved by a Turing machine. Here "effective" is simply a synonym for "mechanical" or "algorithmic", which I suspect is the cause of a lot of the confusion.

An effective method, in this context, is any algorithm which:

  • Has a finite number of steps (and each step is well defined).
  • Will (barring errors) always produce the same result given the same input.
  • Can be carried out by a human using only paper and pencil, and requiring no special insight to perform (this last requirement is a troubling one, since "requiring no special insight" is not a terribly well-defined term).
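A classic illustration of an effective method meeting all three criteria is Euclid's algorithm for the greatest common divisor (the sketch below is my own example, not one from the original discussion):

```python
# Euclid's algorithm: an effective method in the sense above.
# Finitely many well-defined steps, deterministic (same input, same
# result), and simple enough to carry out with pencil and paper.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b   # each step is mechanical: divide, keep the remainder
    return a

print(gcd(48, 36))  # 12
```

Every step is a rote arithmetic operation, so no "special insight" is needed to carry it out, which is exactly what makes it effective in this technical sense.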

A Turing machine is a logical construct which can exist in a number of internal states, and can read and perform simple operations on data stored in a linear fashion (traditionally on a tape). A universal Turing machine is one which can reproduce the functioning of any other Turing machine (see below).
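The description above can be made concrete with a small simulator. This is a sketch under modern conventions (a transition table mapping state and symbol to a new symbol, a head move, and a new state), not Turing's original formulation; the example table, which increments a binary number on the tape, is my own:

```python
# A minimal Turing machine simulator. The "machine" is just a transition
# table: (state, symbol) -> (symbol to write, move L/R, next state).
def run(tape, table, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))   # sparse tape, indexed by head position
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        new_symbol, move, state = table[(state, symbol)]
        tape[pos] = new_symbol
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: increment a binary number. Walk right to the end of
# the number, then carry leftward until a 0 (or blank) absorbs the carry.
table = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}
print(run("1011", table))  # 1100  (binary 11 + 1 = 12)
```

A universal machine is then just a Turing machine whose tape holds both a transition table like the one above and an input, and which does what `run` does; the simulator is the whole trick.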

Church first determined that any effectively calculable function can be computed by an automaton, while Turing demonstrated that the actions of any such automaton can be reproduced (on a logical level) by a Logical Computing Machine (LCM, which eventually became known as a Turing machine). Hence any device capable of solving such problems is said to be Turing equivalent.

This means if you have two machines which are both Turing equivalent, you can be assured that both have the same logical functioning, and any problem which one can solve, the other can solve (though one might be able to solve it significantly faster). Technically, the ancient IIsi sitting on my desk and the shiny new G4 on yours are doing essentially the same things in a logical sense. Yours is just able to do them faster, and possibly has more convenient and impressive systems of input and display.

This does not mean that, "All problems which can be solved, can be solved by a computer" or the like. There are problems which cannot be formulated in a logical sense, much less solved by an effective (i.e. mechanical) method.

One of the more interesting questions brought up by the Church-Turing thesis is, "Can the human mind be considered a Turing equivalent machine?"

If the functions of our minds are solely dependent on our brains (or perhaps the rest of the body, to a lesser extent), and the functioning of our brain is solely dependent on neurons which always fire the same way when a certain action potential is reached, then our minds would be Turing machines. Neuroscience has not yet proven this to be the case, however; though it is making heroic strides in that direction, the functioning of neurons is a delicate thing, possibly influenced by quantum fluctuations and who knows what else.

Still, a mind is capable of systematic logic, so isn't it a Turing machine? This reasoning is faulty. All Turing machines are capable of systematic logic, but not all things capable of systematic logic are necessarily Turing machines. Further, the functioning of our minds is part of the definition of an "effective algorithm", making any argument that our minds are Turing machines somewhat circular. Our minds might instead be Turing equivalent-plus (Te+) machines; that is, they might incorporate all the functioning of Turing machines, plus some additional functioning.

To some extent this is a semantic quibble. Technically, no device in existence is a Turing machine, since a Turing machine is a purely logical construct (this is why we call actual computers "Turing equivalent" rather than simply "Turing machines"; the distinction is often blurred when not important, however). By actually building something you are bound to incorporate some additional functionality. In most cases this functionality does not affect the logic of the machine, so we would not want to call it a Turing equivalent-plus machine.

I assert that it is possible, however, to build such a machine that does everything a Turing machine does (in a logical sense), and more. We take, as an initial blueprint, a universal Turing machine, then rig it to halt after a certain random number of steps (dependent on some quantum event), if it doesn't halt on its own before then. Such a machine would be theoretically capable of solving any problem a Turing machine could, but would not do so reliably (since it would sometimes halt before finishing a problem). It would also not be subject to any halting problem, since it would always stop eventually (though it would not be able to solve any problem a traditional Turing machine couldn't, so this is a hollow improvement). Most importantly, the functioning of such a machine could not be reproduced by a traditional Turing machine, since a traditional Turing machine would have no way of predicting when the Turing equivalent-plus machine will halt.
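The rigged machine can be sketched in a few lines. One caveat, flagged loudly: `random.randint` below is a classical pseudorandom stand-in for the quantum event, so this particular simulation *could* still be reproduced by a Turing machine; the argument above depends on the step limit being genuinely unpredictable. The names `rigged_run`, `count_to`, and `loop_forever` are hypothetical illustrations:

```python
import random

# A sketch of the "rigged" universal machine: run a program step by step,
# but cut it off after a randomly drawn number of steps. A real Te+
# machine would draw the limit from a quantum event; random.randint is
# only a classical stand-in for illustration.
def rigged_run(program, max_random_steps=10_000):
    """program: a generator yielding once per step (it may never finish).
    Returns ('done', last_value) or ('cut off', None)."""
    limit = random.randint(1, max_random_steps)
    result = None
    for steps, result in enumerate(program, start=1):
        if steps >= limit:
            return ("cut off", None)   # the random halt fired first
    return ("done", result)            # the program halted on its own

def count_to(n):          # a program that halts on its own
    for i in range(1, n + 1):
        yield i

def loop_forever():       # a program that never halts on its own
    while True:
        yield None

print(rigged_run(count_to(5)))      # usually ('done', 5), unless cut off early
print(rigged_run(loop_forever()))   # always ('cut off', None)
```

Note the two properties claimed above: the non-halting program is always stopped eventually, and even the well-behaved program is only solved unreliably.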

It is theoretically possible for a more elegant version of such a machine to solve problems in a structured but (to some degree) randomly determined way, and thus to solve problems which a traditional Turing machine cannot, or even to analyse its own halting history and draw conclusions from it. True, any additional functionality that a Te+ machine has will likely come at a price, but such a machine might prove useful in some circumstances. It is then possible that the human mind is some sort of Turing equivalent-plus machine, and thus able to do things which a traditional Turing machine cannot.
