A computable number is a number that can be computed. D'oh.

Of course, the much more interesting case is a number that cannot be computed. This may seem illogical to the layman at first; after all, computing numbers is what computers are all about. But what about real numbers whose decimal expansions are infinite and non-repeating? Some of them, like pi and e, are computable. Of course they are infinitely long, so they cannot be computed in full, but we have well-defined methods to compute any finite prefix of them in finite time.
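To make "any finite prefix in finite time" concrete, here is a small sketch (my own illustration, not part of any standard library) that computes the first n decimal places of e using nothing but integer arithmetic on the series e = sum 1/k!:

```python
def e_digits(n):
    # Compute floor(e * 10^n) from the series e = sum over k of 1/k!.
    # We work with scaled integers; four guard digits absorb both the
    # truncation of the series tail and the floor errors of integer division.
    scale = 10 ** (n + 4)
    total, term, k = 0, scale, 0   # term = floor(scale / k!)
    while term > 0:
        total += term
        k += 1
        term //= k
    return total // 10 ** 4        # drop the guard digits

print(e_digits(10))   # 27182818284, i.e. e = 2.7182818284...
```

The point is not efficiency but finiteness: the loop provably terminates (term shrinks to 0 once k! exceeds the scale), so every prefix of e is delivered after finitely many steps.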

What about the rest of them, those that don't have a spiffy algorithm to describe them?

Or better yet, what about integers we can specify but whose value we don't know (and may never learn)? They exist. For example, consider the number of steps a Turing machine takes to disprove Goldbach's conjecture by brute-force search: if the conjecture is false, the machine eventually finds a counterexample and halts, and that step count is a perfectly well-defined integer. Unfortunately Gödel's incompleteness theorem ensures that there will always be statements which can neither be proven nor disproven, and if this machine's halting is such a statement, then the value of that number (indeed, whether it exists at all) will remain unknown forever. More generally, deciding whether such a machine halts is the Halting Problem.
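The Goldbach machine in question is almost embarrassingly short to write down. A sketch (the helper names are mine; trial division stands in for the machine's tape-shuffling):

```python
def is_prime(m):
    # Trial division: slow, but it is honest brute force.
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def has_goldbach_pair(n):
    # Does the even number n >= 4 split as a sum of two primes?
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n))

def goldbach_counterexample():
    # Halts, returning the counterexample, if and only if
    # Goldbach's conjecture is false. Nobody knows whether it halts.
    n = 4
    while True:
        if not has_goldbach_pair(n):
            return n
        n += 2
```

Running goldbach_counterexample() is exactly the experiment the paragraph describes: its step count is the mystery integer, and asking whether the call ever returns is an instance of the Halting Problem.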

We call a real number x computable if it may be described by a Turing machine T, in the sense that when T is given a number n as input, it outputs the first n digits (decimal, or whatever base you like) of x. Equivalently (and more conveniently, perhaps), there is a (non-halting) algorithm which prints out the digits of x one by one. Computable numbers are also called recursive numbers (when defined through recursive functions rather than TMs) or effectively definable numbers (as distinct from definable numbers, of which there are many more). This definition requires what seem to be more "elementary" tools than a rigorous definition of the real numbers.
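The "non-halting algorithm which prints out the digits" reading is easy to exhibit. Here is one such machine for sqrt(2), sketched as a Python generator (a construction of mine, leaning on exact integer square roots):

```python
from itertools import islice
from math import isqrt

def sqrt2_digits():
    # A non-halting algorithm printing the decimal digits of sqrt(2):
    # 1, 4, 1, 4, 2, 1, ...  After k rounds the state is the exact
    # integer floor(sqrt(2) * 10^k), so each yielded digit is correct.
    prev = isqrt(2)
    yield prev                    # the integer part
    k = 1
    while True:
        cur = isqrt(2 * 10 ** (2 * k))
        yield cur - 10 * prev     # the k-th digit after the point
        prev, k = cur, k + 1

print(list(islice(sqrt2_digits(), 6)))   # [1, 4, 1, 4, 2, 1]
```

The generator never terminates, but any finite prefix of it does, which is precisely the equivalence between the "first n digits on input n" machine and the endless digit-printer.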

Let's think about that definition. The integers are obviously computable. So are the rationals (just fire up a long division algorithm). The square root of 17 is, too, along with sin(e-2.1): any old approximation method shows this. It's a hop and a skip to deciding that all algebraic numbers are computable, and throwing e and pi into your expressions never takes you outside the computable numbers. They form (rather trivially) a field, and in fact a real closed field.
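That "any old approximation method" for sin(e-2.1) can be spelled out with guaranteed error bounds. A sketch of mine, using exact rational arithmetic and two alternating-tail estimates: compute e to within eps/2, then note |sin a - sin b| <= |a - b|, so the error survives the sin unchanged:

```python
from fractions import Fraction

def e_approx(eps):
    # Partial sums of e = sum 1/k!; once the next term t satisfies
    # 2t <= eps, the whole remaining tail is below eps.
    total, term, k = Fraction(0), Fraction(1), 0
    while 2 * term > eps:
        total += term
        k += 1
        term /= k
    return total

def sin_approx(x, eps):
    # Alternating Taylor series for sin; for |x| < 1 the terms shrink
    # monotonically, so the error is at most the first omitted term.
    total, term, k = Fraction(0), Fraction(x), 0
    while abs(term) > eps:
        total += term
        k += 1
        term *= -x * x
        term /= (2 * k) * (2 * k + 1)
    return total

def sin_e_minus_21(n):
    # sin(e - 2.1) to within 10^-n: eps/2 for the argument, eps/2
    # for the series, and |sin a - sin b| <= |a - b| glues them.
    eps = Fraction(1, 10 ** n)
    x = e_approx(eps / 2) - Fraction(21, 10)
    return sin_approx(x, eps / 2)
```

Every step is finite and every error is certified, which is all the definition of computability asks for.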

Yet not all real numbers can be computable, right? A Gödel numbering thingy shows that the set of applicable Turing machines (or algorithms or whatnot) is countable (as these are finite constructs over a finite alphabet -- think of "all possible C programs", if you really prefer), whereas Cantor's diagonal argument proves the set of real numbers is not (or does it? more on this in a moment). So computable numbers are a mere drop in the vast sea of the reals. Yet every number you will ever describe, every example you will try to show, will by definition be computable. Remember when you first found out about Cantor diagonalization, and you were worried about where all the extra numbers come from, only to be placated with sqrt(2) and pi? Well, that doesn't quite cut the mustard with the computables. If it's in any way tangible (even to a mathematical imagination), that seems to mean there is an approximation algorithm, so there are no convincing "you need to add this number" examples just round the corner...
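The countability half of that argument is constructive enough to program. A sketch: list every finite string over a finite alphabet in length-then-lexicographic order, and you have assigned each "program" a natural number -- a Gödel numbering in miniature (the alphabet here is a toy stand-in for a real programming language's character set):

```python
from itertools import count, islice, product

def all_programs(alphabet="ab"):
    # Yield every finite string over the alphabet, shortest first,
    # lexicographic within each length. Each string gets a position
    # in the sequence: a bijection with the natural numbers.
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

print(list(islice(all_programs(), 7)))
# ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Since every Turing machine (or C program) is one of these finite strings, the computable numbers inherit this countability, and Cantor's argument says the reals do not.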

The root of all evil, in this case (as in many others), is the Axiom of Choice. That "actually" does let you create a whole lot of uncomputable numbers, but these are numbers which you can't otherwise describe at all. However, many results in mathematics do not rely on Choice at all. In a constructivist approach to maths, we'd want proofs to "build" the numbers, sets, and so forth, which they describe. If you decide you're only accepting numbers spewed out by some Turing machine (instead of having the Axiom of Choice), then the diagonal argument fails miserably, as you could never build the machine for the diagonal number. "Semiproof" of this: the diagonal machine would have to emulate an infinite list of different machines, one per digit position, to know what digit not to output at each place. Emulation as such is no obstacle (a universal machine does exactly that); the catch is that you'd "only" need to order the relevant Turing Machines -- those that actually produce a digit stream -- and that "relevant" bit neatly conceals, of course, the halting problem.
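The shape of that semiproof can be made vivid in code. Given a hypothetical enumeration of digit-machines (hypothetical is the operative word: producing it computably is exactly what the halting problem forbids), the diagonal step itself is trivial:

```python
def diagonal_digit(enumeration, n):
    # enumeration(i, j) is *assumed* to return the j-th digit of the
    # i-th computable number -- an oracle no real program can be.
    # Given it, dodging the n-th machine's n-th digit is one line.
    d = enumeration(n, n)
    return 5 if d != 5 else 6

# A toy, finitely-describable "enumeration" to exercise the logic:
toy = lambda i, j: (i + j) % 10
print(diagonal_digit(toy, 3))   # toy(3, 3) == 6, so we output 5
```

All the diagonal number's difficulty lives in the enumeration argument, not in the function body -- which is the point: the construction is easy modulo an oracle you can never build.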

On a side note, you might make a somewhat convincing case for using definable numbers. Constructivism goes out the window, and you'll definitely be adding numbers you can't "point" at, but now it really seems as if you've covered anything you could ever worry about -- there's really no other reasonable way of "specifying" a number. And yet, although this is a proper superset of the computable numbers, it is still countable...

So, if computable numbers are the only numbers you'll ever run into as the result of a calculation or problem (real-world or other), how sure are we that we need the rest of the reals? It makes an interesting exercise (and perhaps more than that) to consider re-founding mathematics with computable elements only. Axiom of Choice problems eliminated! Measure theory reduced to fairly basic bookkeeping (it turns out you basically only need to handle countable unions of intervals, and of course you only have countable sums anyway)! And much more. Of course, by busting the uncountability conspiracy you might be doing quite a few mathematicians out of a lucrative job, but what's that when Truth is at stake?
