(D I S C L A I M E R : I am an imaginary philosopher, not a real mathematician, and my intention is to supplement, not to correct, rp's writeup, above.)

Mathematical truth is widely thought to be a kind of absolute, pure, eternal truth - scientific theories, philosophies, theologies may come and go, but mathematical truth, that bedrock of certainty, is the perfection to which these lesser truths aspire.

But the last hundred-odd years have seen outrageous controversy over how well founded this faith in mathematical certainty really is.

That's not to say there haven't been crises in the history of mathematics before - the Pythagorean discovery and naming of irrational numbers, for example. The very word 'irrational' - consider that the Greek word logos meant both 'ratio' and 'reason' - tells us how deeply this crisis was felt.

The current crisis started when Georg Cantor came up with his theory of infinite sets. Implicit in the notion of counting numbers is that any number has a successor - you can always add 1. Cantor was interested in what happens if you take this to the infinite limit - and beyond.

If you take the simple sequence: 1, 2, 3, ... and carry on adding numbers forever, Cantor decided, you will end up with a number which he called omega.

Having reached this number - if it really is a number - you can then carry on in just the same way, continuing the sequence: 1, 2, 3, ... omega, omega+1 ...
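
To give a flavour of the climb (a sketch in the same spirit, using notation not in the original):

  1, 2, 3, ... omega, omega+1, omega+2, ... omega*2, ... omega*3, ... omega^2, ... omega^3, ... omega^omega, ...

Each '...' marks another infinite stretch; each time a sequence is exhausted, a new, larger ordinal is waiting beyond it.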

Cantor looked at a lot of sequences like that, and devised a clever scheme to make sense of the properties of the numbers you end up with when you extend the sequences. He got as far as the number:

omega^omega^omega^omega^omega^... (an infinite tower of exponents)

or: omega to the power of omega to the power of omega, and so on, omega times. (Forever.) This number, called epsilon zero, is the first for which the equation x = omega^x is true.

Cantor's theory, believe it or not, made it possible to talk sensibly about such numbers by finding a natural ordering in them. For this to work, numbers had to be defined in terms of Cantor's set theory, and numbers defined in this way are known as ordinals.

Infinite ordinals like omega correspond to infinite sets, and infinite sets can have infinite subsets. To Cantor, though, the interesting thing about infinite sets is that they can match element for element (or "map 1 to 1, and onto" in more modern lingo) with some of their own proper subsets. Sets whose elements can be matched like this are said to have the same cardinality. Epsilon zero is a larger ordinal than omega (in the set-theoretic construction of the ordinals, each ordinal is the set of all smaller ordinals, so omega is a subset of epsilon zero), but since their elements can be mapped one-to-one the two have the same cardinality, an infinite cardinal known as aleph null.
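
To see what such a matching looks like, here is a minimal Python sketch (my own illustration, not from the source): the natural numbers are matched one-to-one and onto with a proper subset of themselves, the even numbers.

  # The map n -> 2n pairs every natural number with an even number,
  # and every even number is hit exactly once - a one-to-one, onto
  # correspondence between a set and a proper subset of itself.
  def to_even(n: int) -> int:
      return 2 * n        # distinct inputs give distinct outputs (one-to-one)

  def from_even(m: int) -> int:
      return m // 2       # recovers n from 2n, so every even number is hit (onto)

  # Spot-check the correspondence on an initial segment:
  for n in range(5):
      assert from_even(to_even(n)) == n
  print([(n, to_even(n)) for n in range(5)])  # [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8)]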

Even so, using his famous diagonal argument, Cantor was able to show that some infinite sets are bigger (have greater cardinality) even than omega and epsilon zero: the real numbers cannot be matched one for one with the integers, and so must have a greater cardinality. This was a revolutionary and upsetting idea - that there could be smaller and larger infinities. People had, in effect, been working with a relatively imprecise, and therefore mathematically suspect, notion of 'infinity' until then.
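
The mechanics of the diagonal argument can be shown on a finite table (again my own sketch, in Python): given any claimed list of binary sequences, flip the entries down the diagonal and you obtain a sequence that differs from the i-th row in its i-th place - so it cannot appear anywhere in the list.

  # A truncated 'enumeration' of binary sequences, one per row:
  table = [
      [0, 1, 0, 1],
      [1, 1, 1, 1],
      [0, 0, 0, 0],
      [1, 0, 1, 0],
  ]
  # Flip the diagonal: the result differs from row i at position i.
  diagonal_flip = [1 - table[i][i] for i in range(len(table))]
  print(diagonal_flip)  # [1, 0, 1, 1]
  assert all(diagonal_flip[i] != table[i][i] for i in range(len(table)))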

This work of Cantor's provoked much spleen from some of the leading mathematicians of the day. Poincare said: "set theory is a disease, from which I hope future generations will recover." Accusations of fantasy and insanity were made. Under pressure from these attacks, Cantor had a nervous breakdown, and never obtained a prestigious academic position.

Nonetheless, the ideas were fruitful enough that they became widely adopted and mathematics was largely re-defined in terms of set theory. Enter Bertrand Russell.

Russell noticed that, at a critical juncture, an argument like Cantor's leads to a set which can be defined as 'the set of all sets that are not members of themselves.' The question to ask about this set, of course, is 'is it a member of itself?' - if it is, it isn't; and if it isn't, then it is.
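
Written symbolically (a standard modern rendering, not Russell's own notation):

  R = \{\, x : x \notin x \,\} \quad\Longrightarrow\quad R \in R \iff R \notin R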

Russell lost no time in bringing this paradox to everyone's attention, along with similar ones such as Berry's Paradox (consider 'the smallest positive integer not definable in under eleven words' - a phrase of ten words). The results were alarming because, after number theory (and other areas of mathematics) had been re-defined in terms of set theory, an inconsistency in set theory looked very much like a deep inconsistency in mathematics itself.

David Hilbert proposed a programme of activity to solve these problems. Reasoning that the paradoxes arise from fuzziness in the terminology of set theory, he prescribed a further re-definition of mathematics, this time in terms of formal systems.

Armed with indubitable axioms, expressed in a formal language with strict rules for going from one statement to the next, Hilbert argued, mathematicians could achieve incontestable truths. Mathematics would be safe again. To make sure there was no cheating, Hilbert specified that the derivation rules (for going from one statement to the next in a proof) should be so strict they could be checked by a machine.
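
To make 'checked by a machine' concrete, here is a toy formal system of my own invention (nothing like Hilbert's actual systems, but the same idea), with a Python proof checker:

  # Toy formal system: the single axiom is the string "A"; from any
  # proved string s you may derive s + "B" or s + s. A 'proof' is a
  # list of strings, each following from the previous one by a rule.
  def follows(prev: str, curr: str) -> bool:
      return curr == prev + "B" or curr == prev + prev

  def check_proof(lines: list[str]) -> bool:
      # Purely mechanical: no insight required, only rule-matching.
      return lines[0] == "A" and all(
          follows(a, b) for a, b in zip(lines, lines[1:]))

  print(check_proof(["A", "AB", "ABAB"]))  # True: each line follows by a rule
  print(check_proof(["A", "ABB"]))         # False: no rule yields "ABB" from "A"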

Under this regime, the so-called Axiomatic Method, Hilbert optimistically hoped to show, once and for all, that mathematics was:

  1. complete -
    • every true statement can be proved
  2. consistent -
    • no false statement can be proved
  3. decidable -
    • there is a mechanical procedure to determine the truth of any mathematical statement.

These results, if they could be established, would enshrine forever the notion of mathematical truth as pure eternal verity, untainted by the limitations of its humble human originators. I think it is no exaggeration to say that most of us tend to think of mathematical statements as though mathematics does indeed meet these requirements.

Hilbert's programme proceeded for some time, pursued by many brilliant mathematicians, until Kurt Godel put his famous spanner in the works by showing that any formal system good enough to be a candidate for Hilbert's programme - any consistent system rich enough to express ordinary arithmetic - must contain a statement that is true but unprovable within the system: specifically, a statement that asserts its own unprovability in that formal system.
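
In modern notation the Godel sentence G is often written like this (a standard rendering, not from the source):

  G \iff \neg\,\mathrm{Provable}(\ulcorner G \urcorner)

If the system proves G, it proves something false (G says it is unprovable), so it is inconsistent; if it cannot prove G, then G is true but unprovable.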

This gave the lie to Hilbert's first goal, and undermined the second (please see Godel's Theorem for the details). Godel didn't show that mathematics is inconsistent, but he did establish that such a system cannot prove its own consistency - so Hilbert's hoped-for proof of consistency could never be carried out from within mathematics itself.

While everyone was still reeling from this blow, Alan Turing entered the picture. Focusing on Hilbert's idea that the steps in a proof could be mechanically checked, he formalised the notion, giving a very concrete and specific definition of mechanical checking (and, some would say, by doing so inventing the computer).

By showing that, in general, there is no mechanical way to predict whether a given computer program will halt, Turing showed that there is likewise no mechanical way to decide, in finite time, the truth of a given mathematical statement. For suppose a computer were set up to take the statement as input and work through all possible proofs, one by one, halting if it proved the statement: a mechanical procedure for determining whether such a program halts would be equivalent to a procedure for determining the provability of the statement.
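
In pseudo-Python, the reduction looks like this (a sketch only: enumerate_proofs and checks are imaginary helpers standing in for a real proof enumerator and proof checker):

  # Hypothetical sketch: this program halts exactly when `statement`
  # is provable, so a mechanical halting test would give a mechanical
  # provability test - which Turing showed cannot exist in general.
  def search_for_proof(statement):
      for proof in enumerate_proofs():      # every candidate proof, one by one
          if checks(proof, statement):      # the mechanical check Hilbert demanded
              return proof                  # found one: halt
      # if no proof exists, this loop runs forever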

So much for Hilbert's third aspiration.

So where does this leave our notion of mathematical certainty? Mathematicians have largely ignored these problems, and continued in the normal way. Apart from the financial advantages of this, it seems reasonable in that (for example) Godel's result concerns a strange and rather suspiciously self-referential type of proposition - not one that most people would be interested in anyway, other than as a diversion.

But in the standard set theory formalism (ZFC) there is an example of a naturally interesting proposition which can neither be proved nor disproved (see continuum hypothesis). This means that the mathematician working in set theory has a choice - she can work in plain ZFC, or choose to add the continuum hypothesis (or its negation) as an extra axiom. Whichever she chooses, she will be able to prove a different set of results - a different set of 'mathematical truths' - and yet there is no 'guidance from above' that will establish a 'god-given' validity to the acceptance or rejection of the continuum hypothesis as an axiom.
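
Stated symbolically (standard notation, not from the source), the continuum hypothesis says there is no cardinality strictly between that of the integers and that of the reals:

  2^{\aleph_0} = \aleph_1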

Intuitionism, referred to in rp's writeup, above, is another example of mathematicians working inside a formalism with a different axiom set. The law of the excluded middle, which can neither be proved nor disproved from the other axioms of propositional logic, is dropped, in much the same way that some mathematicians prefer to work in ZFC without the continuum hypothesis.
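
A standard illustration of what is lost (not from the source): classically, one proves that there exist irrational numbers a and b with a^b rational by cases on the excluded middle, without ever determining which case actually holds - exactly the move intuitionists reject.

  \text{Either } \sqrt{2}^{\sqrt{2}} \text{ is rational (take } a = b = \sqrt{2}\text{)},
  \text{ or it is irrational (take } a = \sqrt{2}^{\sqrt{2}},\ b = \sqrt{2}\text{, so } a^b = \sqrt{2}^{2} = 2\text{)}.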

It seems we must conclude that mathematics is ultimately founded on both a) the long-term empirical practice of working through the consequences of the various formalisms that we can invent to satisfy our notions of universality and non-ambiguity, and b) our own human creativity in inventing these formalisms. Perhaps, in a deeper sense, we should thank the presence of regularity in the cosmos, without which neither a) nor b) would be possible.




This writeup is based, in large part, on the guided tour given by Gregory Chaitin's 'A Century of Controversy over the Foundations of Mathematics' - recommended, as an insider's view - at:
http://www.cs.auckland.ac.nz/CDMTCS/chaitin/cmu.html