(See the pi defense for what prompted this, and normal number for a definition of the concept)

One frequently hears it stated without proof that every finite sequence of digits occurs in the decimal expansion of pi. There's a good reason why you never hear the proof, though: we don't know a proof; we don't even know if it's true!

Other variations of this include claims that the digits of pi "behave randomly". Of course they don't (every time I've looked at the 3rd digit after the decimal point, it was 1; that doesn't look very random to me).
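That said, the digits do *look* statistically uniform as far as anyone has checked. Here's a quick empirical check (which of course proves nothing about normality): a Python sketch that computes decimal digits of pi with Machin's formula, pi = 16·arctan(1/5) − 4·arctan(1/239), in plain integer arithmetic, then tallies how often each digit appears. The function names are mine, not from any particular library.

```python
from collections import Counter

def pi_digits(n):
    """First n decimal digits of pi after the point, via Machin's formula
    16*arctan(1/5) - 4*arctan(1/239) in fixed-point integer arithmetic."""
    scale = 10 ** (n + 10)  # 10 guard digits absorb truncation error

    def arctan_inv(x):
        # arctan(1/x) * scale, summed from the alternating Taylor series
        total, power, k = 0, scale // x, 0
        while power:
            term = power // (2 * k + 1)
            total += term if k % 2 == 0 else -term
            power //= x * x
            k += 1
        return total

    pi_scaled = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return str(pi_scaled // 10 ** 10)[1:]  # drop the leading "3"

digits = pi_digits(1000)
print(Counter(digits))  # each of 0..9 shows up roughly 100 times
```

In the first 1000 digits every digit lands somewhere near the expected count of 100, which is exactly what a normal number's expansion would do; it just isn't evidence of anything in the mathematical sense.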

What it boils down to is proving that pi is normal. Now, we know a great deal about pi, so how come we don't know this? Part of the reason appears to be the mixed continuous-discrete nature of normality. Almost everything we know about pi comes from "continuous" arguments (series, integrals, and the like); these don't really help when it comes to examining the string of digits of pi, which is essentially a discrete object. (Note, however, that Plouffe et al., with the Bailey-Borwein-Plouffe formula, can compute the `n`th hexadecimal digit of pi rapidly, without computing any of the digits before it; this is a breakthrough, but we still don't know whether it helps prove pi is normal.)
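For the curious, the BBP formula is pi = Σ 16^(−k) (4/(8k+1) − 2/(8k+4) − 1/(8k+5) − 1/(8k+6)), summed over k ≥ 0. The digit-extraction trick is to look at the fractional part of 16^n·pi, using modular exponentiation so the left-hand partial sums never get large. A minimal Python sketch (my own naming; floating-point precision limits how far out it can reach):

```python
def pi_hex_digit(n):
    """The (n+1)-th hexadecimal digit of pi after the point
    (n = 0 gives the first fractional hex digit, which is 2)."""
    def S(j):
        # fractional part of 16^n * sum_k 1/(16^k * (8k+j))
        s = 0.0
        for k in range(n + 1):
            # pow with a modulus keeps the numerator tiny
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        # tail terms shrink geometrically; a handful suffice
        k = n + 1
        while True:
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                return s
            s += term
            k += 1

    x = (4 * S(1) - 2 * S(4) - S(5) - S(6)) % 1.0
    return "0123456789abcdef"[int(16 * x)]

# pi in hex is 3.243f6a8885a308d3...
print("".join(pi_hex_digit(i) for i in range(6)))
```

The striking part is that `S(j)` never touches the earlier digits: the modular `pow` throws away everything to the left of the hex point as it goes.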

It's particularly frustrating, because if we just pick a *random* number it will be normal almost surely. But that means nothing for any given number (for instance, we understand rational numbers completely, and a rational can *never* be normal: its expansion is eventually periodic, so almost all digit strings fail to appear with the right frequency). It may be that Mathematics is not ready for such questions.
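The rational case, at least, is easy to see concretely. Take 1/7: its expansion just repeats the block 142857 forever, so the digit 0 (among others) never appears at all, let alone with frequency 1/10. A tiny long-division sketch in Python (helper name is mine):

```python
def decimal_digits(p, q, n):
    """First n decimal digits of p/q (0 < p < q), by ordinary long division."""
    digits, r = [], p
    for _ in range(n):
        r *= 10
        digits.append(r // q)
        r %= q
    return digits

d = decimal_digits(1, 7, 60)
print(d[:6])   # the repeating block: [1, 4, 2, 8, 5, 7]
print(0 in d)  # False -- the digit 0 never occurs, so 1/7 is not normal
```

Since there are only finitely many remainders `r` modulo `q`, the division must eventually cycle, and that's the whole reason no rational can be normal.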