Developing a mathematical sense of infinity can be fraught with peril and contradiction - many false mathematical proofs depend on an obscured assertion about infinity (or its close relation, zero) that allows nonsense to follow. Attempts to juggle notions such as division by zero, or to make sense of '0/0', tend to crash and burn horribly too. Renditions such as 'a number larger than any other' or 'the number of times zero has to be added to itself to get something non-zero' turn out to make as much sense as 'a square circle'. A passing familiarity with calculus, limits and infinitesimals often confuses things further, and of course people feel that, intuitively, there is something that should be called infinity. So rather than just asserting that things like '1/0 = infinity' simply don't work, I'd like to give an overview of why they're misguided approaches. To do so, it seems necessary to give some insight into what it is mathematicians do with numbers (when we bother with them at all), and, to prevent anyone feeling cheated out of something valuable, to give some examples of where a notion of infinity can be usefully bolted to a mathematical framework.

Mathematical Playgrounds

My first observation is that the problem doesn't really arise from infinity itself but rather from the casual usage of mathematics outside of mathematical disciplines. Whilst numeracy is a vital day-to-day skill, high-level mathematics isn't, and people can usually get by with various rules of thumb for arithmetic. Unfortunately, rules like 'anything divided by itself is 1' or 'if a/b = c, then a = bc' should really be qualified by various regulations and exceptions that, when omitted, cause problems whenever a zero crops up.

In fact, even claiming the right to use division can be a step too far, mathematically. Often, mathematicians find it helpful to tackle problems not as they stand, but in terms of a more general framework. They choose a playground (somewhere to do the maths) and some toys (something to do maths with), then see what they can create as a result. Sometimes it can be even more general than that - a set of toys to use in any playground. For instance, the theory of metric spaces equips a space X (and we don't care what - it could be anything from the set containing 0 and 1 to the entire complex plane, or not numerical at all - nodes on E2, or elephants in Africa) with a rule known as the metric, which, given two things from X, returns a number meant to indicate the 'distance' between them. All that we ask is that this metric be well-behaved in certain ways - an object is no distance from itself, x is as far from y as y is from x, and so on.
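
For the concretely minded, here's a small sketch in Python (my own illustration, nothing to do with the formal theory) of two metrics - the usual distance on the real numbers, and the 'discrete' metric that works on absolutely any set - along with a finite spot-check of the rules just mentioned:

  def usual_metric(x, y):
      """Distance between two real numbers."""
      return abs(x - y)

  def discrete_metric(x, y):
      """Works on any set at all: distance 0 if equal, 1 otherwise."""
      return 0 if x == y else 1

  def looks_like_a_metric(d, points):
      """Spot-check the metric rules on a finite sample (a sanity check, not a proof)."""
      for x in points:
          assert d(x, x) == 0                              # nothing is any distance from itself
          for y in points:
              assert d(x, y) >= 0                          # distances are never negative
              assert d(x, y) == d(y, x)                    # x is as far from y as y is from x
              for z in points:
                  assert d(x, z) <= d(x, y) + d(y, z)      # the triangle inequality
      return True

  print(looks_like_a_metric(usual_metric, [-2, 0, 1.5, 3]))
  print(looks_like_a_metric(discrete_metric, ['aardvark', 'elephant', 0, 1]))

The point is simply that the same checking code doesn't care what the points are, only that the metric behaves itself.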

Why bother? Well, if we can prove something in this seemingly sparse environment, then it's true for all examples of metric spaces we care to think of. For example, both the real numbers and the complex numbers can be thought of as metric spaces in a fairly natural way. But there are many things that are true of the complex plane that do not hold for the reals, and vice versa. So a theorem you prove as a real analyst is worthless to complex analysts unless they can prove it too. If you leaned on some of the special properties of the reals, this might not even be possible. If, however, your argument was couched entirely in terms of metric space properties, then it readily transfers over to the complex numbers too. This abstraction also helps to throw into relief the fundamental differences and similarities between different mathematical environments. Some of the most powerful mathematics arises when links are found between seemingly disconnected topics that allow difficult problems in one to be tackled as easy problems in the other.

A playground for arithmetic

Having hopefully convinced you of the power of abstraction, it's time to look at an environment where we do basic arithmetic on numbers - the kind of place where it's tempting to throw infinity into the mix. Whilst my previous example (metric spaces) was from analysis (it's actually a special case of the even more abstract study of topology, which goes even further by ditching the notion of a metric and working in set-theoretic terms), the natural place for arithmetic is algebra. So here's an algebraic structure.

Groups (roughly)

We can turn a set G into a group by pairing it with a binary operation * - that is, one which takes pairs (hence binary) of objects from G and returns another object from G. We also demand that:

  • * is associative, meaning (a*b)*c=a*(b*c)
  • * has an identity e in G, meaning applying * to any object and e just gives you the object back
  • any object a in G has an inverse, such that applying * to a and the inverse of a gives us, magically, that identity again.

Which probably sounds like so much gibberish, so an example is in order.

The Integers, Z, are the 'whole numbers', including 0 and all the negative numbers. If we use addition as our operation *, and write a+b to denote the action of adding a to b (that is, more formally, applying the operation * to the pair a,b), then we can see that a group has emerged - 0 is the identity, since 0+a=a; and -a is the inverse of a, since a+(-a) is 0, which we just confirmed was the identity.
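
Since Z is infinite we can't check every case mechanically, but a quick Python spot-check (my own illustration) of the three group rules on a small sample of integers may make them feel less abstract:

  # Spot-check the group axioms for (Z, +) on a finite sample - a sanity check, not a proof.
  sample = range(-5, 6)
  identity = 0

  for a in sample:
      assert a + identity == a and identity + a == a       # 0 is the identity
      assert a + (-a) == identity                           # -a is the inverse of a
      for b in sample:
          for c in sample:
              assert (a + b) + c == a + (b + c)             # + is associative

  print('group rules hold on the sample')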

The notation * suggests we could try multiplication instead of addition and get a group, but actually we'll come unstuck. An identity is fine, since 1*a=a for any a we might choose from Z (this stipulation is important!); but you can't find a value in Z that serves as an inverse to (for instance) 2 - we all know that 2*(1/2) is 1 as desired, but 1/2 isn't in our playground Z.

In fact, we don't even have a sense of division yet. The sensible approach is to take a/b to mean a*b^-1, where b^-1 is the multiplicative inverse of b, if such a thing exists. In Z, you can only divide in this way by 1 or -1 if you intend to stay in Z. We can expand our attention to Q, the set of rational numbers, or even R, the real numbers, and by having more playthings we find more multiplicative inverses. But there will always be one element that refuses to play ball - you can't find a real number which multiplies by 0 to give 1. Here's why.
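
Python's fractions module gives a convenient way to play with this enlargement from Z to Q (my illustration, not anything essential to the argument): in Q, every non-zero element has a multiplicative inverse, and the library flatly refuses to build anything resembling 1/0.

  from fractions import Fraction

  print(Fraction(2) * Fraction(1, 2))    # 1, so 1/2 is the multiplicative inverse of 2 in Q
  print(Fraction(3) / Fraction(7))       # 3/7 - division really is 'multiply by the inverse'

  try:
      Fraction(1, 0)                     # no inverse for 0, so this is refused outright
  except ZeroDivisionError as err:
      print('no such thing:', err)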

Arithmetic axioms of the Reals

In addition to the nine rules listed below, there are three order axioms and a completeness axiom which are necessary to precisely determine the reals. The following list is also satisfied by, for instance, the rationals, but I don't want anyone to feel I'm pulling a fast one by working with a restricted set.

  1. a+(b+c)=(a+b)+c for all a,b,c in R (+ is associative)
  2. a+b=b+a for all a,b in R (+ commutes)
  3. 0+a=a+0=a for all a in R (0 is the additive identity for the group consisting of R with +)
  4. Given any a in R, there is (-a) in R such that a + (-a) = 0. (Existence of additive inverses; this really is a group under +)
  5. a(bc)=(ab)c for all a,b,c in R (multiplication is associative; we suppress the * or . for ease of notation, so ab means a times b)
  6. ab=ba for all a,b in R (multiplication commutes)
  7. 1a=a1=a for all a in R (1 is the multiplicative identity)
  8. Given any a in R other than zero, there is a^-1 in R such that aa^-1=1 (existence of multiplicative inverses for everything except 0)
  9. a(b+c)=ab+ac for all a,b,c in R (multiplication distributes over addition)
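
If you'd like to see these rules in action rather than take them on trust, here's a Python spot-check (my own, using exact rational arithmetic via the fractions module so that no rounding interferes) of axioms 1-9 on a handful of rationals - again, a sanity check rather than a proof:

  from fractions import Fraction as F

  sample = [F(-3), F(-1, 2), F(0), F(1), F(2, 7), F(5)]

  for a in sample:
      assert F(0) + a == a + F(0) == a              # axiom 3
      assert a + (-a) == F(0)                       # axiom 4
      assert F(1) * a == a * F(1) == a              # axiom 7
      if a != 0:
          assert a * (1 / a) == F(1)                # axiom 8 - non-zero elements only!
      for b in sample:
          assert a + b == b + a                     # axiom 2
          assert a * b == b * a                     # axiom 6
          for c in sample:
              assert a + (b + c) == (a + b) + c     # axiom 1
              assert a * (b * c) == (a * b) * c     # axiom 5
              assert a * (b + c) == a * b + a * c   # axiom 9

  print('axioms 1-9 hold on the sample')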

Note that I'm not establishing a self-fulfilling prophecy by asserting axiom 8 above. By saying that every non-zero real number has an inverse, I'm not ruling out the possibility of a 0^-1, just the necessity of one - that is, wonderful if you can find one, but things won't break without it. Sadly, it turns out that the combined weight of these axioms removes the possibility of an inverse for 0, as I claimed:

Theorem: a0 = 0 for all a in R
0+0 = 0 by axiom 3.
So a(0+0) = a0, multiplying by a.
Hence a0 + a0 = a0 by axiom 9.
If you believe in subtraction, you can now conclude a0 = 0. But we never asserted anything about subtraction, so for the sake of rigour: (a0+a0) + (-a0) = (a0) + (-a0), adding (-a0) to each side.
Then (a0+a0) + (-a0) = 0 by axiom 4.
So (a0) + (a0+(-a0)) = 0 by axiom 1.
So a0 + 0 = 0 by axiom 4.
Hence, after copious axiom chasing, a0 = 0 by axiom 3.
Corollary: 0 has no multiplicative inverse.
Suppose there were some a in R with a0 = 1 (that is, an inverse for 0).
By Theorem, a0 = 0.
So, 0=1.
But 0 is not 1 (this is axiomatic for a field (which the reals turn out to be), or just bleedingly obvious if you're not a mathematician), so no such a can exist.
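
For anyone who likes their axiom chasing condensed, the whole argument for the theorem can be run as a single chain of equalities (the same steps as above, just strung together and typeset in LaTeX):

  \begin{align*}
  a \cdot 0 &= a \cdot 0 + 0                                       && \text{axiom 3}\\
            &= a \cdot 0 + \bigl(a \cdot 0 + (-(a \cdot 0))\bigr)  && \text{axiom 4}\\
            &= (a \cdot 0 + a \cdot 0) + (-(a \cdot 0))            && \text{axiom 1}\\
            &= a \cdot (0 + 0) + (-(a \cdot 0))                    && \text{axiom 9}\\
            &= a \cdot 0 + (-(a \cdot 0))                          && \text{axiom 3}\\
            &= 0                                                   && \text{axiom 4}
  \end{align*}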

So we find ourselves banned from writing 1/0 at all. Nor can we write 1/∞, because we don't even have an ∞ in our playground in the first place, let alone any business contemplating an inverse for it. So that's why these notions are meaningless in everyday 'real' mathematics.

Building a better playground?

But still the question arises - why can't we just define 1/0 as ∞ (or, equivalently, 1/∞ as 0)? Well, if you really wanted to, you could - posit a set with a nifty name (such as the extended reals) that consists of the real numbers and a new symbol, ∞, and give yourself an inverse for 0. Great stuff. But if you've got a bigger playground, it better be at least as fun as the old one, right?

Theorem: if ∞ is a real number, all real numbers are the same
Let a and b be real numbers. Then a0=0=b0 by the previous theorem.
So a0∞ = 0∞ = b0∞, multiplying through by ∞.
But 0∞ = 1 (that was the whole point of ∞), and by associativity a0∞ = a(0∞) = a1, and likewise for b. So a1 = 1 = b1.
But a1 = a and b1 = b. So a = b = 1.
Corollary: this new creation is a rubbish playground.
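
In compact notation, and assuming the new symbol ∞ is expected to obey axioms 5 and 7 just like everything else (which is precisely the assumption being put to the test), the damage looks like this:

  \[
  a \;=\; a \cdot 1 \;=\; a \cdot (0 \cdot \infty) \;=\; (a \cdot 0) \cdot \infty \;=\; 0 \cdot \infty \;=\; 1,
  \]

and the same chain with b in place of a gives b = 1, so a = b.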

This brings us to the central objection. You can try anything you like with mathematics. Sometimes it will be useful and sometimes it won't be. The axioms of the reals are useful, but so is the ability to tell real numbers apart. By tacking on ∞ as just another real number, one either has to abandon the axioms (lest the above result hold) or lose useful properties (since the above will then hold); whereas adding it as something other than a number obviously complicates things (working out how to weave it into the existing rules for calculating things), and still feels like dodging the question. Ultimately, it's rarely helpful to adopt either approach.

Some useful uses

Most mathematical uses of infinity don't involve treating it as just another number (be it one 'bigger than all others' or 'the inverse of 0'). Rather, infinity is often encountered as infinite - an adjective for describing mathematical objects, rather than being a mathematical object in its own right. In particular, an entire hierarchy of infinities, measured by a notion known as cardinality, is required to describe the size of sets such as the natural numbers or the real numbers (neither is finite, but the reals are in a sense so much more infinite than the naturals). The following environments, however, roll up their sleeves and make an attempt at using infinity as an object.

The degree of a polynomial

A polynomial in a variable x is a sum of powers of x (with non-negative integers as exponents), such as x^2 - 1 or x^3 + 3x - 7. Broadly speaking, the degree of the polynomial is the highest power of x which appears - in the examples, 2 and 3 respectively. But numbers themselves are polynomials - '5' is the polynomial 5x^0, a degree 0 polynomial. Given two polynomials and their degrees, it'd be nice to know something about the degree of their sum, or product, or difference... and but for a small wrinkle, this is possible.

At first glance, it seems that deg(f+g) and deg(f-g) are at most max{deg(f), deg(g)}, and that deg(fg) = deg(f) + deg(g). But this breaks when you consider the zero polynomial (the constant 0), since 0*f = 0 for any polynomial f: no highest power of x appears in 0 at all, and any finite value you assign to deg(0) falls foul of the product rule. We get around this by defining deg(0) to be -∞, where -∞ is a symbol with the properties that

  • -∞ < z for any integer z
  • -∞ + -∞ = -∞
  • -∞ + x = x + -∞ = -∞ for any integer x.

Using this trick, one can make assertions about the degree of a polynomial with confidence that the zero polynomial, an often useful tool, can't break things.
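
Here's a Python sketch of the convention (my own illustration): polynomials stored as lists of coefficients, deg(0) returned as float('-inf'), and the product rule deg(fg) = deg(f) + deg(g) then holding with no special cases, because Python's -inf already obeys the two bulleted rules above.

  NEG_INF = float('-inf')    # plays the role of the -infinity symbol

  def deg(f):
      """Degree of a polynomial given as a coefficient list [constant, x, x^2, ...]."""
      coeffs = list(f)
      while coeffs and coeffs[-1] == 0:
          coeffs.pop()                  # ignore trailing zero coefficients
      return len(coeffs) - 1 if coeffs else NEG_INF

  def mul(f, g):
      """Product of two polynomials given as coefficient lists."""
      if not f or not g:
          return []                     # anything times the zero polynomial is zero
      out = [0] * (len(f) + len(g) - 1)
      for i, a in enumerate(f):
          for j, b in enumerate(g):
              out[i + j] += a * b
      return out

  f = [-1, 0, 1]        # x^2 - 1
  g = [-7, 3, 0, 1]     # x^3 + 3x - 7
  zero = []             # the zero polynomial

  for p in (f, g, zero):
      for q in (f, g, zero):
          assert deg(mul(p, q)) == deg(p) + deg(q)    # holds even when 0 is involved

  print(deg(f), deg(g), deg(zero))    # 2 3 -inf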

Tending to Infinity

In analysis (especially calculus), the notion of a limit is vital - sequences of numbers can tend to a limit, or a function may have limiting behaviour as its arguments grow (or shrink). But a sequence needn't get arbitrarily close to a particular value, nor need a function be all that well behaved.

We say that a sequence t_n tends to infinity if, for any real number P, there is an index N such that every term t_n after t_N is greater than P. That is to say, any attempt to put an upper bound on the sequence fails. Note that it is the limit which is described as being ∞ - any given term, no matter how large, is still an actual number.
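
As a concrete illustration (my own choice of sequence, nothing canonical), take t_n = n^2: the little Python sketch below, given any bound P, produces an index N beyond which every term is bigger than P, which is exactly what the definition demands.

  import math

  def witness_index(P):
      """For t_n = n**2, return an N such that every term after t_N exceeds P."""
      if P < 0:
          return 0                      # every later term already beats a negative bound
      return math.isqrt(int(P)) + 1

  for P in (5, 1000, 10**12, -3.7):
      N = witness_index(P)
      assert all(n * n > P for n in range(N + 1, N + 100))   # check the next batch of terms
      print(f'P = {P}: every term after t_{N} exceeds P')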

Projective Geometry

Algebraic geometry is (in my experience) simultaneously fascinating and very difficult, probably for the same reason - you routinely bump into the idea of 'stuff at infinity'. For instance, in describing the straight lines through the origin in 3-dimensional space, you can almost always characterise a line with just two parameters, rather than 3. This is accomplished by fixing a plane (for instance, Z=1) and then noting the x,y coordinates of the point where the line meets that plane. However, lines parallel to the plane will never meet it, so there is no such intercept - but, by knowing this parallel condition, you can describe such a line as a line in the 2-dimensional plane Z=0, again with only a couple of parameters. This illustrates the result that N-dimensional projective space can be thought of as, mostly, N-dimensional affine space, in disjoint union with the 'stuff at infinity' - a projective space of dimension N-1. This region can also be thought of as 'where parallel lines meet', and makes possible the study of, for instance, elliptic curves, in a consistent way.
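
A tiny Python sketch (my own, with the plane Z=1 hard-wired as the choice of 'slice') of this two-parameter description: a line through the origin is specified by any non-zero direction vector, proportional vectors give the same line, and the ones with no Z-component are exactly the 'stuff at infinity'.

  from fractions import Fraction

  def describe_line(x, y, z):
      """Describe the line through the origin with direction vector (x, y, z)."""
      if (x, y, z) == (0, 0, 0):
          raise ValueError('the zero vector picks out no line')
      if z != 0:
          # Generic case: the line meets the plane Z=1 in exactly one point.
          return ('meets Z=1 at', (Fraction(x, z), Fraction(y, z)))
      # Parallel case: the line lies in the plane Z=0 - part of the 'stuff at infinity'.
      return ('at infinity, direction in Z=0', (x, y))

  print(describe_line(2, 6, 4))      # same line as (1, 3, 2): both land on the point (1/2, 3/2)
  print(describe_line(1, 3, 2))
  print(describe_line(5, -1, 0))     # never meets Z=1, so it's recorded as a direction at infinity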


So, I hope that clarifies a few things, and that you feel the gains are worth abandoning the 'obvious but wrong' approach. I've tried to make this as accessible to a non-mathematician as possible (hopefully without being too patronising), but after a few years at university it's hard to gauge just what level that should be. Any feedback is much appreciated.