This proof is about as simple as they go. Apparently Gregory Chaitin counts bits to get an information-theoretic proof, but this just complicates matters by taking logarithms.

So here goes. Suppose there are just k prime numbers 2 = p_{1} < ... < p_{k}. Fix some N (we'll see an exact value later). Every number n ≤ N has a representation

n = p_{1}^{i_{1}} ⋅ ... ⋅ p_{k}^{i_{k}}

(this is the easy part of the fundamental theorem of arithmetic). And by considering sizes, obviously

i_{j} ≤ log_{p_{j}}n ≤ log_{2}n ≤ log_{2}N.
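As a sanity check, here is a small Python sketch of this step; the prime list and the number 360 are purely illustrative assumptions, not part of the argument:

```python
import math

def factor_exponents(n, primes):
    """Exponent vector (i_1, ..., i_k) of n over the given prime list."""
    exps = []
    for p in primes:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exps.append(e)
    assert n == 1, "the prime list did not cover all factors of n"
    return exps

primes = [2, 3, 5, 7]   # pretend, for illustration, these are all the primes
n = 360                 # 360 = 2^3 * 3^2 * 5
exps = factor_exponents(n, primes)
assert exps == [3, 2, 1, 0]
assert all(e <= math.log2(n) for e in exps)   # each i_j <= log_2 n
```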

So there are no more than log_{2}N possible choices for each i_{j}, and no more than (log_{2}N)^{k} possible choices for *all* combinations of indices {i_{j}}_{j} -- and each number 1,...,N must be represented by one of these choices.
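A quick way to see how few numbers these choices can cover: enumerate every product with exponents below log_{2}N for a hypothetical complete prime list (the choice of k = 3 primes and N = 2^20 is just for illustration):

```python
from itertools import product

primes = [2, 3, 5]   # pretend these are all the primes, so k = 3
m = 20
N = 2 ** m           # N a power of two: each i_j ranges over {0, ..., m-1}, m = log_2 N choices

values = set()
for exps in product(range(m), repeat=len(primes)):
    v = 1
    for p, e in zip(primes, exps):
        v *= p ** e
    values.add(v)

# unique factorization makes all m^k products distinct
assert len(values) == m ** len(primes)   # 20^3 = 8000 combinations in total
assert len(values) < N                   # 8000 is far short of N = 1048576
```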

But for sufficiently large N,

N > (log_{2}N)^{k}

(note that by our assumptions, k here is some constant!). So obviously for sufficiently large N we simply don't have enough combinations of powers of k primes to generate N different numbers ≤ N... This contradiction shows we must have infinitely many prime numbers.
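The inequality N > (log_{2}N)^{k} is easy to check numerically; the particular values of k and N below are just examples:

```python
import math

# even granting k = 100 primes, N = 2^1000 already exceeds (log_2 N)^k:
# (log_2 N)^k = 1000^100 = 10^300, while N = 2^1000 is a bit over 10^301
k, N = 100, 2 ** 1000
assert N > math.log2(N) ** k

# for k = 3 the crossover already happens by N = 2^10: 1024 > 10^3
assert 2 ** 10 > math.log2(2 ** 10) ** 3
```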

In fact, we can use the above argument to get a rough lower bound on the number of primes < N -- a first step toward the prime number theorem.
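To make that concrete: if there are k primes ≤ N, the counting above forces N ≤ (log_{2}N)^{k} (roughly), i.e. k ≥ log_{2}N / log_{2}log_{2}N. Here is a sketch comparing this (very weak) lower bound against the true prime count via a simple sieve; the function name and sample values of N are my own choices:

```python
import math

def prime_count(N):
    """pi(N): count the primes <= N with a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (N + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, math.isqrt(N) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, N + 1, p)))
    return sum(sieve)

for N in (10 ** 3, 10 ** 6):
    lower = math.log2(N) / math.log2(math.log2(N))
    print(N, prime_count(N), lower)   # the bound sits far below the truth, but it grows with N
```

As expected the bound is nowhere near the prime number theorem's N/ln N, but it does go to infinity, which is all the argument above promises.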