Accurate to >99.4%, this approximation appears in Knuth's Fundamental Algorithms (volume 1 of The Art of Computer Programming): to get base-2 logarithms, just add the natural logarithm and the common (base 10) logarithm. Knuth proposed using it to get a speedy approximation of binary logarithms, when "all" one has on one's desk is a book with values for these two logarithms.

Another interesting use, better suited to modern times, is approximating natural logarithms in one's head: the natural logarithm is (roughly) the difference between the base-2 and common logarithms.
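A quick numerical check of both directions (a sketch of my own, not from Knuth; the two constants 1/ln 2 ≈ 1.4427 and 1 + 1/ln 10 ≈ 1.4343 are what make it work):

```python
import math

# Knuth's trick: log2(x) ~ ln(x) + log10(x).
# It works because 1/ln(2) ~ 1.4427 while 1 + 1/ln(10) ~ 1.4343,
# so the two sides differ by a constant factor of about 0.6%.
for x in (2.0, 1000.0, 1e18):
    approx_lg = math.log(x) + math.log10(x)
    print(f"lg({x}) ~ {approx_lg:.4f}  (true: {math.log2(x):.4f})")

# Run backwards for mental natural logs: ln(x) ~ log2(x) - log10(x).
for x in (2.0, 1000.0, 1e18):
    approx_ln = math.log2(x) - math.log10(x)
    print(f"ln({x}) ~ {approx_ln:.4f}  (true: {math.log(x):.4f})")
```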

#### Example.

What is the approximate density of prime numbers with 600 digits? That is, what proportion of numbers around 10^600 is prime? DO NOT USE A CALCULATOR!

Very approximately, the density of primes around n is 1/ln(n) (see the prime number theorem for the real deal). What's ln(10^600)?

If we knew what ln(10) is, we'd know the answer -- 600⋅ln(10). Don't laugh! People used to know things like that! We don't, but we can approximate ln(1000)...

log10(1000)=3, no problems there. What's log2(1000)? I don't know, at least not by heart. But I do know that log2(1024)=10. So a reasonable estimate for ln(1000) is 10-3=7.

So ln(10^600) = 200⋅ln(1000) ≈ 200⋅7 = 1400.

Approximately one in 1400 numbers around 10^600 is prime.
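The whole mental computation, transcribed (nothing here beyond the steps above, with `math.log` as the referee):

```python
import math

# The mental estimate: ln(1000) ~ log2(1024) - log10(1000) = 10 - 3 = 7,
# and 10**600 = 1000**200, so ln(10**600) ~ 200 * 7.
mental_ln_1000 = 10 - 3
estimate = 200 * mental_ln_1000
exact = 600 * math.log(10)        # the honest value of ln(10**600)
print(estimate, round(exact, 4))  # 1400 vs 1381.5511
```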

The "real" number is ~1381.5511; we didn't do too badly! This is of course ln(10^600), the value appearing in the prime number theorem. That itself is an approximation, but the error term for it is reasonably small.
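The 1/ln(n) heuristic itself can be sanity-checked at a humbler scale. This sketch counts primes in a window around 10^6 (the window size and the trial-division test are my choices, not anything from the text):

```python
import math

def is_prime(k):
    # Plain trial division -- perfectly adequate at this scale.
    if k < 2:
        return False
    if k % 2 == 0:
        return k == 2
    f = 3
    while f * f <= k:
        if k % f == 0:
            return False
        f += 2
    return True

n = 10**6
window = range(n - 10000, n + 10000)
density = sum(is_prime(k) for k in window) / len(window)
print(density, 1 / math.log(n))  # both come out near 0.07
```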

The proof of this approximation is easy, and I shan't spoil it for anyone. It also shows what might be obvious: the error in this approximation is a constant factor. Estimating base-2 logarithms this way always comes out ~0.6% low; run backwards to estimate ln, as above, it always comes out ~0.8% high. In the example, that built-in ~0.8%, plus ~0.5% from rounding log2(1000) up to 10, accounts for the full overshoot from ~1381.55 to 1400 -- so the ridiculous formula that is this node's title actually contributed slightly more of the error than the rounding did.
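The constant factors, made explicit (just arithmetic on the two constants, without spoiling the proof):

```python
import math

inv_ln2 = 1 / math.log(2)       # 1.4427...: the true lg(x)/ln(x) ratio
approx = 1 + 1 / math.log(10)   # 1.4343...: what ln(x) + log10(x) delivers
print(approx / inv_ln2)         # ~0.9942: lg estimates run ~0.6% low

# Backwards, ln(x) ~ lg(x) - log10(x) overshoots instead:
print(inv_ln2 - 1 / math.log(10))  # ~1.0084: ln estimates run ~0.8% high
```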