Dealing with large numbers can be confusing, once the numbers get so big you need to use exponents to express their exponents. I recently dealt with a number about the size of 10^(2.1 x 10^343), call it N, a number whose exponent is itself a 344-digit number. I wanted to see what it was when raised to its own power.

Firing up hypercalc ("just try and MAKE me overflow!") I was surprised to see that my number (N), when raised to its own power, gave the result 10^N.

(That's 10^(10^(2.1 x 10^343)).)

But in fact hypercalc is correct, and, to normal levels of accuracy, N^N = 10^N when N is very large, as in the present case.

To raise a number to the power of N, we multiply it by itself N times. If the number is a power of 10, with exponent exp, then each such multiplication adds exp digits. So 10^N will have about N digits, since 10 = 10^1 and 1 times N is N.
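To see the digit-counting rule in action, here's a quick sanity check in Python (the particular exponents are just my choices for illustration), using integers small enough to write out in full:

    # 10^n written out is a 1 followed by n zeros: n + 1 digits,
    # which is "about n" digits once n is at all large.
    for n in (5, 50, 500):
        assert len(str(10 ** n)) == n + 1
        print(n, len(str(10 ** n)))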

N itself has an exponent of log10(N), and so N^N has about log10(N) * N digits.

So N^N has log10(N) times as many digits as 10^N.
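Here's a check of that ratio, with a modest N = 10^4 (my choice, so log10(N) = 4) where both numbers can still be built exactly as integers:

    N = 10 ** 4                        # so log10(N) = 4
    digits_NN = len(str(N ** N))       # N^N = 10^40000 here: 40001 digits
    digits_10N = len(str(10 ** N))     # 10^N = 10^10000: 10001 digits
    print(digits_NN / digits_10N)      # ~4.0, i.e. log10(N)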

So the exponent of N^N (putting N^N into the form 10^exp) has about log10(log10(N)) more digits than N (the exponent of 10^N) has.

Which means it has around 343 more digits than N (because N = 10^(2.1 x 10^343), making log10(log10(N)) roughly 343.)
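Checking that number is a one-liner, though it has to be done one level down in logarithms, since log10(N) = 2.1 x 10^343 is already far beyond what a machine float can hold:

    import math

    # log10(log10(N)) for N = 10^(2.1 x 10^343)
    print(343 + math.log10(2.1))       # ~343.32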

So the second exponent of N^N exceeds the second exponent of 10^N by only around 343, a 3-digit number (because log10(343) < 3) added onto a 344-digit one, and is certainly less than 3 times as large.

So the third exponent is less than log10(3) out.

Which means that increasing the third exponent of 10^N by one would give a value significantly larger than N^N, since log10(3) < 1.
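To put a number on just how far inside the rounding this lands, here's a sketch (standard library only; the level bookkeeping is mine) that computes the bump in the third exponent directly. The relative perturbation is around 10^-341, which underflows even a double's subnormal range:

    import math

    t = 343 + math.log10(2.1)              # third exponent of 10^N, ~343.32
    # Second exponent of N^N = 10^t + t; of 10^N = 10^t.  The third
    # exponent therefore grows by log10(1 + t/10^t):
    bump = math.log1p(t * 10 ** -t) / math.log(10)
    print(t, bump)                         # bump prints as 0.0: below float resolution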

So, given N large enough to force us to use these 'third exponents', 10^N is pretty much "equal" to N^N, which seems surprising, shocking even, especially considering how large N is in the first place.

Now I'm not saying that N^N and 10^N are converging - the difference is plainly growing. What's happening is that the difference is growing more slowly than the numbers themselves are, with the result that at some point, our notation gives out, and to record the difference, we'd have to include more digits than it will allow.
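One way to watch that happen, staying on log scales so nothing overflows (the levels here are my own bookkeeping, matching the 'exponents of exponents' above): write L for log10(N). Then log10(N^N) = N*L against log10(10^N) = N, a gap of N*(L-1) that explodes, while one level further down the ratio (L + log10(L))/L creeps back toward 1:

    import math

    for L in (3.0, 43.0, 343.0, 3430.0):   # L = log10(N)
        print(L, (L + math.log10(L)) / L)  # -> 1.159, 1.038, 1.0074, 1.001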

If you're having trouble believing this (as I did myself), here's a different proof, showing the same thing in more detail, and more understandably, from hypercalc's author mrob27 (whose large number pages at http://home.earthlink.net/~mrob/pub/math/largenum.htm will be of undoubted interest to anyone still reading. Go there.)


  Your explanation of how the exponents work is fine (based on an intuitive
  notion of "number of digits") but I would prefer something more like this:

  Given R2 = 10 ^ (2 x 10^343), compare 10^R2 to R2^R2.

  We need to use the fact that (a^b)^c = a^(b*c), and also (a^b)*(a^c) =
  a^(b+c)

  10^R2 = 10 ^ [10 ^ (2 x 10^343)]

  R2^R2 = [10 ^ (2 x 10^343)] ^ [10 ^ (2 x 10^343)]
        =  10 ^ [(2 x 10^343) x 10 ^ (2 x 10^343)]
        =  10 ^ [2 x 10^(343 + 2 x 10^343)]
        =  10 ^ [10 ^ (log(2) + 343 + 2 x 10^343)]

  Now notice that log(2) + 343 + 2*10^343 is only a little bit more than
  2*10^343. In fact, when the addition is performed, the log(2) and the
  343 will get completely lost in the roundoff error unless at least 342
  digits of precision are used in the calculation. So the most accurate
  value we can calculate for R2^R2 is 10^[10^(2x10^343)], which is
  10^R2. (q.e.d.)
(end quote)
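mrob27's roundoff remark is easy to reproduce with Python's decimal module (the precision settings below are mine, chosen for illustration): at 50 digits the log(2) + 343 vanishes into the rounding, and only with hundreds of digits does it register at all:

    from decimal import Decimal, getcontext

    a = Decimal(2).scaleb(343)             # 2 x 10^343, exactly
    for prec in (50, 400):
        getcontext().prec = prec
        b = a + Decimal("343.30103")       # add log(2) + 343
        print(prec, b == a)                # True at 50 digits, False at 400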

This is a particular instance of the general rule that as the exponent gets larger and larger, the value of the mantissa becomes increasingly irrelevant for the size of the result.
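A tiny illustration of that rule (the numbers are mine): pushing the mantissa from 1 to 9.9 moves log10 of the value by less than 1, and moves the log of the log by only a few thousandths:

    import math

    for mantissa in (1.0, 9.9):
        lg = math.log10(mantissa) + 100       # log10(mantissa x 10^100)
        print(mantissa, lg, math.log10(lg))   # log-log: 2.0 vs ~2.0043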

It all depends on how you measure size. After all, the ratio of N^N to 10^N only gets larger as you make N larger. In particular, N^N is off from 10^N by a factor of (N/10)^N, which even for modest N is much closer to N^N than 10^N is!
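That factor is easy to confirm exactly for a modest N (N = 100 below, my choice):

    N = 100
    # N^N / 10^N == (N/10)^N: here a ratio of 10^100, vast by any
    # ordinary standard, yet invisible two logarithms down for huge N.
    assert (N ** N) // (10 ** N) == (N // 10) ** N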

However, difference and ratio are not the only measures for comparing two numbers. In this case, it appears that JerboaKalinowski and mrob27 take the relevant measure to be the ratio of the logarithms of the logarithms. For JerboaKalinowski's case, a particular calculation, this may well be right. One might object to mrob27 assuming, for hypercalc, that these colossal errors in the ratio, and even in the ratio of the logarithms, are irrelevant; but with numbers as large as these it seems unlikely the ratio is really the quantity one cares about.

One real-world example I can think of that comes close to this is in thermodynamics, with the partition function. This is hard to calculate precisely, but it doesn't matter much, as you generally take its logarithm to get the entropy. That converts the very large fractional errors into large additive errors. And since entropy is used mainly through differences, and in particular through its derivatives, having all the entropy figures off by the same additive amount is no problem at all.
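A toy version of that point (the partition-function values and the error factor are invented for illustration): a huge common multiplicative error becomes a common additive shift in the log, and cancels from any entropy difference:

    import math

    Z1, Z2 = 1.0e120, 3.0e121          # hypothetical partition-function values
    err = 1.0e30                       # the same huge multiplicative error on both
    exact = math.log(Z2) - math.log(Z1)
    noisy = math.log(Z2 * err) - math.log(Z1 * err)
    print(exact, noisy)                # equal (up to float rounding): the shift cancels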
