It all depends on how you measure size. After all, the ratio of N^N to 10^N only grows as you make N larger: N^N / 10^N = (N/10)^N, which even for modest N is much closer to N^N than 10^N is!
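To make this concrete, here is a quick sketch (working with base-10 logarithms, since the numbers themselves overflow floating point long before the comparison gets interesting) of how fast the ratio runs away:

```python
from math import log10

# Compare N^N and 10^N via their base-10 logarithms.
# log10(N^N) = N*log10(N), log10(10^N) = N, and their difference
# is log10 of the ratio (N/10)^N.
for N in (10, 100, 1000):
    log_NN = N * log10(N)     # log10(N^N)
    log_10N = N               # log10(10^N)
    log_ratio = log_NN - log_10N
    print(f"N={N}: log10(N^N)={log_NN:.0f}, log10(10^N)={log_10N}, "
          f"log10(ratio)={log_ratio:.0f}")
```

Already at N = 100 the ratio is a factor of 10^100, and at N = 1000 it is 10^2000.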

However, the difference and the ratio are not the only ways to compare two numbers. In this case, it appears that JerboaKalinowski and mrob27 take the relevant measure to be the ratio of the logarithms of the logarithms. For JerboaKalinowski's case (a particular calculation) this may well be true. One might object to mrob27's assumption, for hypercalc, that these colossal ratio and even ratio-of-logarithm errors are irrelevant; but with numbers as large as these, it seems unlikely that the ratio is really the quantity one cares about.
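By that measure, the two numbers above actually get *closer* as N grows, even while their ratio explodes. A small sketch of the log-of-log comparison:

```python
from math import log10

# Ratio of the logs-of-logs of N^N and 10^N:
#   log10(log10(N^N))  = log10(N*log10(N))
#   log10(log10(10^N)) = log10(N)
# This ratio tends to 1 as N grows, i.e. by this measure
# the two numbers become nearly indistinguishable.
for N in (100, 10**6, 10**12):
    ratio = log10(N * log10(N)) / log10(N)
    print(f"N={N}: log-log ratio = {ratio:.4f}")
```

The ratio of log-logs drifts down toward 1, which is why, on hypercalc's scale of "how big is this number really", the two are considered about the same size.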

One real-world example I can think of which comes close to this is in thermodynamics: the partition function. It is hard to calculate precisely, but that doesn't matter, as you generally take its logarithm to get the entropy. The logarithm converts even very large fractional errors into (possibly large) additive errors. But since the entropy is only ever used through differences, and in particular through its derivatives, having all entropy figures off by the same additive amount is no problem at all.
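This can be illustrated with a toy model. The snippet below uses a hypothetical two-level system (units with k_B = 1; the names `log_Z`, `eps`, and `scale` are my own illustrative choices, not standard API): a huge constant multiplicative error in the partition function Z shifts log Z by a constant, so differences of log Z between temperatures are untouched.

```python
import math

def log_Z(beta, eps=1.0, scale=1.0):
    """log of a toy two-level partition function, with 'scale'
    standing in for a huge constant multiplicative error in Z."""
    return math.log(scale) + math.log(1.0 + math.exp(-beta * eps))

# A factor of 10^100 in Z becomes a constant additive shift in log Z:
shift = log_Z(2.0, scale=1e100) - log_Z(2.0, scale=1.0)

# ...so a difference of log Z between two temperatures is unaffected
# (up to floating-point rounding), error or no error:
d_right = log_Z(2.0) - log_Z(3.0)
d_wrong = log_Z(2.0, scale=1e100) - log_Z(3.0, scale=1e100)
print(shift, d_right, d_wrong)
```

The shift is about 100 ln 10, yet `d_right` and `d_wrong` agree, which is exactly why a colossal ratio error in Z is harmless once you only look at entropy differences.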