It is often very difficult for a person who is just starting to study computer science to understand just how inefficient algorithms get, and how quickly they get inefficient. Today, I did a quick study for myself. Not only did it provide insight into just how inefficient some algorithms get, it also showed me how well (a Macbook) my computer runs, and just how big powers of ten are...

For posterity, here is the experiment. I deliberately set out to run a simple, cheap program that is horribly inefficient. Here it is, in all its Java glory:

//File name: Complexity.java
public class Complexity
{
    public static void main(String[] args)
    {
        for (int a = 1; a <= 10000; a++)
        {
            for (int b = 1; b <= 10000; b++)
            {
                for (int c = 1; c <= 10000; c++)
                {
                    for (int d = 1; d <= 10000; d++)
                    {
                        for (int e = 1; e <= 10000; e++)
                        {
                            System.out.printf("%6d, %6d, %6d, %6d, %6d\n", a, b, c, d, e);
                        }
                    }
                }
            }
        }
    }
}

Twenty-odd lines. Twenty-odd lines was all it took to expand my mind just that little bit further.

As people who do computer science will immediately recognise, this runs in O(n^5). Horribly inefficient.

"Oh, but it only has to perform one step, right? Just print a line to the screen?" -- Well, yes, but it has to print that line 10,000^5 times. This equals 100,000,000,000,000,000,000 iterations (one hundred quintillion or one hundred trillion, depending on your location). How do I put that in perspective? Well, put it this way: at the end of an hour, the program was roughly up to the line

1, 1, 1, 4000, 10000

which means it had iterated 40,000,000 (forty million, regardless of where you are) times. That is 0.00000000004% of the total iterations it would make if left untouched until the finish. And that was a full hour. Simple arithmetic shows that, at that rate, you would need roughly 285 million years to finish the program, assuming that it runs perfectly. (As it is, it stalled a few times when I disabled the screensaver.)
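The back-of-the-envelope arithmetic is easy to check in a few lines of Java. This is just a sketch to verify the figures above (the class name is mine, and the 40-million-per-hour rate is the one observed in the experiment); note that 10^20 overflows a `long`, so `BigInteger` is needed:

```java
import java.math.BigInteger;

public class RuntimeEstimate
{
    public static void main(String[] args)
    {
        // 10,000^5 = 10^20, which overflows a long (max ~9.2 x 10^18).
        BigInteger total = BigInteger.valueOf(10_000).pow(5);
        System.out.println("total iterations: " + total);

        // Observed rate: roughly 40,000,000 iterations in the first hour.
        BigInteger perHour = BigInteger.valueOf(40_000_000);
        BigInteger hours = total.divide(perHour);
        BigInteger years = hours.divide(BigInteger.valueOf(24L * 365));
        System.out.println("hours to finish: " + hours); // 2,500,000,000,000
        System.out.println("years to finish: " + years); // about 285 million
    }
}
```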

Oldminer says: *This is probably much slower because, by default, standard output isn't buffered. If you changed this to buffered output, I'd expect a 10-100x increase in speed. I'd expect a similar increase in speed if you just increment an int instead of outputting -- or move the output to the second-to-outermost loop.* Indeed... though even a 100x speedup would still leave you with millions of years.
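To see Oldminer's point about the output being the bottleneck, here is a sketch of the counting variant (class name mine, and the bound shrunk from 10,000 to 10 so it actually finishes): the five nested loops are unchanged, but a counter increment replaces the print, so there is no I/O cost at all.

```java
public class CountOnly
{
    public static void main(String[] args)
    {
        // Same five nested loops as Complexity.java, but bumping a counter
        // instead of printing on every iteration. n = 10 gives 10^5 = 100,000
        // iterations, which finishes instantly.
        final int n = 10;
        long count = 0;
        for (int a = 1; a <= n; a++)
            for (int b = 1; b <= n; b++)
                for (int c = 1; c <= n; c++)
                    for (int d = 1; d <= n; d++)
                        for (int e = 1; e <= n; e++)
                            count++;
        System.out.println(count); // 100000
    }
}
```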

So, what are the conclusions?

- My computer is well and truly capable of handling algorithms that are horribly inefficient. To my Macbook: I'm sorry I put you through that abuse, and for making your processor cry. You deserve to take a bow.
- More importantly, O(n^5) is not to be taken lightly. 10,000 iterations do not take very long (as a matter of fact, they took mere seconds), but raise that to the power of 5 and you're talking hundreds of millions of years.
- Powers of ten are not to be taken lightly. 10^20 steps in a program can take 285 million years to run. Likewise, it would take 210 days (roughly seven months) for the entire population of Australia to fill the MCG (capacity 100,000) for one day each; however, it takes roughly 3,500 days (nearly 10 years) for the population of the USA to do the same, and 38 years for the population of China.
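The MCG figures are the same kind of arithmetic: divide a population by the ground's capacity to get the number of full days. A quick sketch, using the rough population figures the numbers above imply (about 21 million, 350 million and 1.39 billion -- assumptions for illustration, not census data):

```java
public class StadiumDays
{
    public static void main(String[] args)
    {
        final long capacity = 100_000; // MCG capacity
        // Rough populations implied by the figures in the text.
        long australia = 21_000_000;
        long usa       = 350_000_000;
        long china     = 1_390_000_000;
        System.out.println(australia / capacity);   // 210 days
        System.out.println(usa / capacity);         // 3500 days, nearly 10 years
        System.out.println(china / capacity / 365); // 38 years
    }
}
```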

So. When designing programs and algorithms, it is vital to keep track of their complexity, and equally important to take the end users' computers into account. Otherwise, you will break the computers of every human on Earth. That's just not gonna fly. No way.