Big-O notation describes how an algorithm's running time (or other resource requirement) grows with the size of its input. Only the highest-order (fastest-growing) term is kept,
so for very small inputs it can be misleading. For example, if an algorithm takes x^3 + 10000x seconds to perform a task, its order is O(x^3), even though for small x the 10000x term dominates. As x tends toward infinity, however, the highest-order term is all that matters.

Note: in the following, we assume that all functions map from a totally ordered set to the reals. Usually when discussing big-oh notation, functions map from nonnegative integers to nonnegative integers or nonnegative reals, but the following definitions are more general.

A function f is O(g) iff there exist a positive constant M and a constant N such that, for every n > N, |f(n)| <= M|g(n)|. Exercise: let P be a totally ordered set with a greatest element m, and let f, g be functions from P to R such that g(m) != 0. Prove that f is O(g). (Hint: what happens if you take N = m?)

Notice that this is a weak bound. For example, f(x) = x^2 + 7 may be said to be O(x^2), but it may also be said to be O(x^3) or O(e^x).

Varying the quantifiers and the inequality gives the related notations. For little-oh, strengthen the condition to hold for every positive constant M: f is o(g) iff for every M > 0 there is an N such that, for every n > N, |f(n)| < M|g(n)|. Reverse the inequality in the big-oh definition (|f(n)| >= M|g(n)| for some M) and you have big-omega notation; reverse it in the little-oh definition and you have little-omega notation.

There is also big-theta notation. f is Θ(g) iff there exist positive constants M1 and M2 and a constant N such that, for every n > N, M1|g(n)| <= |f(n)| <= M2|g(n)|. This is a stricter condition than big-oh or big-omega alone: f is Θ(g) iff f is both O(g) and Ω(g).

One may use the above to define a relation: f ~ g iff f is Θ(g). It is easy to show that ~ is reflexive, symmetric, and transitive. Thus it is an equivalence relation, and partitions the set of functions into a number of equivalence classes. We call these complexity classes; the complexity class containing f is called Θ(f). Then we can say things like f ∈ Θ(g), which can make definitions and proofs simpler to write in some cases.

Important classes for real-valued functions of one integer variable include Θ(1), Θ(log n), Θ(n), Θ(n log n), Θ(n^2), Θ(n^3), and Θ(2^n).

This notation is used as a measure of complexity.

Personally, I find the standard definition ("if there exists a constant M such that for all n > N ...") to be annoyingly obfuscated. A much simpler way (IMO) to state it:

• f(x) is in big-oh(g(x)) if lim (n→∞) of f(n)/g(n) is less than infinity.
• f(x) is in small-oh(g(x)) if lim (n→∞) of f(n)/g(n) is zero.
• f(x) is in theta(g(x)) if lim (n→∞) of f(n)/g(n) is finite but not zero.
(Assuming that f(x) and g(x) take only positive values; otherwise you have to take absolute values. Strictly speaking, the limit need not exist, in which case lim sup makes these conditions precise.)
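Written out formally (using lim sup and lim inf, since as noted the ratio need not converge), the conditions above read:

```latex
f \in O(g)      \iff \limsup_{n \to \infty} \left|\frac{f(n)}{g(n)}\right| < \infty
\qquad
f \in o(g)      \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0
\qquad
f \in \Theta(g) \iff 0 < \liminf_{n \to \infty} \left|\frac{f(n)}{g(n)}\right|
                     \le \limsup_{n \to \infty} \left|\frac{f(n)}{g(n)}\right| < \infty
```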

Big-oh notation is a way of describing how long an algorithm will take to run based on the number of input values it must process. It is independent of how fast the computer can make calculations. It is also independent of the programming language used to implement the algorithm. Big-oh notation is the highest-order term of a runtime efficiency equation.

Let n equal the number of input values for a given algorithm.

Here are some basic big-oh orders, listed from fewest computations to most:

• Constant: O(1) - pronounced "order one" or "constant order"
• Logarithmic: O(log n) - pronounced "order log n" or "logarithmic order"
• Linear: O(n) - pronounced "order n" or "linear order"
• "N log n": O(n log n) - pronounced "order n log n"
• Quadratic: O(n^2) - pronounced "order n squared" or "quadratic order"
• Cubic: O(n^3) - pronounced "order n cubed" or "cubic order"
• Exponential: O(2^n) - pronounced "order two to the n" or "exponential order"

If an algorithm has a constant order, O(1), it takes the same amount of time to process 1 element as it does to process 1000 elements. It does not matter how many input elements there are; the run time will always be the same for an order-one algorithm. For example, consider an algorithm that returns the first number of a list of any size. The algorithm does not care how big the list is. All it needs to complete its objective is the first number of the list. Thus, the big-oh notation is order one.
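To illustrate, here is a minimal Java sketch of an order-one operation (the class and method names are my own, not from the writeup above):

```java
// O(1): fetching the first element takes the same time
// whether the array holds 1 element or 1,000,000 elements.
public class FirstElement {
    public static int firstElement(int[] list) {
        return list[0]; // a single array access -- no loop over the input
    }
}
```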

An algorithm with a linear order, O(n), takes time proportional to the number of input values n. For example, consider an algorithm that must search a list (or array) of numbers for the highest value present. In the best case the algorithm compares all n elements but assigns the maximum value to a variable only once, at the first comparison. In the worst case it compares all n elements and reassigns the maximum value (for further comparison) at every step. Either way, it still has to go through all n elements. Therefore the algorithm is O(n).

C++ example of O(n) algorithm:
// Assume `list[]` is an initialized array with `n` elements.
```
int largest_num(int list[], int n) {
    int max = list[0];           // start from the first element so negative values work
    for (int i = 1; i < n; i++)  // note: i <= n would read past the end of the array
        if (list[i] > max)
            max = list[i];
    return max;
}
```

Sample cases (each with 5 elements in the array):
Best case - 5, 4, 3, 2, 1. One assignment to max is made.
Worst case - 1, 2, 3, 4, 5. Five assignments to max are made.

The same kind of logic applies to the other orders. An algorithm with a quadratic order takes on the order of n^2 steps (in the worst case) to process n elements, and so on.
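As a sketch of what a quadratic algorithm looks like in practice (a hypothetical Java example of my own, not from the writeup): two nested loops over the same input, so doubling n roughly quadruples the work.

```java
// O(n^2): counts pairs (i, j) with i < j whose elements are equal.
// For each of the n outer iterations, the inner loop runs up to n - 1
// times, so the total number of comparisons grows as n * n.
public class PairCounter {
    public static int countDuplicatePairs(int[] list) {
        int count = 0;
        for (int i = 0; i < list.length; i++)
            for (int j = i + 1; j < list.length; j++)
                if (list[i] == list[j])
                    count++;
        return count;
    }
}
```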

Big-oh notation is useful when comparing the efficiency of various sorting and searching algorithms to one another. For example:

• Linear (sequential) search: O(n)
• Binary search: O(log n)
• Bubble, selection, and insertion sort: O(n^2)
• Merge sort and heapsort: O(n log n)
• Quicksort: O(n log n) on average, O(n^2) in the worst case
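For instance, binary search reaches O(log n) by halving the remaining range at every step. A standard Java sketch, assuming the input array is sorted:

```java
// O(log n): each iteration halves the remaining range, so a sorted
// array of n elements needs at most about log2(n) + 1 iterations.
public class BinarySearch {
    public static int indexOf(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1; // not found
    }
}
```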

More information on algorithms and their runtimes can be found at the NIST's Dictionary of Algorithms and Data Structures web page - http://www.nist.gov/dads/

Source for this writeup: Barron's How to Prepare for the AP Computer Science Advanced Placement Examination.

It is often very difficult for a person who is just starting to study computer science to understand just how inefficient algorithms get, and how quickly they get inefficient. Today, I did a quick study for myself. Not only did it provide insight into just how inefficient some algorithms get, it also showed me how well my computer (a MacBook) runs, and just how big powers of ten are...

For posterity, here is the experiment. I deliberately set out to run a simple, cheap program that is horribly inefficient. Here it is, in all its Java glory:

```
//File name: Complexity.java
public class Complexity
{
    public static void main(String[] args)
    {
        for (int a = 1; a <= 10000; a++)
        {
            for (int b = 1; b <= 10000; b++)
            {
                for (int c = 1; c <= 10000; c++)
                {
                    for (int d = 1; d <= 10000; d++)
                    {
                        for (int e = 1; e <= 10000; e++)
                        {
                            System.out.printf("%6d, %6d, %6d, %6d, %6d\n", a, b, c, d, e);
                        }
                    }
                }
            }
        }
    }
}
```

20 lines. 20 lines was all it took to expand my mind just that little bit further.

As people who do computer science will immediately recognise, this is in O(n^5). Horribly inefficient.
"oh but it only has to perform one step right? just print a line out to screen?" --Well, yes, but it has to print the line 10,000^5 times. This equals 100,000,000,000,000,000,000 iterations (one hundred quintillion in the short scale, or one hundred trillion in the long scale). How do I put that in perspective? Well, put it this way: at the end of an hour, the program was roughly up to the line

`     1,      1,      1,   4000,  10000`

which means it had iterated roughly 40,000,000 (forty million, regardless of where you are) times. This is about 0.00000000004% of the total iterations it would make if left untouched until the finish. And that was a whole hour's work. Simple mathematics will show that you need roughly 285 million years to finish the program, assuming that the program runs perfectly. (As it is, it stalled a few times when I disabled the screensaver.)
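The arithmetic can be double-checked with a short Java sketch (my own back-of-the-envelope estimate, using the observed rate of about forty million iterations per hour):

```java
// Back-of-the-envelope check of the running time estimated above.
public class RuntimeEstimate {
    public static double yearsToFinish(double totalIterations, double iterationsPerHour) {
        double hours = totalIterations / iterationsPerHour;
        return hours / 24.0 / 365.25; // hours -> days -> years
    }

    public static void main(String[] args) {
        double total = Math.pow(10000, 5); // 10,000^5 = 1e20 iterations
        double perHour = 40_000_000;       // observed: ~40 million iterations per hour
        System.out.printf("%.0f years%n", yearsToFinish(total, perHour));
        // on the order of 2.85e8, i.e. roughly 285 million years
    }
}
```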

Oldminer says: This is probably much slower because, by default, standard output isn't buffered. If you changed this to buffered output, I'd expect a 10-100x increase in speed. I'd expect a similar increase in speed if you just increment an int instead of output -- or move the output to the second-to-outermost loop. Indeed... though it'd still take millions of years.

So, what are the conclusions?

1. My computer is well and truly capable of handling algorithms that are horribly inefficient. To my Macbook: I'm sorry I put you through that abuse, and for making your processor cry. You deserve to take a bow.
2. More importantly, O(n^5) is not to be taken lightly. 10,000 iterations do not take very long (as a matter of fact, they took mere seconds) but raise that to the power of 5 and you're looking at hundreds of millions of years.
3. Powers of ten are not to be taken lightly. 10^20 steps in a program can take hundreds of millions of years to run. Likewise, it would take 210 days (roughly seven months) for the entire population of Australia to fill the MCG (capacity 100,000) for one day each; however, it takes roughly 3,500 days (nearly 10 years) for the population of the USA to do the same, and 38 years for the population of China.

So. When designing programs and algorithms, it is vital to keep track of the complexity of these algorithms and equally important to take the end users' computers into account. Otherwise, you will break the computers of every human on Earth. That's just not gonna fly. No way.
