Big-oh notation describes how an algorithm's running time grows as the number of input values it must process increases. It is independent of how fast the computer can perform calculations, and it is also independent of the programming language used to implement the algorithm. Big-oh notation keeps only the highest-order term of a runtime efficiency equation.

Let n equal the number of input values for a given algorithm.

Here are some basic big-oh notations, listed from the fewest computations required to the most:

  • Constant: O(1) - pronounced "order one" or "constant order"
  • Logarithmic: O(log n) - pronounced "order log n" or "logarithmic order"
  • Linear: O(n) - pronounced "order n" or "linear order"
  • "N log n": O(n log n) - pronounced "order n log n"
  • Quadratic: O(n²) - pronounced "order n squared" or "quadratic order"
  • Cubic: O(n³) - pronounced "order n cubed" or "cubic order"
  • Exponential: O(2ⁿ) - pronounced "order two to the power of n" or "exponential order"

If an algorithm has a constant order, O(1), it takes the same amount of time to process 1 element as it does to process 1000 elements; the number of input elements does not matter. For example, consider an algorithm that returns the first number of a list of any size. The algorithm does not care how big the list is, because all it needs to complete its objective is the first number of the list. Thus, the big-oh notation is order one.
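A minimal C++ sketch of such an algorithm (the function name first_num is purely illustrative, and it assumes the list holds at least one element):

// Assume list[] is an initialized array with at least one element.
// The same single step is performed no matter how large n is, so this is O(1).
// The parameter n is accepted only to mirror the other examples; it is not used.
int first_num(int list[], int n)
{
   return list[0];
}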

An algorithm with a linear order, O(n), has a running time that grows in direct proportion to the number of input values n. For example, consider an algorithm that must search a list (or array) of numbers for the highest value present. In the best case the algorithm compares all n elements but assigns the maximum value to a variable only once, at the first comparison. In the worst case it compares all n elements and reassigns the maximum value (for further comparison) at every step. Either way, it must go through all n elements. Therefore the algorithm is O(n).

C++ example of an O(n) algorithm:
// Assume list[] is an initialized array with n elements (n >= 1).
int largest_num(int list[], int n)
{
   // Start with the first element so the function also works
   // when every value in the list is negative.
   int max = list[0];
   for( int i = 1; i < n; i++ )
      if( list[i] > max )
         max = list[i];
   return max;
}

Sample cases (each with 5 elements in the array):
Best case - 5, 4, 3, 2, 1. Only one assignment to max is made (the initial one).
Worst case - 1, 2, 3, 4, 5. Five assignments to max are made (the initial one plus one for each larger element).
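
A quick usage sketch of largest_num with these two sample cases (assuming the function above has already been defined in the same file):

#include <iostream>

int main()
{
   int best[]  = { 5, 4, 3, 2, 1 };   // one assignment to max
   int worst[] = { 1, 2, 3, 4, 5 };   // five assignments to max
   std::cout << largest_num(best, 5)  << std::endl;   // prints 5
   std::cout << largest_num(worst, 5) << std::endl;   // prints 5
   return 0;
}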

The same kind of logic applies to all the other notations. For a quadratic order, the runtime grows roughly as n² (in the worst case) when processing n elements, and so on.
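
As a rough C++ sketch of a quadratic algorithm (the function has_duplicate is purely illustrative), the nested loops below perform on the order of n² comparisons in the worst case:

// Assume list[] is an initialized array with n elements.
// Returns true if any value appears more than once.
// The nested loops make on the order of n*n comparisons in the worst case, so this is O(n²).
bool has_duplicate(int list[], int n)
{
   for( int i = 0; i < n; i++ )
      for( int j = i + 1; j < n; j++ )
         if( list[i] == list[j] )
            return true;
   return false;
}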

Big-oh notation is useful when comparing the efficiency of various sorting and searching algorithms to one another. Listed below are some common algorithms and their corresponding big-oh notations:

  • Sequential (linear) search: O(n)
  • Binary search (on a sorted array): O(log n)
  • Selection sort, insertion sort, bubble sort: O(n²)
  • Merge sort: O(n log n)
  • Quicksort: O(n log n) on average, O(n²) in the worst case
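
To illustrate a logarithmic algorithm, here is a C++ sketch of binary search (assuming the array is sorted in ascending order); each comparison cuts the remaining search range in half, so only about log n comparisons are needed:

// Assume list[] is sorted in ascending order and has n elements.
// Returns the index of target, or -1 if target is not present.
// Each pass halves the remaining search range, so this is O(log n).
int binary_search(int list[], int n, int target)
{
   int low = 0;
   int high = n - 1;
   while( low <= high )
   {
      int mid = low + (high - low) / 2;
      if( list[mid] == target )
         return mid;
      else if( list[mid] < target )
         low = mid + 1;
      else
         high = mid - 1;
   }
   return -1;
}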

More information on algorithms and their runtimes can be found at NIST's Dictionary of Algorithms and Data Structures web page - http://www.nist.gov/dads/

Source for this writeup: Barron's How to Prepare for the AP Computer Science Advanced Placement Examination.