big-O: O(...)
An upper bound on the growth rate of an algorithm's running time (its complexity) as a function of the problem size n.

Quicksort runs in O(n log n) time on average; bubble sort runs in O(n^2). To sort 1,000,000 numbers, quicksort takes roughly 6,000,000 steps (counting log base 10; the base only changes the constant factor), while bubble sort takes about 1,000,000,000,000.
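The arithmetic behind that comparison can be checked directly. A small sketch (constant factors ignored, log base 10 as in the figures above):

```python
import math

n = 1_000_000

# n log n for quicksort, n squared for bubble sort
quicksort_steps = n * math.log10(n)   # 1,000,000 * 6
bubble_sort_steps = n ** 2            # 1,000,000 * 1,000,000

print(f"{quicksort_steps:,.0f}")      # 6,000,000
print(f"{bubble_sort_steps:,}")       # 1,000,000,000,000
```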

The ideal algorithm is O(1), which means it takes constant time to run regardless of the size of the data structure. An example of O(1) is inserting a node at the start of a linked list. Hashtable lookups also run in O(1), on average.
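A minimal sketch of the linked-list example: inserting at the head touches only one pointer, so the cost is the same whether the list holds ten nodes or ten million.

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):
        # O(1): only the head pointer changes; no traversal of the list
        self.head = Node(value, self.head)

lst = LinkedList()
for v in [3, 2, 1]:
    lst.push_front(v)
# the list now reads 1 -> 2 -> 3 from the head
```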

The next best thing is O(log N), which grows slowly as the data set gets larger: growing from 1,000 to 1,000,000 items only doubles the run time, since the logarithm doubles. Examples of O(log N) are binary searches on sorted data structures and operations on balanced binary trees.
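Binary search is the classic O(log N) example: each comparison halves the remaining search interval. A simple sketch:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    O(log N): the interval [lo, hi] halves on every iteration.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Searching 1,000,000 sorted items takes at most about 20 comparisons (log base 2 of 1,000,000).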

O(N) algorithms are very common. They take time proportional to the size of the data. Examples include appending to the end of a linked list (one without a tail pointer, so the whole list must be traversed) and searching an unsorted data structure.
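The unsorted-search example in a short sketch: with no ordering to exploit, every element may have to be examined once.

```python
def linear_search(items, target):
    # O(N): in the worst case (target absent or last) every element is checked
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1
```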

O(N log N) covers the better sorting algorithms; quicksort and mergesort are examples. Many algorithms that take a 'divide and conquer' strategy are O(N log N).
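Mergesort shows where the N log N comes from: the input is split in half log N times, and each level of splitting does O(N) merge work. A compact sketch:

```python
def merge_sort(items):
    """O(N log N): log N levels of recursion, O(N) merging per level."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge the two sorted halves in a single O(N) pass
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```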

O(N^2) is where performance begins to degrade seriously. With the data set growing from 1,000 to 1,000,000 elements, the algorithm takes 1,000,000 times as long to complete. The slower sorting algorithms, such as insertion sort and selection sort, are O(N^2).
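Insertion sort illustrates the quadratic cost: in the worst case (reverse-sorted input), each element is shifted past all the elements before it. A minimal sketch:

```python
def insertion_sort(items):
    """O(N^2) worst case: element i may shift past all i earlier elements."""
    items = list(items)  # sort a copy rather than mutating the input
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        # Shift larger elements right until key's slot is found
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key
    return items
```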

O(N^3) and above are never to be used except in an emergency. Even worse are algorithms with non-polynomial complexity such as O(2^N). Most O(2^N) algorithms are brute force, badly designed, or solving a problem that is inherently intractable. Going from a data set of 1 to 1,000 elements results in 2^999 times more operations, or about 10^300. Also avoid O(N!) algorithms.
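A typical O(2^N) brute-force example is subset sum, shown here as a hypothetical sketch (the function name and inputs are illustrative, not from the text): it enumerates every subset of the input, and an N-element set has 2^N subsets.

```python
from itertools import combinations

def subset_sum_brute_force(numbers, target):
    """O(2^N): tries every subset of `numbers` looking for one that sums
    to `target`. Returns the first match found, or None."""
    for size in range(len(numbers) + 1):
        for combo in combinations(numbers, size):
            if sum(combo) == target:
                return list(combo)
    return None
```

Fine at N = 20 (about a million subsets); hopeless at N = 100 (about 10^30).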