A measure for the efficiency of algorithms

Let A be an algorithm that solves a problem P of size n in t steps. The time complexity function T(n) of A is defined as the mathematical function that maps the non-negative integer n onto t. The time complexity of A is defined as the mathematical order of the time complexity function.
What does that mean?

In English: when we are presented with a problem involving n elements, the time complexity function tells us how many computational steps we will have to perform to solve that problem. Often, the exact number of steps involved is of no particular interest to us, but the order of magnitude is. This concept is extremely important in the development of an algorithm that needs to be executed on a computer. There are often many ways to solve a problem, and one strives to find an algorithm for which the order of the time complexity function, called the time complexity, is minimal. If the order of the function is large, a slightly bigger version of the problem (composed of only a few more elements) might require an enormous amount of extra computation. That computation takes up processor time, hence the name 'time complexity'. In even simpler terms: it shows us how fast the amount of time required to solve a problem grows when the size of the problem grows.

What is the use of all this?

We will look at three algorithms as exemplary cases. One algorithm solves a simple problem to get the idea of what time complexity is and how you calculate it. The other two are different takes on a more complicated problem to see how we can use the concept of time complexity to compare the efficiency of two algorithms.

First, we take the problem of performing a scalar multiplication on a vector of dimension n as an example. A vector is nothing more than a list of numbers. The dimension is the number of entries in the list, and scalar multiplication involves multiplying each number in the list by one particular number. We will call that number x here. Let's assume that a computer executes the following algorithm to perform the scalar multiplication:

  1. Set the current position number to 1 (or, if you are a programmer, to 0).
  2. Multiply the number at the current position in the vector by x.
  3. Check if we are at the end of the vector.
  4. If not, increment the current position and go to step 2.
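The four steps above can be sketched in code. This is just an illustration (the original text does not prescribe a language; Python is an assumption, and the function name is invented for the example):

```python
def scalar_multiply(vector, x):
    """Multiply every entry of a (non-empty) vector by x, following the steps above."""
    position = 0                          # step 1: start at the first entry (0 for programmers)
    while True:
        vector[position] *= x             # step 2: multiply the entry at the current position by x
        if position == len(vector) - 1:   # step 3: check if we are at the end of the vector
            break
        position += 1                     # step 4: not at the end, so increment and go to step 2
    return vector
```

For example, scalar_multiply([1, 2, 3], 2) yields [2, 4, 6].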

If the vector has one entry, i.e. if the dimension is 1, the computer would perform steps 1 through 3 only, so the value of the time complexity function would be 3 for n = 1 (assuming one line in our algorithm is exactly one step). When the vector has 2 elements, however, we would have to do 3 more steps (4, 2 and 3). In fact, for every extra entry in the vector we would have to do 3 extra steps. This leads us to conclude that the time complexity function of this algorithm is given by T(n) = 3n. The order of this function is n, so the time complexity of this algorithm is O(n) (linear order).

Next, let us consider the problem of sorting the numbers in a vector or a list so that the lowest numbers are at the top. For convenience we will assume that the vector contains only distinct numbers. The algorithm we will use here is the bubble sort algorithm.

This algorithm works by comparing two adjacent entries and swapping them if they are in the wrong order. This allows the lowest values to 'bubble' up to the top of the list one by one. After traversing the list for the first time, the lowest value is at the top. Because of this, we do not have to include the top element in the sort the next time. Instead, we can sort the n - 1 elements in the rest of the list. After two iterations, two elements will be in place, after three, three elements, etc. For a more detailed explanation I refer to the bubble sort node.
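The procedure just described can be sketched as follows (a hypothetical Python rendering, traversing from the bottom up so that the lowest values bubble toward the top of the list, as in the text):

```python
def bubble_sort(values):
    """Sort a list in ascending order; the smallest values 'bubble' up to the top."""
    n = len(values)
    for fixed in range(n - 1):                # after each traversal, one more top element is in place
        # traverse from the bottom up, carrying the smallest remaining value toward the top
        for i in range(n - 1, fixed, -1):
            if values[i] < values[i - 1]:     # adjacent pair in the wrong order?
                values[i], values[i - 1] = values[i - 1], values[i]  # swap them
    return values
```

For example, bubble_sort([3, 1, 2]) yields [1, 2, 3].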

In the worst-case scenario all of the elements are in exact reverse order. This means that the first traversal of a list of length n requires (n - 1) comparisons and (n - 1) swaps. Further, we need to check whether we are at the end of the list at each of the n positions. Thus, the complexity function of one traversal is Tt(n) = 2(n - 1) + n, or simply Tt(n) = 3n - 2. We will have to do n traversals in total, where each traversal is one element shorter than the last. To find the total function T(n) we sum Tt(k) over the range k = 1 to n, producing T(n) = 1.5n^2 - 0.5n. The order of this function is n^2, so the time complexity of the bubble sort algorithm is O(n^2) (quadratic order).
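The closing sum can be checked numerically; a small sketch (assuming Python, with an invented helper name):

```python
def bubble_sort_steps(n):
    """Sum the per-traversal step counts Tt(k) = 3k - 2 for k = 1 .. n."""
    return sum(3 * k - 2 for k in range(1, n + 1))

# The closed form T(n) = 1.5n^2 - 0.5n should agree for every n:
for n in range(1, 200):
    assert bubble_sort_steps(n) == 1.5 * n**2 - 0.5 * n
```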

Finally, we will explore one last example that clearly shows the importance of the concept of time complexity. We will examine another algorithm for sorting a list of numbers and we will show that it is faster than bubble sorting! The algorithm is called the merge sort. In short, it works by dividing the list we have into two separate lists. We then sort each of the separate lists individually. Next, we can easily construct the full ordered list by properly interleaving the two ordered sublists. This is called a merge of lists. It is easily checked that merging the two sorted sublists of a list of size n takes at most n - 1 comparisons. The time complexity function of the merge is then given by Tm(n) = n - 1.
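A merge of two sorted sublists might look like this (a sketch, again assuming Python; the comparison counter is added only to illustrate the n - 1 bound and is not part of the algorithm proper):

```python
def merge(left, right):
    """Interleave two sorted lists into one sorted list, counting comparisons."""
    merged, comparisons = [], 0
    i = j = 0
    while i < len(left) and j < len(right):
        comparisons += 1                  # one comparison decides each appended element
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])               # one side is exhausted; copy the remainder
    merged.extend(right[j:])
    return merged, comparisons
```

For example, merge([1, 3], [2, 4]) yields ([1, 2, 3, 4], 3), and 3 is indeed n - 1 for n = 4.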

Next, the sorting of the two sublists is also done using a merge sort! They are also split into two sublists each and merged after mergesorting their sublists, and so on. In other words, the merge sort is a recursive algorithm.
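The recursion can be sketched as follows (a self-contained illustration, assuming Python; the merge step is inlined so the sketch stands on its own):

```python
def merge_sort(values):
    """Recursively split the list in two, sort each half, then merge the halves."""
    if len(values) <= 1:                  # a list of zero or one elements is already sorted
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])       # mergesort each sublist...
    right = merge_sort(values[mid:])
    merged = []                           # ...then interleave the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]  # append whatever remains of the longer side
```

For example, merge_sort([5, 2, 8, 1]) yields [1, 2, 5, 8].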

Each time we divide all lists into sublists, we will have to merge all of them again later. If we have, as an example, a list of eight numbers, the merge from the two four-element sublists will cost us, as stated earlier, 8 - 1 = 7 comparisons. If we want to merge four sublists of two elements into two sublists of four elements, that costs us at most 7 comparisons in total as well. In general, each layer of sublists requires at most n - 1 comparisons to merge. So, for the time complexity function of the merge sort, we need to know how many times we can divide the lists until they cannot be divided any more. It is readily checked that this is log n times, with log base 2, so the time complexity function for merge sort is T(n) = (n - 1) * log n, and the time complexity itself is O(n * log n).

This is an interesting result because it shows that the merge sort is faster than the bubble sort! For very large lists, n * log n is much smaller than n^2 (it is said that n * log n = o(n^2)), and thus the merge sort can sort a big list in considerably less time than a bubble sort can, even when implemented on the same hardware.
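The gap can be made concrete with a few values (a small sketch, assuming Python; the chosen list sizes are purely illustrative):

```python
import math

# Compare the two growth rates for increasing list sizes.
for n in (10, 1_000, 1_000_000):
    nlogn = n * math.log2(n)
    ratio = n**2 / nlogn              # how many times more work the quadratic algorithm does
    print(f"n = {n:>9}: n^2 / (n log n) is about {ratio:,.0f}")
```

For a million elements the quadratic algorithm performs roughly fifty thousand times as many operations, which is the difference between seconds and many hours on the same machine.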

In conclusion, the concept of time complexity of algorithms is very useful in examining applications where run time is a limiting factor. Often, checks must be performed to see whether a computer can be expected to finish a given task in an acceptable time frame at all, or whether the problem is NP-complete.
