Example Code:

{ Assumes a type and helper defined elsewhere, e.g.:
    type anarray = array[1..MAXN] of integer;
    procedure swap(var x, y : integer);
}
procedure selection( var a : anarray; N : integer);

 var i, j, min : integer;

begin

 for i := 1 to N-1 do
  begin

   {Elements from 1 to i-1 are in the correct place}

   {Find the index of the smallest element in a[i..N]}
   min := i;
   for j := i+1 to N do
    if a[j] < a[min] then min := j;

   swap(a[i], a[min]);

   {Elements from 1 to i are in the correct place}

  end;

end;


This algorithm is of order n²; that is, the number of comparisons it makes grows as the square of the number of elements to be sorted. For those of you who understand C:

void sort (int sortme[], int items) {
     int i, j, smallest, index;

     for (i = 0; i < items - 1; i++) {
          /* Assume the element at i is the smallest so far... */
          smallest = sortme[i];
          index = i;
          /* ...then scan the rest of the unsorted region for smaller ones. */
          for (j = i + 1; j < items; j++) {
               if (sortme[j] < smallest) {
                    smallest = sortme[j];
                    index = j;
               }
          }
          /* Swap the smallest element into position i. */
          if (index != i) {
               sortme[index] = sortme[i];
               sortme[i] = smallest;
          }
     }
}



Selection sort is a sorting algorithm that nearly every student of computer science knows. Its name comes from the fact that on each pass, the algorithm selects the largest remaining element and moves it into place. It has time complexity Θ(n²); because of this, algorithms like quicksort and merge sort, with time complexity O(n log n), are often used instead. It is in-place and requires Θ(n) exchanges and Θ(n²) comparisons. The usual swap-based version is not stable, as we will see below.

The selection sort algorithm is:

  1. Find the largest value V in list L.
  2. Swap V with the last unprocessed element of L.
  3. Repeat until all elements of L have been processed.

NB: the two implementations above work the other way around, finding the smallest value and swapping it with the first unprocessed element of the list. The difference is trivial (almost... but we'll get to that).
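For concreteness, here is a sketch of the largest-to-end variant in C. The function and variable names are mine, not from either implementation above; note that it uses > in the comparison, which will matter in the stability discussion below.

/* Largest-to-end selection sort: on each pass, find the largest
   value among the unprocessed elements and swap it with the last
   unprocessed element. */
void selection_largest(int a[], int n) {
     int last, j, biggest, tmp;

     for (last = n - 1; last > 0; last--) {
          /* Find the index of the largest element in a[0..last]. */
          biggest = 0;
          for (j = 1; j <= last; j++) {
               if (a[j] > a[biggest])
                    biggest = j;
          }
          /* Move it to the end of the unprocessed region. */
          tmp = a[biggest];
          a[biggest] = a[last];
          a[last] = tmp;
     }
}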

An example should drive the point home. Consider the following list of integers as the sequence to be sorted.

2 1 7 9 4 2 6

1. The first step is to find the largest value of the list.

2 1 7 9 4 2 6

2. Now swap it with the last element of the list.

2 1 7 6 4 2 9

3. Ignore the last element of the list.

2 1 7 6 4 2 | 9

Repeat the process:

2 1 7 6 4 2 | 9  (find)
2 1 2 6 4 7 | 9  (swap)
2 1 2 6 4 | 7 9  (ignore)

2 1 2 6 4 | 7 9  (find)
2 1 2 4 6 | 7 9  (swap)
2 1 2 4 | 6 7 9  (ignore)

2 1 2 4 | 6 7 9  (find)
2 1 2 4 | 6 7 9  (swap)
2 1 2 | 4 6 7 9  (ignore)

2 1 2 | 4 6 7 9  (find)
2 1 2 | 4 6 7 9  (swap) *
2 1 | 2 4 6 7 9  (ignore)

2 1 | 2 4 6 7 9  (find)
1 2 | 2 4 6 7 9  (swap)
1 | 2 2 4 6 7 9  (ignore)

1 2 2 4 6 7 9

In this example, stability was not maintained. Notice that the two elements with value 2 swapped places; this happened at the step marked with the asterisk. With this algorithm, the simple fix for that step is to find the element with the largest value that is closest to the end, done by using >= instead of > in the comparison between the current element and the largest found so far: the rightmost largest element is then swapped with itself and nothing moves. Be warned, though, that no swap-based selection sort is stable in general, because the element swapped down into the unsorted region can jump past elements equal to it; the smallest/first variant used by yossarian and flamingweasel in their implementations above suffers from the same problem, and a truly stable selection sort must insert the selected element and shift the others rather than swap. This is the sole reason why I chose to sort by finding the largest value in this writeup: so you would become familiar with the idea that a simple change can have unforeseen consequences for an otherwise working algorithm.
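In code, the fix is a one-character change to the selection_largest sketch above (again, the names are mine):

/* selection_largest with the comparison changed so that, among
   equal values, the one closest to the end is selected.  This
   avoids the destabilizing swap in the example above, though as
   noted it does not make the algorithm stable on every input. */
void selection_largest2(int a[], int n) {
     int last, j, biggest, tmp;

     for (last = n - 1; last > 0; last--) {
          biggest = 0;
          for (j = 1; j <= last; j++) {
               if (a[j] >= a[biggest])    /* was: > */
                    biggest = j;
          }
          tmp = a[biggest];
          a[biggest] = a[last];
          a[last] = tmp;
     }
}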

Selection sort has time complexity Θ(n²) in all cases: best, worst, and average. In layman's terms, this means that selection sort requires essentially the same amount of time regardless of the initial order of the data; the first pass always makes n-1 comparisons, the second n-2, and so on, for a total of n(n-1)/2. Conversely, another sorting algorithm, bubble sort, can require less and less time as the sortedness of the data increases. The n² means that as the list of items to be sorted grows, the time required to sort them grows quadratically.
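One way to convince yourself of this input-independence is to count the comparisons; a minimal sketch (the harness and names are mine):

#include <stdio.h>

/* Selection sort instrumented to count comparisons; the count is
   n*(n-1)/2 regardless of how the input is ordered. */
long sort_counting(int a[], int n) {
     long comparisons = 0;
     int i, j, index, tmp;

     for (i = 0; i < n - 1; i++) {
          index = i;
          for (j = i + 1; j < n; j++) {
               comparisons++;
               if (a[j] < a[index])
                    index = j;
          }
          tmp = a[index]; a[index] = a[i]; a[i] = tmp;
     }
     return comparisons;
}

int main(void) {
     int sorted[]   = {1, 2, 3, 4, 5, 6, 7};
     int reversed[] = {7, 6, 5, 4, 3, 2, 1};

     /* Both lines print 21, i.e. 7*6/2. */
     printf("%ld\n", sort_counting(sorted, 7));
     printf("%ld\n", sort_counting(reversed, 7));
     return 0;
}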

This algorithm has space complexity O(1) in all cases. This means the memory required by the algorithm is constant regardless of how much data is processed; usually it amounts to two loop variables and another variable to keep track of which element is the largest so far. Some naïve implementations have space complexity O(n) because they require enough extra memory to hold a second copy of the list: they repeatedly find the smallest element of the input list, remove it, and place it at the end of a separate output list. This extra memory is unnecessary, as the algorithm can be implemented to sort in-place.
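For contrast, here is a sketch of that naïve out-of-place version (names mine). It marks elements as used instead of physically removing them, which amounts to the same O(n) extra memory:

#include <stdlib.h>

/* Naive selection sort with O(n) extra space: repeatedly find the
   smallest remaining input element, append it to a separate output
   list, and mark it as removed. */
void sort_out_of_place(const int in[], int out[], int n) {
     int pass, j, index;
     char *used = calloc(n, 1);    /* the extra O(n) memory */

     for (pass = 0; pass < n; pass++) {
          index = -1;
          for (j = 0; j < n; j++) {
               if (!used[j] && (index == -1 || in[j] < in[index]))
                    index = j;
          }
          out[pass] = in[index];    /* place it at the end of the output */
          used[index] = 1;          /* "remove" it from the input */
     }
     free(used);
}

As a side note, this wasteful version happens to be stable: equal elements are appended to the output in their original order rather than swapped past each other.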

A very common improvement to selection sort produces an algorithm known as heapsort: elements are first put into a heap, and then the basic selection sort process is used, giving O(n log n) running time. A different improvement produces bingo sort, which is the same as selection sort except that when multiple elements share the same value, all of them are moved into place in the same pass.
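To make the "selection via a heap" idea concrete, here is a minimal array-based heapsort sketch (function names mine, not from the source):

/* Sift the value at index root down a max-heap of the given size. */
static void sift_down(int a[], int root, int size) {
     int child, tmp;

     while (2 * root + 1 < size) {
          child = 2 * root + 1;                        /* left child */
          if (child + 1 < size && a[child + 1] > a[child])
               child++;                                /* right child is larger */
          if (a[root] >= a[child])
               break;                                  /* heap property holds */
          tmp = a[root]; a[root] = a[child]; a[child] = tmp;
          root = child;
     }
}

/* Heapsort: build a max-heap, then repeatedly "select" the largest
   element (the root) and swap it to the end; the same largest-to-end
   pattern as above, but each selection now costs O(log n) instead
   of O(n). */
void heapsort_sketch(int a[], int n) {
     int i, tmp;

     for (i = n / 2 - 1; i >= 0; i--)     /* heapify the whole array */
          sift_down(a, i, n);
     for (i = n - 1; i > 0; i--) {
          tmp = a[0]; a[0] = a[i]; a[i] = tmp;   /* root to the end */
          sift_down(a, 0, i);                    /* restore the heap */
     }
}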


Sources:
http://www.nist.gov/dads/HTML/selectionsrt.html
