The radix sort is not a comparison sort. It works in much the same way as the punch card sorting machines once used for data processing. To get a rough idea of how it works, consider the following example.

Suppose we have a set of 3-digit numerical keys, where each 3-digit key is punched on a separate card in a deck of cards. The deck of cards to be sorted is placed in a bin on the sorting machine, and cards to be sorted are fed from the bottom of the deck. The machine can be set to sort based on either the first, the middle, or the last digit of the key. Suppose C is the card on the bottom of the deck that is currently being read by the sorting machine. If the digit currently selected in C is a 0, then C is fed mechanically to a bin, where it drops down on top of the pile of cards whose selected digit is 0. In general, if C's currently selected digit is d (0 through 9), it drops on top of the corresponding digit-d pile.

After the entire deck has been processed, the piles of cards in the digit bins are stacked to reconstitute a new deck, with the 9's pile on the bottom, the 8's pile next to the bottom, and so on. This deck is fed through the sorting machine again, with a different digit selected for sorting. In this manner, the entire deck can be sorted in 3 passes. For keys with N digits, therefore, it takes N passes to sort the deck.
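The digit the machine selects on each pass can be computed with plain integer arithmetic. A minimal sketch (the helper name `digit_at` is my own, not part of the implementation below):

```c
#include <assert.h>

/* Return the digit of `key` at position `pass` (0 = least significant)
   in the given base -- the software analogue of the machine's
   digit selector. Assumes key >= 0. */
int digit_at(int key, int base, int pass)
{
    while (pass-- > 0)
        key /= base;
    return key % base;
}
```

For the 3-digit key 352 in base 10, successive passes select 2, 5, and 3.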


### Radix sort is a generalized quick sort.

Quick sort partitions a vector into two bins: elements less than a pivot X and elements greater than or equal to X. It then sorts each bin recursively. Radix sort, which is a multiple-pass bin sort or bucket sort, differs from quick sort only in that radix sort uses more than two bins and makes each bin on a given pass cover an equal range of key values, which reduces comparisons to shifts and vector indexing. For a keyspace of size m and using b bins, radix sort makes ceil(log_b(m)) passes over the entire vector of n elements to obtain performance that scales at O(n*log(m)), equivalent to that of quick sort in cases where m = k*n.
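Concretely, a keyspace of 1000 values with 10 bins needs 3 passes, and a 32-bit keyspace with 256 bins needs 4. A sketch of that pass count, written with integer arithmetic to sidestep floating-point rounding (the function name is my own, not from the implementation below):

```c
/* Number of base-b passes needed to cover a keyspace of m values:
   the smallest p such that b^p >= m. Illustrative helper only;
   assumes the running product b^p does not overflow. */
int passes_needed(unsigned long long m, unsigned long long b)
{
    int passes = 0;
    unsigned long long span = 1;

    while (span < m) {
        span *= b;
        passes++;
    }
    return passes;
}
```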

### So why not just use quick sort?

Quick sort doesn't buy you any parallelism. A sorter implemented in hardware (such as a punch card sorter, a radish sorter, or the original PlayStation's graphics hardware) will often exploit the parallelism that radix sort allows, especially with a large number of bins (256?). When sorting integers, fixed-point numbers, or IEEE standard floating-point numbers, you have a fixed keyspace of 2^32 or 2^64, and the time complexity formula reduces to O(n).

Radix sort also has the property that it needs no stack space for recursion. Knuth has shown that if you sort based on the least significant digit and then move toward the most significant digit, everything ends up sorted in the end, provided you've implemented your partitioning in a stable manner.

Of course, when sorting large data types based on a key field, it helps to sort references to the data (such as indices or pointers) so as to avoid the overhead of needless copies.
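One stable counting pass over an index array might look like the following sketch; the `struct record` type and function name are my own illustration, not part of the implementation below:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative record type with a large payload. */
struct record {
    int key;
    char payload[64];
};

/* One stable counting pass that reorders the index array `idx` by
   each record's key digit (selected by pval), leaving the records
   themselves in place. Returns 0 on success, -1 on allocation failure. */
int sort_indices_by_digit(const struct record *recs, int *idx,
                          int n, int base, int pval)
{
    int *counts = calloc(base, sizeof *counts);
    int *out = malloc(n * sizeof *out);

    if (counts == NULL || out == NULL) {
        free(counts);
        free(out);
        return -1;
    }

    /* Count keys per digit value, convert to offsets, then place
       indices walking backward so equal digits keep their order. */
    for (int j = 0; j < n; j++)
        counts[(recs[idx[j]].key / pval) % base]++;
    for (int i = 1; i < base; i++)
        counts[i] += counts[i - 1];
    for (int j = n - 1; j >= 0; j--)
        out[--counts[(recs[idx[j]].key / pval) % base]] = idx[j];

    memcpy(idx, out, n * sizeof *idx);
    free(counts);
    free(out);
    return 0;
}
```

Only the small index array moves; the 64-byte payloads are never copied.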

© 2001 Damian Yerrick. Verbatim copying and redistribution are permitted.

### A C implementation of a radix sort.

```c
#include <stdlib.h>
#include <string.h>

/* Sort `size` integers in `data` using `passes` least-significant-digit
   radix sort passes in the given `base`.
   Returns 0 on success, -1 on allocation failure. */
int rxsort(int *data, int size, int passes, int base)
{
    int *counts, *temp;
    int index, pval, i, j, n;

    if ((counts = malloc(base * sizeof(int))) == NULL)
        return -1;

    if ((temp = malloc(size * sizeof(int))) == NULL) {
        free(counts);    /* don't leak the counts array */
        return -1;
    }

    for (n = 0, pval = 1; n < passes; n++, pval *= base) {
        /* Count how many keys have each digit value in this position. */
        for (i = 0; i < base; i++)
            counts[i] = 0;
        for (j = 0; j < size; j++) {
            index = (data[j] / pval) % base;
            counts[index]++;
        }

        /* Turn the counts into offsets: counts[i] becomes one past
           the last output slot for digit value i. */
        for (i = 1; i < base; i++)
            counts[i] += counts[i - 1];

        /* Walk backward so equal digits keep their relative order,
           which is the stability the LSD method depends on. */
        for (j = size - 1; j >= 0; j--) {
            index = (data[j] / pval) % base;
            temp[--counts[index]] = data[j];
        }
        memcpy(data, temp, size * sizeof(int));
    }
    free(counts);
    free(temp);

    return 0;
}
```
