Radix sort is a generalized quick sort.

Quick sort partitions a vector into two bins: elements greater than X and elements less than X. It then sorts each bin recursively. Radix sort, which is a multiple-pass bin sort or bucket sort, differs from quick sort only in that radix sort uses more than two bins and makes all the bins on a given pass equal in size, reducing comparisons to shifts and vector indexing. For a keyspace of size m and using b bins, radix sort makes ceil(log_b(m)) passes over the entire vector of n elements, for performance that scales as O(n*log(m)) — equivalent to quick sort's O(n*log(n)) in cases where m = k*n.
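As a sketch of one such pass, here is a minimal Python version (the name `bin_pass` is hypothetical; digits are taken in base b with division and modulus rather than shifts, for clarity):

```python
def bin_pass(keys, b, p):
    """One pass of radix sort: distribute keys into b equal-size-range
    bins by the p-th base-b digit, then concatenate the bins in order.
    Appending preserves the relative order of keys with equal digits,
    so the pass is stable."""
    bins = [[] for _ in range(b)]
    for key in keys:
        bins[(key // b**p) % b].append(key)
    return [key for one_bin in bins for key in one_bin]
```

Running this pass once per digit, least significant first, sorts the whole vector — ceil(log_b(m)) passes in all.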

So why not just use quick sort?

Quick sort doesn't buy you any parallelism. A sorter implemented in hardware (such as a punch card sorter, a radish sorter, or the original PlayStation's graphics hardware) will often exploit the parallelism that radix sort allows, especially with a large number of bins (say, 256). When sorting integers, fixed-point numbers, or IEEE standard floating-point numbers, you have a fixed keyspace of 2^32 or 2^64, and the time complexity formula reduces to O(n).

Radix sort also has the property that you don't need stack space for recursion. Knuth has shown that if you sort based on the least significant digit and then move toward the most significant digit, everything ends up sorted in the end, provided you've implemented your partitioning in a stable manner.
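A minimal Python sketch of that least-significant-digit-first loop (the name `lsd_radix_sort` is hypothetical; it assumes non-negative integer keys below bins**passes). Note that it is a plain loop — no recursion, so no stack:

```python
def lsd_radix_sort(keys, bins=256, passes=4):
    """Sort non-negative integers < bins**passes, least significant
    digit first.  Each pass is a stable bin sort on one base-`bins`
    digit; stability is what lets later passes preserve earlier work."""
    for p in range(passes):
        buckets = [[] for _ in range(bins)]
        div = bins ** p
        for key in keys:
            buckets[(key // div) % bins].append(key)  # append keeps order: stable
        keys = [k for bucket in buckets for k in bucket]
    return keys
```

With bins=256 this is the byte-at-a-time arrangement mentioned above: four passes cover a full 2^32 keyspace.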

Of course, when sorting large data types based on a key field, it helps to sort references to the data (such as indices or pointers) so as to avoid the overhead of needless copies.
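As a sketch of that idea, an argsort-style variant (hypothetical names; assumes a non-negative integer key field) that shuffles only indices, never the records themselves:

```python
def radix_argsort(records, key, bins=256, passes=4):
    """Return the indices that sort `records` by an integer key field.
    Only small indices move between passes; the (possibly large)
    records are never copied."""
    order = list(range(len(records)))
    for p in range(passes):
        buckets = [[] for _ in range(bins)]
        div = bins ** p
        for i in order:
            buckets[(key(records[i]) // div) % bins].append(i)
        order = [i for bucket in buckets for i in bucket]
    return order
```

The caller can then walk `order` to read the records in sorted sequence, or permute them once at the end.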


© 2001 Damian Yerrick. Verbatim copying and redistribution are permitted.