Shannon-Fano coding is a method of designing efficient codes for sources with known symbol probabilities. The codes it produces are sub-optimal compared with Huffman codes, but the method is easier to explain and to carry out by hand.

The method was described by Claude Shannon in 1948 and developed independently by Robert Fano in 1949.

### Source Coding

For example, assume we have six source symbols A, B, C, D, E, and F, with respective probabilities 1/2, 1/3, 1/12, 1/15, 1/120, and 1/120. We could encode these using a simple binary code: let A be 000, B 001, C 010, D 011, E 100, and F 101. However, notice that A occurs as often as all the other symbols combined, and that E and F hardly ever occur. It would be far more efficient to assign a shorter codeword to A, and longer codewords to E and F.
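To make the cost of the fixed-length approach concrete, here is a minimal Python sketch (the message `AABAAC` is purely illustrative, not from the text) showing that a 3-bit code spends 3 bits on every symbol, however common:

```python
# Hypothetical fixed-length 3-bit code for the six symbols.
fixed = {"A": "000", "B": "001", "C": "010", "D": "011", "E": "100", "F": "101"}

message = "AABAAC"  # illustrative message only
encoded = "".join(fixed[s] for s in message)
print(encoded, len(encoded))  # 18 bits: 3 per symbol, regardless of frequency
```

Every symbol costs 3 bits here, even A, which accounts for half of all occurrences on average.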

### Shannon-Fano Procedure

The Shannon-Fano method of encoding these symbols in binary proceeds as follows. Write the source symbols in a column in descending probability order. Choose a point between two symbols such that the summed probabilities above and below are as close to equal as possible, and draw a horizontal line. Start a new column to the right of the first, writing zeros above the line and ones below. The diagram will now look like this:

A 1/2   | 0
-----------
B 1/3   | 1
C 1/12  | 1
D 1/15  | 1
E 1/120 | 1
F 1/120 | 1

The source symbols above the line now have complete codewords; those below remain uncoded. Repeat the process on the uncoded symbols: choose a point between two of the remaining symbols so that the summed probabilities above and below it are as nearly equal as possible, draw another horizontal line, and again write zeros above and ones below:

A 1/2   | 0
-----------
B 1/3   | 1 0
-------------
C 1/12  | 1 1
D 1/15  | 1 1
E 1/120 | 1 1
F 1/120 | 1 1

This process continues until only one source symbol remains below the last line. In this case, the final diagram looks like this:

A 1/2   | 0
-----------
B 1/3   | 1 0
-------------
C 1/12  | 1 1 0
---------------
D 1/15  | 1 1 1 0
-----------------
E 1/120 | 1 1 1 1 0
-------------------
F 1/120 | 1 1 1 1 1
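The splitting procedure above can be sketched as a short recursive Python function. This is a minimal sketch under my own conventions (the function name `shannon_fano` and the list-of-pairs representation are not from the text): it finds the split point that most nearly balances the probabilities, prefixes 0 to the upper group and 1 to the lower group, and recurses.

```python
def shannon_fano(symbols):
    """symbols: list of (name, probability), sorted by descending probability."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total = sum(p for _, p in symbols)
    # Find the split index that makes the two halves' probabilities most equal.
    best_i, best_diff, acc = 1, float("inf"), 0.0
    for i in range(1, len(symbols)):
        acc += symbols[i - 1][1]
        diff = abs(acc - (total - acc))
        if diff < best_diff:
            best_i, best_diff = i, diff
    # Prefix 0 to the upper group's codewords and 1 to the lower group's.
    codes = {}
    for name, code in shannon_fano(symbols[:best_i]).items():
        codes[name] = "0" + code
    for name, code in shannon_fano(symbols[best_i:]).items():
        codes[name] = "1" + code
    return codes

source = [("A", 1/2), ("B", 1/3), ("C", 1/12),
          ("D", 1/15), ("E", 1/120), ("F", 1/120)]
codes = shannon_fano(source)
print(codes)
```

Running this on the six-symbol source reproduces the codewords read off the diagram: A is 0, B is 10, and so on down to F, 11111.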

### Measuring Efficiency

Now we have a series of codewords for our symbols: A is 0, B is 10, C is 110, D is 1110, E is 11110, and F is 11111. Weighting by probability, we can calculate the average length of the code:

L_{ave} = 1/2 * 1 + 1/3 * 2 + 1/12 * 3 + 1/15 * 4 + 1/120 * 5 + 1/120 * 5

L_{ave} = **1.767** bits
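This arithmetic can be checked exactly with Python's `fractions` module (a quick sketch; the two dictionaries simply restate the probabilities and codeword lengths derived above):

```python
from fractions import Fraction

# Codeword lengths of the Shannon-Fano code derived above.
lengths = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 5}
probs = {"A": Fraction(1, 2), "B": Fraction(1, 3), "C": Fraction(1, 12),
         "D": Fraction(1, 15), "E": Fraction(1, 120), "F": Fraction(1, 120)}

l_ave = sum(probs[s] * lengths[s] for s in probs)
print(l_ave, float(l_ave))  # exactly 53/30, about 1.767 bits per symbol
```

Working in exact fractions confirms that 1.767 is a rounding of 53/30.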

Compared with the simplistic 3-bit binary code we first tried, this uses only about 59% as many output bits. We can calculate the efficiency of a code by comparing its average length with the minimum possible length. The minimum length is defined as the entropy of the source divided by log to the base 2 of the number base of the output; in this case, with binary output, it is simply equal to the entropy of the source:

H(X) = - Σ_{k} p_{k} log_{2}(p_{k})

H(X) = - 1/2 log_{2}(1/2) - 1/3 log_{2}(1/3) - 1/12 log_{2}(1/12) - 1/15 log_{2}(1/15) - 1/120 log_{2}(1/120) - 1/120 log_{2}(1/120)

H(X) = **1.702** bits

Therefore the minimum possible average length for a binary code of these symbols is 1.702 bits per symbol. This means that the 3-bit binary code has an efficiency of 1.702/3 = 56.7%, while the Shannon-Fano code has an efficiency of 1.702/1.767 = 96.3%.
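The entropy and both efficiency figures can be reproduced in a few lines of Python (a sketch that simply restates the probabilities and codeword lengths used above):

```python
import math

probs = [1/2, 1/3, 1/12, 1/15, 1/120, 1/120]
lengths = [1, 2, 3, 4, 5, 5]  # Shannon-Fano codeword lengths

entropy = -sum(p * math.log2(p) for p in probs)     # about 1.70 bits
l_ave = sum(p * l for p, l in zip(probs, lengths))  # about 1.77 bits

print("entropy          =", entropy)
print("fixed-length eff =", entropy / 3)      # about 0.57
print("Shannon-Fano eff =", entropy / l_ave)  # about 0.96
```

Note that computing with full precision, rather than the rounded values 1.702 and 1.767, shifts the efficiency figures slightly in the third decimal place.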
