In data communications and storage technology,
Error Correcting Codes (ECC) make it possible
to reconstruct the original data if it is
damaged in transit (by a noisy modem line,
for example) or on the storage medium itself
(such as by a scratch on a CD).
All of the many available algorithms rely on some
redundancy in the transmitted data.
A simple and effective, but quite expensive (in terms
of bandwidth and storage), method is to send each bit three times.
An example:
01011 => 000 111 000 111 111 => noise => 001 111 010 110 011
Each "triplet" is then decoded to the dominant (majority) bit:
001 => 0
111 => 1
010 => 0
110 => 1
011 => 1
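This triple-repetition scheme can be sketched in a few lines of
Python (a minimal illustration; the function names are my own):

```python
def encode(bits):
    """Repeat each bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(bits):
    """Majority-vote each triplet back to a single bit."""
    return [1 if sum(bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(bits), 3)]

# The noisy transmission from the example above
noisy = [0,0,1, 1,1,1, 0,1,0, 1,1,0, 0,1,1]
recovered = decode(noisy)  # [0, 1, 0, 1, 1] -- the original 01011
```

Majority voting survives one flipped bit per triplet; two flips in the
same triplet would still defeat it.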
All right, let's try a more efficient scheme.
A simple, yet often sufficient,
algorithm is VRC+LRC (Vertical + Longitudinal
Redundancy Check). It is basically two separate checksums,
one "horizontal" and one "vertical",
which together can pinpoint a single-bit error in a
block. Consider a data block of six 7-bit words:
1 0 1 1 0 1 1
1 1 0 1 0 1 1
0 0 1 1 1 0 1
1 1 1 1 0 0 0
1 0 0 0 1 0 1
0 1 0 1 1 1 1
We add a parity bit (1 if the number
of 1s in the word is odd, 0 otherwise)
to each word, and a parity "row" at the
bottom:
1 0 1 1 0 1 1 1
1 1 0 1 0 1 1 1
0 0 1 1 1 0 1 0
1 1 1 1 0 0 0 0
1 0 0 0 1 0 1 1
0 1 0 1 1 1 1 1
0 1 1 1 1 1 1 0
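The table above can be generated with a short sketch (Python; even
parity is assumed, and the function names are my own):

```python
def parity(bits):
    """Parity bit: 1 if the number of 1s is odd, 0 otherwise."""
    return sum(bits) % 2

def add_checks(block):
    """Append a parity (VRC) bit to each word, then a parity (LRC)
    row computed over the columns of the result."""
    rows = [word + [parity(word)] for word in block]
    return rows + [[parity(col) for col in zip(*rows)]]

words = [[1,0,1,1,0,1,1],
         [1,1,0,1,0,1,1],
         [0,0,1,1,1,0,1],
         [1,1,1,1,0,0,0],
         [1,0,0,0,1,0,1],
         [0,1,0,1,1,1,1]]
for row in add_checks(words):
    print(*row)
```

Note that the bottom-right corner checks both the parity column and
the parity row, so every row and every column of the full table ends
up with an even number of 1s.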
Now, should one of the bits be
inverted by line noise, we will end up with one
parity error in the vertical check column
(flagging the bad row) and one in the row at the
bottom (flagging the bad column). Their intersection
is all we need to correct it! This simple
scheme can only correct a single error per block,
but in many cases that will suffice.
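The correction step can be sketched like this (a minimal,
self-contained Python illustration; names are my own):

```python
def parity(bits):
    """1 if the number of 1 bits is odd, 0 otherwise."""
    return sum(bits) % 2

def correct_single_error(block):
    """Flip the bit at the intersection of the one row and the one
    column whose parity check fails. `block` includes the check
    bits, so every row and column should have even parity."""
    bad_rows = [r for r, row in enumerate(block) if parity(row)]
    bad_cols = [c for c, col in enumerate(zip(*block)) if parity(col)]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        block[bad_rows[0]][bad_cols[0]] ^= 1
    return block

# The block from the example, with one bit flipped by "noise"
noisy = [[1,0,1,1,0,1,1,1],
         [1,1,0,1,0,1,1,1],
         [0,0,1,1,0,0,1,0],   # fifth bit flipped from 1 to 0
         [1,1,1,1,0,0,0,0],
         [1,0,0,0,1,0,1,1],
         [0,1,0,1,1,1,1,1],
         [0,1,1,1,1,1,1,0]]
fixed = correct_single_error(noisy)
```

After the call, `fixed` matches the original table again: the third
row and the fifth column each fail their parity check, and flipping
the bit where they cross restores the data.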
ECCs are becoming ubiquitous: most parallel ports in
modern computers can do ECC, RAM chips
that employ ECC are available, and
ECC has been used on audio CDs for years.