The Gauss-Seidel method is a remarkably easy to implement iterative method
for solving systems of linear equations, based on the Jacobi iteration method.
It is even easier to implement than Jacobi's method (it can be done in only
tens of lines of C code) and it is generally faster, but its convergence speed
still makes this method only of theoretical interest.

*If you are already familiar with Jacobi's iteration method
you can skip this paragraph, as long as you glance
at the notation I've chosen.*

Suppose you wish to solve an (n,n) linear system. This system
can be written:

A * x = b

where A is an (n,n) matrix, and x and b are (n,1) column vectors.

The Jacobi iteration method is to write the matrix A in the form

A = D + N

where D is a diagonal matrix and N is a matrix with only 0's on the
diagonal. Rewriting the above formula with A's decomposition gives

(D + N) * x = b, hence D * x = b - N * x

The idea behind iteration is that you start with an estimate x_{0}
of the solution, possibly random (it should be chosen as close as possible
to the solution to speed up convergence), and apply a recursive formula
x_{n+1} = f(x_{n}). The function f() should be chosen so that the
sequence (x_{0}, x_{1}, ...) converges to the solution.

The above formula suggests you can compute the "new" value of
x given its "old" value. This gives the following iteration formula:

x_{n+1} = D^{-1} * b - D^{-1} * N * x_{n}

The interesting thing about this formula is that D^{-1} * b and
D^{-1} * N are constant throughout the iteration, so they can be
computed once before iterating.
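To make the component form concrete, here is a minimal sketch of one
Jacobi sweep in C. It assumes A is a dense (n,n) matrix stored row-major
in a flat array; the function name and storage layout are illustrative
choices, not part of the method itself.

```c
#include <stddef.h>

/* One Jacobi sweep: fills x_new from x_old using
 *   x_new[i] = (b[i] - sum_{j != i} A[i][j] * x_old[j]) / A[i][i],
 * which is the component form of x_{n+1} = D^{-1} * b - D^{-1} * N * x_{n}.
 * The diagonal entries A[i][i] must be non-zero. */
static void jacobi_sweep(size_t n, const double *A, const double *b,
                         const double *x_old, double *x_new)
{
    for (size_t i = 0; i < n; ++i) {
        double s = b[i];
        for (size_t j = 0; j < n; ++j) {
            if (j != i)
                s -= A[i * n + j] * x_old[j];   /* the N * x_old part */
        }
        x_new[i] = s / A[i * n + i];            /* multiply by D^{-1} */
    }
}
```

A full Jacobi iteration then copies x_new back over x_old (for instance
with memcpy) before the next sweep; Gauss-Seidel, described next, removes
exactly this copy.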

# Gauss-Seidel

In order to compute x_{n+1} from x_{n}, one would probably
allocate new storage for the elements of the vector and compute
them one by one. Then, to complete the iteration, x_{n+1} must be
copied where x_{n} was, and the situation is ready to start over.

x_{n+1} `<-` D^{-1} * b - D^{-1} * N * x_{n}

x_{n} `<-` x_{n+1}

Gauss-Seidel's approach is to compute each element of x_{n+1}
and directly update x_{n} with that value. This eliminates the need
for a separate memory location. The calculation is done "in place" instead
of being copied to another location.

Let x^{k} be the k-th element of vector x:

x_{n}^{k} `<-` (D^{-1} * b)^{k} - (D^{-1} * N * x_{n})^{k}
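The same sweep written in place is just as short. Below is a minimal
sketch under the same dense, row-major storage assumption as the Jacobi
snippet above; a single vector x serves as both the old and the new
estimate.

```c
#include <stddef.h>

/* One Gauss-Seidel sweep: updates x in place.
 * When element i is computed, elements 0..i-1 already hold their new
 * values and elements i+1..n-1 still hold their old ones, so no second
 * vector and no final copy are needed. */
static void gauss_seidel_sweep(size_t n, const double *A, const double *b,
                               double *x)
{
    for (size_t i = 0; i < n; ++i) {
        double s = b[i];
        for (size_t j = 0; j < n; ++j) {
            if (j != i)
                s -= A[i * n + j] * x[j];   /* mixes new and old values */
        }
        x[i] = s / A[i * n + i];            /* overwrites x_{n}^{i} directly */
    }
}
```

Using the freshly updated values for the earlier components is what gives
Gauss-Seidel its speed advantage over Jacobi, on top of saving the extra
vector.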

# Strengths and drawbacks

Gauss-Seidel iteration has the same drawbacks as Jacobi iteration, including
but not limited to:

- The matrix D is not always invertible. One **must** check that
the values on the diagonal are non-zero before starting the iteration,
because a zero on the diagonal leads to unpredictable results
(a simple check is sketched after this list).
- The spectral radius of the matrix D^{-1} * N must be < 1
for the iteration to converge.
Think of the spectral radius (the largest of the moduli of the eigenvalues)
as the greatest scalar factor the iteration matrix can apply to a vector.
If it is greater than 1, iteration on the corresponding eigenvector will diverge.
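A convenient, easy-to-test sufficient (but not necessary) condition that
covers both points at once is strict diagonal dominance. The sketch below
checks it for the same dense, row-major storage used in the earlier
snippets; the function name is an illustrative choice.

```c
#include <math.h>
#include <stddef.h>

/* Returns 1 if the dense (n,n) matrix A (row-major) is strictly
 * diagonally dominant, i.e. |A[i][i]| > sum of |A[i][j]| for j != i
 * on every row. This rules out zeros on the diagonal and is a
 * sufficient condition for both Jacobi and Gauss-Seidel to converge. */
static int is_strictly_diagonally_dominant(size_t n, const double *A)
{
    for (size_t i = 0; i < n; ++i) {
        double off_diag = 0.0;
        for (size_t j = 0; j < n; ++j) {
            if (j != i)
                off_diag += fabs(A[i * n + j]);
        }
        if (fabs(A[i * n + i]) <= off_diag)
            return 0;
    }
    return 1;
}
```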

But tests I've conducted on a fair number of random, diagonally dominant
matrices representing linear systems show that this method is between two
and three times faster than the traditional Jacobi iteration method.

Besides, such matrices are frequently encountered when applying
finite difference methods, for example when solving boundary value
differential problems.

# Conclusion

Jacobi iteration and Gauss-Seidel iteration are good numerical analysis
reference methods against which to evaluate the performance of other
linear system solving schemes. Besides, Gauss-Seidel iteration turns out
to be very important in multigrid relaxation methods.

The Gauss-Seidel method has a **faster convergence speed than Jacobi's
iteration method** (about twice as fast) and is even **easier to implement**:
it takes only tens of lines of C code to program. This makes it a method of
choice for computer scientists who want to test the validity of a model.
However, I really urge them to use a better relaxation method if they
intend to do anything even slightly serious, because the typical convergence
speed is so slow that it makes this method **only of theoretical interest**.
It takes O(p * J^{2}) iterations to reduce the error by a factor of 10^{p}
for a (J,J) linear system with a diagonally dominant matrix.

##### Numerical Recipes in C - 19.5 Relaxation Methods for Boundary Value Problems