In linear algebra, the singular value decomposition (SVD) is a powerful tool for analyzing the properties of a matrix.

Fundamental to the idea of the SVD is the theorem which states that every matrix (say the m×n matrix A) can be expressed as a product of three matrices:

A = UΣVT

Here, U is an m×m orthogonal matrix, Σ is an m×n diagonal matrix, and V is an n×n orthogonal matrix (the 'T' superscript denotes the transpose, as usual). This decomposition of A into U, Σ, and V is called the singular value decomposition of A.

The diagonal values of Σ are called the singular values of A. If we define p = min(m, n), we can denote them σ1, σ2, ..., σp, reading from the top left to the bottom right of Σ. They are always ordered from largest to smallest. If r is the rank of A, then there are exactly r non-zero singular values in the SVD of A. In addition, the 2-norm of A is equal to σ1.
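These properties can be checked numerically. The sketch below uses NumPy's `np.linalg.svd` on a hypothetical rank-2 matrix (chosen purely for illustration):

```python
import numpy as np

# A hypothetical 4x3 matrix of rank 2, chosen for illustration.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

# full_matrices=True gives U (m x m) and Vt (n x n); s holds the
# p = min(m, n) singular values, already sorted largest to smallest.
U, s, Vt = np.linalg.svd(A, full_matrices=True)

print(s)                                        # descending order
print(np.linalg.matrix_rank(A))                 # = number of non-zero singular values
print(np.isclose(np.linalg.norm(A, 2), s[0]))   # the 2-norm equals sigma_1
```

Note that NumPy returns V already transposed (as `Vt`), so the rows of `Vt` are the right singular vectors.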

The columns of V -- designated v1, v2, ..., vn -- are called the right singular vectors of A. Similarly, the columns of U -- designated u1, u2, ..., um -- are called the left singular vectors of A. It turns out that the vectors vr+1, vr+2, ..., vn form a basis for the null space of A, and the vectors u1, u2, ..., ur form a basis for the range of A.
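These two basis facts can be verified directly. The sketch below reuses the illustrative rank-2 matrix from above (an assumption for this example, not anything special):

```python
import numpy as np

# The same illustrative 4x3 matrix of rank 2.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))      # numerical rank

# Rows r..n-1 of Vt (i.e. v_{r+1}, ..., v_n) span the null space of A:
null_basis = Vt[r:].T
print(np.allclose(A @ null_basis, 0))

# Columns u_1, ..., u_r of U span the range of A: any vector Ax is
# exactly reproduced by its projection onto those columns.
x = np.array([1.0, -2.0, 0.5])
y = A @ x
coeffs = U[:, :r].T @ y
print(np.allclose(U[:, :r] @ coeffs, y))
```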

The SVD has the following geometric interpretation. Recall that an orthogonal matrix is either a rotation matrix or a reflection matrix. Also, a diagonal matrix serves as a non-uniform scale and -- if the matrix is non-square -- a change in the number of dimensions. Now, if we multiply A from the right by a column vector x, by the SVD theorem we have that

Ax = UΣVTx = U(Σ(VTx)).
This means that multiplication of a vector by any1 matrix A is equivalent to the following sequence of transformations on the vector:
  1. Rotation or reflection around the origin
  2. Non-uniform scale around the origin and possible change in the number of dimensions
  3. Rotation or reflection around the origin (not necessarily the same as in step 1)
This also means that the unit sphere in n dimensions will be transformed by A into an ellipsoid in m (or fewer) dimensions. In this sense, the left singular vectors represent the axes of the ellipsoid (major and minor axes in 2D) in the range of A. The right singular vectors represent the unit vectors which map to the left singular vectors.
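The three-step sequence above can be traced numerically: applying VT, then Σ, then U to a vector reproduces Ax exactly. The matrix here is an arbitrary random example:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))    # an arbitrary 2x3 example matrix
U, s, Vt = np.linalg.svd(A)

x = rng.standard_normal(3)

# Step 1: rotate/reflect x with V^T (stays in 3 dimensions).
step1 = Vt @ x
# Step 2: scale by the singular values and drop from 3 to 2 dimensions.
Sigma = np.zeros((2, 3))
np.fill_diagonal(Sigma, s)
step2 = Sigma @ step1
# Step 3: rotate/reflect the result with U (now in 2 dimensions).
step3 = U @ step2

print(np.allclose(step3, A @ x))   # the three steps reproduce Ax
```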

The singular value decomposition is not unique. One source of ambiguity lies in the fact that a matched pair of singular vectors ui and vi may both be negated without changing A. A second source arises from the fact that the axes of the ellipsoid are undefined if the ellipsoid is in fact a sphere, which happens when singular values repeat. To see this, think of the SVD for the identity matrix.
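The identity-matrix case is easy to demonstrate: for any orthogonal Q, taking U = Q, Σ = I, and V = Q gives a valid SVD of I, so there are infinitely many. The particular Q below comes from a QR factorization of a random matrix (an arbitrary choice for this sketch):

```python
import numpy as np

n = 3
I = np.eye(n)

# Build an arbitrary orthogonal Q via QR factorization of a random matrix.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

# U = Q, Sigma = I, V = Q is a valid SVD of the identity: Q @ I @ Q.T = I.
print(np.allclose(Q @ I @ Q.T, I))
```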

A useful application of the SVD is to define a matrix called the pseudoinverse of A. It can also be used to find a least-squares solution to a homogeneous system of linear equations, i.e. a unit vector x minimizing ‖Ax‖.
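Both applications follow directly from the decomposition: the pseudoinverse inverts the non-zero singular values, and the least-squares solution of Ax = 0 subject to ‖x‖ = 1 is the right singular vector for the smallest singular value. A sketch, again using the illustrative rank-2 matrix from earlier:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

U, s, Vt = np.linalg.svd(A)

# Pseudoinverse: invert the non-zero singular values, zero the rest,
# and transpose the shape of Sigma.
s_inv = np.array([1.0 / v if v > 1e-10 else 0.0 for v in s])
Sigma_pinv = np.zeros((A.shape[1], A.shape[0]))
np.fill_diagonal(Sigma_pinv, s_inv)
A_pinv = Vt.T @ Sigma_pinv @ U.T
print(np.allclose(A_pinv, np.linalg.pinv(A)))   # matches NumPy's pinv

# Least-squares solution of Ax = 0 with ||x|| = 1: the last right
# singular vector (the one paired with the smallest singular value).
x = Vt[-1]
print(np.linalg.norm(A @ x))    # minimal residual among all unit vectors
```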

1 With compatible dimensions, of course.
