The pseudoinverse of a matrix can be computed using its singular value decomposition. Using the SVD, an m×n matrix A can be expressed as

A = UΣVᵀ

where U is an m×m orthogonal matrix, Σ is an m×n diagonal matrix containing the singular values of A, and V is an n×n orthogonal matrix.
Therefore, using the properties of orthogonal matrices and the matrix inverse, the inverse of A is given by:

A⁻¹ = (Vᵀ)⁻¹Σ⁻¹U⁻¹ = VΣ⁻¹Uᵀ
Of course, the formula above only works if Σ has an inverse. Σ may fail to have an inverse if it is singular or non-square.
To deal with this situation, we define a new n×m matrix called Σ† by the equation

Σ† = Diagₙ×ₘ(1/σ₁, 1/σ₂, ..., 1/σᵣ, 0, ..., 0)

where r is the rank of A. In other words, Σ† is an n×m diagonal matrix which has the reciprocals of the non-zero singular values of A along its diagonal.
We then define the pseudoinverse of A as the n×m matrix A† which is given by the equation

A† = VΣ†Uᵀ.
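If you want to see this construction in code, here is a minimal NumPy sketch. The function name pinv_via_svd and the tolerance used to decide which singular values count as non-zero are my own choices, not part of the writeup above.

import numpy as np

def pinv_via_svd(A, tol=1e-12):
    # A = U Σ Vᵀ; NumPy returns the singular values s as a 1-D array.
    U, s, Vt = np.linalg.svd(A)
    m, n = A.shape
    # Build the n×m matrix Σ†: reciprocals of the non-zero singular
    # values (here: those above tol) on the diagonal, zeros elsewhere.
    r = int(np.sum(s > tol))                 # numerical rank of A
    Sigma_dagger = np.zeros((n, m))
    Sigma_dagger[:r, :r] = np.diag(1.0 / s[:r])
    # A† = V Σ† Uᵀ
    return Vt.T @ Sigma_dagger @ U.T

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
print(np.allclose(pinv_via_svd(A), np.linalg.pinv(A)))   # True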
It is easy to show that this definition satisfies the axioms given in the previous writeup. To see this, first note that

Σ†Σ = Diagₙ×ₙ(1, 1, ..., 1, 0, ..., 0)

and

ΣΣ† = Diagₘ×ₘ(1, 1, ..., 1, 0, ..., 0)

where both matrices contain r 1's along their diagonals. Then use the relevant definitions.
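Assuming the axioms in question are the four Moore-Penrose conditions, the check can also be done numerically. The sketch below builds Σ and Σ† explicitly for a random rank-deficient matrix (my own test data, not part of the writeup) and prints both the diagonal pattern above and the four conditions.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5×4 with rank 3

U, s, Vt = np.linalg.svd(A)
m, n = A.shape
Sigma = np.zeros((m, n))
Sigma[:len(s), :len(s)] = np.diag(s)
r = int(np.sum(s > 1e-12))                  # rank of A
Sigma_dagger = np.zeros((n, m))
Sigma_dagger[:r, :r] = np.diag(1.0 / s[:r])

print(np.round(Sigma_dagger @ Sigma, 10))   # n×n: r 1's on the diagonal, then 0's
print(np.round(Sigma @ Sigma_dagger, 10))   # m×m: same pattern

A_dagger = Vt.T @ Sigma_dagger @ U.T
print(np.allclose(A @ A_dagger @ A, A))                # A A† A  = A
print(np.allclose(A_dagger @ A @ A_dagger, A_dagger))  # A† A A† = A†
print(np.allclose((A @ A_dagger).T, A @ A_dagger))     # A A† is symmetric
print(np.allclose((A_dagger @ A).T, A_dagger @ A))     # A† A is symmetric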
The pseudoinverse of a square, non-singular matrix is equal to (surprise!) its inverse.
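A quick numerical check of both this fact and the inverse formula above, sketched with a random matrix (which is non-singular with probability 1):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
print(np.allclose(np.linalg.pinv(A), np.linalg.inv(A)))              # True

# With every σᵢ non-zero, Σ† = Σ⁻¹, so A† = VΣ⁻¹Uᵀ = A⁻¹.
U, s, Vt = np.linalg.svd(A)
print(np.allclose(Vt.T @ np.diag(1.0 / s) @ U.T, np.linalg.inv(A)))  # True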
You can compute the minimum-norm least-squares solution to the system of linear equations Ax = b using the pseudoinverse. In other words, the column vector x with the shortest length which minimizes the expression

|| Ax - b ||

is denoted x* and is given by

x* = A†b.
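As a last sketch, here is the minimum-norm property in NumPy for an underdetermined system (the sizes and random data are my own). np.linalg.lstsq returns the same minimum-norm solution, and perturbing x* along a null-space direction of A gives another solution of Ax = b that is strictly longer.

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))     # underdetermined: Ax = b has many solutions
b = rng.standard_normal(3)

x_star = np.linalg.pinv(A) @ b      # x* = A†b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_star, x_lstsq))                        # True

# Adding a null-space direction of A gives another solution of Ax = b,
# but a longer one: x* is the shortest.
null_dir = np.linalg.svd(A)[2][-1]  # last row of Vᵀ lies in null(A)
x_other = x_star + null_dir
print(np.allclose(A @ x_other, b))                         # True
print(np.linalg.norm(x_star) < np.linalg.norm(x_other))    # True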