Given a matrix A, calculate Aⁿ.

There are two ways to go about this problem: repeated matrix multiplication, or the method I will outline here. You will have to decide which is faster for your case; this one should win for small symmetric matrices.

This method will work for real symmetric matrices; complex and non-symmetric matrices may not work.
From Brontosaurus: "It will work for Hermitian matrices, i.e. the conjugate of the transpose is equal to the matrix".

Say we have a matrix A.

Calculate the eigenvalues and hence the eigenvectors of the matrix A.

Normalize (or normalise) the eigenvectors.
Basically, make all the vectors' lengths 1 by dividing each by a scalar, where for a vector x, length = √(x·x).
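In case the normalization step is unclear, here is a minimal numpy sketch (the vector chosen is just an example; it is the first eigenvector from the worked example further down):

```python
import numpy as np

# Example vector; its length is sqrt(x . x), not x . x itself.
x = np.array([1.0, -1.0])
length = np.sqrt(x.dot(x))   # sqrt(2) for this x
x_unit = x / length          # x_unit now has length 1
print(x_unit.dot(x_unit))    # dot of a unit vector with itself is 1
```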

Call the eigenvectors e1, e2, ..., en, and the eigenvalues λ1, λ2, ..., λn.

From the eigenvectors create a matrix E such that:

    (  .   .  ..  . )
E = ( e1  e2  .. en )
    (  .   .  ..  . )
Calculate another matrix, E⁻¹. If A is real and symmetric and the eigenvectors are normalized, E is orthogonal and E⁻¹ = Eᵀ; otherwise you will have to work out E⁻¹ the long way.

Create another matrix D such that:
    ( λ1   0  .. )
D = (  0  λ2  .. )
    ( ..  ..  λn )
Now, A = EDE⁻¹ (I won't go into the proof).
Now, say we want to calculate A².

A² = E D² E⁻¹.

Since D is diagonal, D² is just each diagonal entry squared; all the hard work is multiplying D² by E and then by E⁻¹.

N.B. The reason the middle terms collapse is that A² = (EDE⁻¹)(EDE⁻¹) = ED(E⁻¹E)DE⁻¹ = ED²E⁻¹. You cannot reorder the factors to cancel the remaining E and E⁻¹, since matrix multiplication is not commutative, i.e. AB ≠ BA (well, not always).
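The whole recipe above can be sketched in a few lines of numpy (a sketch, not a definitive implementation; `matrix_power_sym` is my own name, and `np.linalg.eigh` is doing the eigenvalue/eigenvector steps for us):

```python
import numpy as np

def matrix_power_sym(A, n):
    # For a real symmetric A, eigh returns the eigenvalues (D's diagonal)
    # and a matrix whose columns are orthonormal eigenvectors (E),
    # so E^-1 = E^T and A^n = E D^n E^T.
    eigvals, E = np.linalg.eigh(A)
    Dn = np.diag(eigvals ** n)   # a diagonal matrix to the n is element-wise
    return E @ Dn @ E.T

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
print(matrix_power_sym(A, 2))    # same result as A @ A
```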

This may be better explained by an example, so here goes...

We have a nice symmetric matrix A (makes the maths easier).
    ( 1 2 )
A = ( 2 1 )

If A has eigenvectors, then there exists a vector x and an eigenvalue λ such that Ax = λx.
Re-arranging:

Ax - λ x = 0
( A - λ I )x = 0 (where I is the identity matrix)

If the above is true then the determinant of ( A - λ I ) = 0.
| (1-λ)   2    |
|   2   (1-λ)  | = 0

(1-λ)² - (2 × 2) = 0
(1-λ)² = 4
(1-λ)  = ±2

Giving eigenvalues,

λ1 = -1
λ2 =  3
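If you want a machine check at this point (assuming numpy is available), `np.linalg.eigvalsh` handles real symmetric matrices and returns the eigenvalues in ascending order:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
print(np.linalg.eigvalsh(A))   # ascending order: λ1 = -1, λ2 = 3
```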

Now, to get the eigenvectors,

Substituting λ1 into ( A - λ I )

( 2  2 ) (e11)   ( 0 )
( 2  2 ) (e12) = ( 0 )

Giving, e1 = (  1 )
             ( -1 )

Substituting λ2 into ( A - λ I )

( -2  2 ) (e21)   ( 0 )
(  2 -2 ) (e22) = ( 0 )

Giving, e2 = ( 1 )
             ( 1 )

Phew, now, I'll only do the short check on these eigenvectors.

Cos Θ = e1·e2 / ( |e1| |e2| )

e1 · e2 = (1×1) + (-1×1) = 0, so Cos Θ = 0
Θ = 90° as expected.

Now, to normalize (or normalise)

For e1, length = √2,
so now, e1 = (  1 ) / √2
             ( -1 )

For e2, length = √2,
so now, e2 = ( 1 ) / √2
             ( 1 )

Now, to construct our matrices E, E⁻¹ and D:

     (  1/√2  1/√2 )
E  = ( -1/√2  1/√2 )

           (  1/√2  -1/√2 )
E⁻¹ = Eᵀ = (  1/√2   1/√2 )

     ( -1  0 )
D  = (  0  3 )

Phew, all done, now just a check.

A = E D E⁻¹

    (  1/√2  1/√2 ) ( -1  0 )
A = ( -1/√2  1/√2 ) (  0  3 ) E⁻¹

    ( -1/√2  3/√2 ) ( 1/√2  -1/√2 )
A = (  1/√2  3/√2 ) ( 1/√2   1/√2 )

    ( (-1/2 + 3/2)  ( 1/2 + 3/2) )   ( 1 2 )
A = ( ( 1/2 + 3/2)  (-1/2 + 3/2) ) = ( 2 1 )   As expected.

Hurrah! Now, use this method for big values of n!
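As a sanity check for big n (assuming numpy), here is the worked 2×2 example above compared against repeated multiplication for a few powers:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
eigvals, E = np.linalg.eigh(A)   # D's diagonal and E's columns

for n in (2, 10, 50):
    # E D^n E^-1, using E^-1 = E^T since the eigenvectors are orthonormal
    An = E @ np.diag(eigvals ** n) @ E.T
    assert np.allclose(An, np.linalg.matrix_power(A, n))
print("all powers agree")
```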
Hope you find this method useful, and sorry about the notation; HTML doesn't make it easy for me.
I did this (correctly) on paper beforehand, so please inform me of any errors I made typing it up.

Thanks to Brontosaurus, who pointed out: "2 vectors are orthonormal if their scalar product is 0 and they both have norm 1", "if there is not an orthonormal basis of eigenvectors then you will have to invert E", and "Your method is sure to work for any real symmetric (or complex Hermitian) matrix; in other cases you may have to invert E".
