The transpose of a matrix is the matrix found by switching rows for columns: more formally, if $A = [a_{ij}]$ is $m \times n$, then $A^T = [a_{ji}]$ is $n \times m$ (that is, the entry in the $i$th row, $j$th column of $A$ becomes the entry in the $j$th row, $i$th column of $A^T$).
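For example, with an arbitrary $2 \times 3$ matrix (chosen purely for illustration):
$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}, \qquad A^T = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}.$$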
The following properties hold for the transpose:
(i) $(A + B)^T = A^T + B^T$ (the transpose of the sum is the sum of the transposes)
(ii) $(AB)^T = B^T A^T$
(iii) $(A^T)^T = A$
The third property is trivially true, and the first is fairly obvious when you consider the elements:
\begin{align*}
(A + B)^T &= \left( [a_{ij}]_{m \times n} + [b_{ij}]_{m \times n} \right)^T \\
&= \left( [a_{ij} + b_{ij}]_{m \times n} \right)^T \\
&= [a_{ji} + b_{ji}]_{n \times m} \\
&= [a_{ji}]_{n \times m} + [b_{ji}]_{n \times m} \\
&= A^T + B^T
\end{align*}
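For instance, with two arbitrary $2 \times 2$ matrices:
$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}^T + \begin{pmatrix} 0 & 5 \\ 6 & 7 \end{pmatrix}^T = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix} + \begin{pmatrix} 0 & 6 \\ 5 & 7 \end{pmatrix} = \begin{pmatrix} 1 & 9 \\ 7 & 11 \end{pmatrix} = \left( \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} 0 & 5 \\ 6 & 7 \end{pmatrix} \right)^T.$$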
However, the second requires some more thought. Consider the $i,j$th element of $(AB)^T$, which equals the $j,i$th element of $AB$ (by definition of the transpose). This is found by taking the scalar (dot) product of the $j$th row of $A$ and the $i$th column of $B$ (the usual process of matrix multiplication). However, this is identical to the scalar product of the $j$th column of $A^T$ and the $i$th row of $B^T$, namely $(B^T A^T)_{ij}$. So property (ii) holds. Note that the order matters: it may not even be possible to define the product $A^T B^T$, but given that $AB$ can be calculated (i.e. $A$ is $m \times n$ and $B$ is $n \times p$, giving an $m \times p$ matrix), $B^T A^T$ can be ($p \times n$ multiplying $n \times m$ to give $p \times m$, as expected).
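As a quick concrete check (matrices chosen arbitrarily for illustration):
$$A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} 3 & 0 \\ 4 & 5 \end{pmatrix}, \quad AB = \begin{pmatrix} 11 & 10 \\ 4 & 5 \end{pmatrix}, \quad (AB)^T = \begin{pmatrix} 11 & 4 \\ 10 & 5 \end{pmatrix} = \begin{pmatrix} 3 & 4 \\ 0 & 5 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} = B^T A^T,$$
whereas $A^T B^T = \begin{pmatrix} 3 & 4 \\ 6 & 13 \end{pmatrix} \neq (AB)^T$, confirming that the order matters.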
A matrix is described as symmetric if $A^T = A$: switching the rows and columns has no effect. Similarly, if $A^T = -A$ (exchanging rows for columns effectively switches the signs) then $A$ is described as skew-symmetric. If $A$ is a square matrix (i.e. $m = n$, the same number of rows as columns), then $A + A^T$ is always symmetric:
\begin{align*}
(A + A^T)^T &= A^T + (A^T)^T && \text{by property (i)} \\
&= A^T + A && \text{by property (iii)} \\
&= A + A^T,
\end{align*}
so the definition of symmetric is met,
and by similar logic, $A - A^T$ can be shown to be skew-symmetric. Thus, using the properties of the transpose, every square matrix can be expressed uniquely as the sum of a symmetric matrix and a skew-symmetric matrix, namely $A = \tfrac{1}{2}(A + A^T) + \tfrac{1}{2}(A - A^T)$.
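For example, taking an arbitrary $2 \times 2$ matrix:
$$A = \begin{pmatrix} 1 & 2 \\ 4 & 3 \end{pmatrix}, \qquad \tfrac{1}{2}(A + A^T) = \begin{pmatrix} 1 & 3 \\ 3 & 3 \end{pmatrix}, \qquad \tfrac{1}{2}(A - A^T) = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix},$$
where the first part is symmetric, the second is skew-symmetric, and their sum recovers $A$.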
Other useful things to know: