MATRICES AND VECTOR SPACES
It may be shown that the rank of a general M × N matrix A is equal to the size of
the largest square submatrix of A whose determinant is non-zero. Therefore, if a
matrix A has an r × r submatrix S with |S| ≠ 0, but no (r + 1) × (r + 1) submatrix
with non-zero determinant, then the rank of the matrix is r. From either definition
it is clear that the rank of A is less than or equal to the smaller of M and N.
Determine the rank of the matrix
$$A = \begin{pmatrix} 1 & 1 & 0 & -2 \\ 2 & 0 & 2 & 2 \\ 4 & 1 & 3 & 1 \end{pmatrix}.$$
The largest possible square submatrices of A must be of dimension 3 × 3. Clearly, A
possesses four such submatrices, the determinants of which are given by
$$\begin{vmatrix} 1 & 1 & 0 \\ 2 & 0 & 2 \\ 4 & 1 & 3 \end{vmatrix} = 0, \qquad
\begin{vmatrix} 1 & 1 & -2 \\ 2 & 0 & 2 \\ 4 & 1 & 1 \end{vmatrix} = 0,$$
$$\begin{vmatrix} 1 & 0 & -2 \\ 2 & 2 & 2 \\ 4 & 3 & 1 \end{vmatrix} = 0, \qquad
\begin{vmatrix} 1 & 0 & -2 \\ 0 & 2 & 2 \\ 1 & 3 & 1 \end{vmatrix} = 0.$$
(In each case the determinant may be evaluated as described in subsection 8.9.1.)
The next largest square submatrices of A are of dimension 2 × 2. Consider, for example,
the 2 × 2 submatrix formed by ignoring the third row and the third and fourth columns
of A; this has determinant
$$\begin{vmatrix} 1 & 1 \\ 2 & 0 \end{vmatrix} = 1 \times 0 - 2 \times 1 = -2.$$
Since its determinant is non-zero, A is of rank 2 and we need not consider any other 2 × 2
submatrix.

In the special case in which the matrix A is a square N × N matrix, by comparing
either of the above definitions of rank with our discussion of determinants in
section 8.9, we see that |A| = 0 unless the rank of A is N. In other words, A is
singular unless R(A) = N.
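As a quick numerical cross-check of the worked example (a NumPy sketch, not part of the original text), the submatrix determinants and the rank can be computed directly:

```python
import numpy as np

# The 3x4 matrix from the worked example above.
A = np.array([[1, 1, 0, -2],
              [2, 0, 2, 2],
              [4, 1, 3, 1]], dtype=float)

# Each of the four 3x3 submatrices (delete one column at a time) is singular.
for j in range(4):
    sub = np.delete(A, j, axis=1)
    assert abs(np.linalg.det(sub)) < 1e-9

# But the 2x2 submatrix formed from the first two rows and columns has
# non-zero determinant, so the rank of A is 2.
assert np.isclose(np.linalg.det(A[:2, :2]), -2.0)
print(np.linalg.matrix_rank(A))  # prints 2
```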
8.12 Special types of square matrix
Matrices that are square, i.e. N × N, are very common in physical applications.
We now consider some special forms of square matrix that are of particular
importance.
8.12.1 Diagonal matrices
The unit matrix, which we have already encountered, is an example of a diagonal
matrix. Such matrices are characterised by having non-zero elements only on the
leading diagonal, i.e. only elements $A_{ij}$ with $i = j$ may be non-zero. For example,
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -3 \end{pmatrix}$$
is a 3 × 3 diagonal matrix. Such a matrix is often denoted by A = diag (1, 2, −3).
By performing a Laplace expansion, it is easily shown that the determinant of an
N × N diagonal matrix is equal to the product of the diagonal elements. Thus, if
the matrix has the form A = diag(A11 , A22 , . . . , ANN ) then
$$|A| = A_{11} A_{22} \cdots A_{NN}. \qquad (8.63)$$
Moreover, it is also straightforward to show that the inverse of A is also a
diagonal matrix given by
$$A^{-1} = \operatorname{diag}\!\left(\frac{1}{A_{11}}, \frac{1}{A_{22}}, \ldots, \frac{1}{A_{NN}}\right).$$
Finally, we note that, if two matrices A and B are both diagonal then they have
the useful property that their product is commutative:
AB = BA.
This is not true for matrices in general.
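The determinant, inverse and commutativity properties of diagonal matrices can be checked numerically. The following NumPy sketch is an illustrative addition, not part of the original text:

```python
import numpy as np

A = np.diag([1.0, 2.0, -3.0])
B = np.diag([4.0, 5.0, 6.0])

# |A| equals the product of the diagonal elements (8.63): 1 * 2 * (-3) = -6.
assert np.isclose(np.linalg.det(A), -6.0)

# The inverse is diag(1/A11, 1/A22, 1/A33).
assert np.allclose(np.linalg.inv(A), np.diag([1.0, 0.5, -1.0 / 3.0]))

# Two diagonal matrices commute: AB = BA.
assert np.allclose(A @ B, B @ A)
```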
8.12.2 Lower and upper triangular matrices
A square matrix A is called lower triangular if all the elements above the principal
diagonal are zero. For example, the general form for a 3 × 3 lower triangular
matrix is
$$A = \begin{pmatrix} A_{11} & 0 & 0 \\ A_{21} & A_{22} & 0 \\ A_{31} & A_{32} & A_{33} \end{pmatrix},$$
where the elements Aij may be zero or non-zero. Similarly an upper triangular
square matrix is one for which all the elements below the principal diagonal are
zero. The general 3 × 3 form is thus
$$A = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ 0 & A_{22} & A_{23} \\ 0 & 0 & A_{33} \end{pmatrix}.$$
By performing a Laplace expansion, it is straightforward to show that, in the
general N × N case, the determinant of an upper or lower triangular matrix is
equal to the product of its diagonal elements,
$$|A| = A_{11} A_{22} \cdots A_{NN}. \qquad (8.64)$$
Clearly result (8.63) for diagonal matrices is a special case of this result. Moreover,
it may be shown that the inverse of a non-singular lower (upper) triangular matrix
is also lower (upper) triangular.
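A short NumPy check of result (8.64) and of the triangularity of the inverse (an added illustration, not from the original text):

```python
import numpy as np

# A lower triangular matrix with non-zero diagonal entries (hence non-singular).
L = np.array([[2.0, 0.0, 0.0],
              [5.0, 3.0, 0.0],
              [1.0, -4.0, 7.0]])

# Determinant is the product of the diagonal elements (8.64): 2 * 3 * 7 = 42.
assert np.isclose(np.linalg.det(L), 42.0)

# The inverse of a non-singular lower triangular matrix is again lower
# triangular: every entry strictly above the diagonal vanishes.
assert np.allclose(np.triu(np.linalg.inv(L), k=1), 0.0)
```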
8.12.3 Symmetric and antisymmetric matrices
A square matrix A of order N with the property A = AT is said to be symmetric.
Similarly a matrix for which A = −AT is said to be anti- or skew-symmetric
and its diagonal elements $A_{11}, A_{22}, \ldots, A_{NN}$ are necessarily zero. Moreover, if A is
(anti-)symmetric then so too is its inverse A−1 . This is easily proved by noting
that if A = ±AT then
(A−1 )T = (AT )−1 = ±A−1 .
Any N × N matrix A can be written as the sum of a symmetric and an
antisymmetric matrix, since we may write
$$A = \tfrac{1}{2}(A + A^T) + \tfrac{1}{2}(A - A^T) = B + C,$$
where clearly B = BT and C = −CT . The matrix B is therefore called the
symmetric part of A, and C is the antisymmetric part.
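The decomposition into symmetric and antisymmetric parts is easy to verify numerically; this NumPy sketch (an added illustration, not part of the original text) does so for a random 4 × 4 matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Symmetric and antisymmetric parts: A = (A + A^T)/2 + (A - A^T)/2 = B + C.
B = 0.5 * (A + A.T)
C = 0.5 * (A - A.T)

assert np.allclose(B, B.T)         # B is symmetric
assert np.allclose(C, -C.T)        # C is antisymmetric
assert np.allclose(B + C, A)       # together they reconstruct A
assert np.allclose(np.diag(C), 0)  # diagonal of the antisymmetric part vanishes
```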
If A is an N × N antisymmetric matrix, show that |A| = 0 if N is odd.
If A is antisymmetric then AT = −A. Using the properties of determinants (8.49) and
(8.51), we have
$$|A| = |A^T| = |-A| = (-1)^N |A|.$$
Thus, if N is odd then |A| = −|A|, which implies that |A| = 0.

8.12.4 Orthogonal matrices
A non-singular matrix with the property that its transpose is also its inverse,
AT = A−1 ,
(8.65)
is called an orthogonal matrix. It follows immediately that the inverse of an
orthogonal matrix is also orthogonal, since
(A−1 )T = (AT )−1 = (A−1 )−1 .
Moreover, since for an orthogonal matrix AT A = I, we have
|AT A| = |AT ||A| = |A|2 = |I| = 1.
Thus the determinant of an orthogonal matrix must be |A| = ±1.
An orthogonal matrix represents, in a particular basis, a linear operator that
leaves the norms (lengths) of real vectors unchanged, as we will now show.
Suppose that y = A x is represented in some coordinate system by the matrix
equation y = Ax; then ⟨y|y⟩ is given in this coordinate system by
$$y^T y = x^T A^T A x = x^T x.$$
Hence ⟨y|y⟩ = ⟨x|x⟩, showing that the action of a linear operator represented by
an orthogonal matrix does not change the norm of a real vector.
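As an added illustration (not in the original text), a 2 × 2 rotation matrix provides a concrete orthogonal matrix on which these properties can be checked:

```python
import numpy as np

# A 2x2 rotation matrix is orthogonal: A^T = A^{-1}.
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(A.T @ A, np.eye(2))         # A^T A = I
assert np.isclose(abs(np.linalg.det(A)), 1.0)  # |A| = +/- 1

# The norm of a real vector is unchanged: y^T y = x^T x.
x = np.array([3.0, -1.0])
y = A @ x
assert np.isclose(y @ y, x @ x)
```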
8.12.5 Hermitian and anti-Hermitian matrices
An Hermitian matrix is one that satisfies A = A† , where A† is the Hermitian conjugate discussed in section 8.7. Similarly if A† = −A, then A is called anti-Hermitian.
A real (anti-)symmetric matrix is a special case of an (anti-)Hermitian matrix, in
which all the elements of the matrix are real. Also, if A is an (anti-)Hermitian
matrix then so too is its inverse A−1 , since
(A−1 )† = (A† )−1 = ±A−1 .
Any N × N matrix A can be written as the sum of an Hermitian matrix and
an anti-Hermitian matrix, since
$$A = \tfrac{1}{2}(A + A^\dagger) + \tfrac{1}{2}(A - A^\dagger) = B + C,$$
where clearly B = B† and C = −C† . The matrix B is called the Hermitian part of
A, and C is called the anti-Hermitian part.
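As with the real case, the Hermitian decomposition can be checked numerically; this NumPy sketch (an added illustration, not from the original text) uses a random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Hermitian and anti-Hermitian parts: A = (A + A^dag)/2 + (A - A^dag)/2 = B + C.
B = 0.5 * (A + A.conj().T)
C = 0.5 * (A - A.conj().T)

assert np.allclose(B, B.conj().T)   # B = B^dag  (Hermitian part)
assert np.allclose(C, -C.conj().T)  # C = -C^dag (anti-Hermitian part)
assert np.allclose(B + C, A)        # together they reconstruct A
```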
8.12.6 Unitary matrices
A unitary matrix A is defined as one for which
A† = A−1 .
†
(8.66)
T
Clearly, if A is real then A = A , showing that a real orthogonal matrix is a
special case of a unitary matrix, one in which all the elements are real. We note
that the inverse A−1 of a unitary is also unitary, since
(A−1 )† = (A† )−1 = (A−1 )−1 .
Moreover, since for a unitary matrix A† A = I, we have
|A† A| = |A† ||A| = |A|∗ |A| = |I| = 1.
Thus the determinant of a unitary matrix has unit modulus.
A unitary matrix represents, in a particular basis, a linear operator that leaves
the norms (lengths) of complex vectors unchanged. If y = A x is represented in
some coordinate system by the matrix equation y = Ax then ⟨y|y⟩ is given in this
coordinate system by
y† y = x† A† Ax = x† x.
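As a final added illustration (not part of the original text), the defining property (8.66), the unit-modulus determinant and the preservation of complex norms can all be checked on a small concrete unitary matrix:

```python
import numpy as np

# A simple 2x2 unitary matrix (a complex "rotation" mixing phases).
A = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])

assert np.allclose(A.conj().T @ A, np.eye(2))  # A^dag A = I
assert np.isclose(abs(np.linalg.det(A)), 1.0)  # determinant has unit modulus

# The norm of a complex vector is unchanged: y^dag y = x^dag x.
x = np.array([1.0 + 2.0j, -0.5j])
y = A @ x
assert np.isclose(np.vdot(y, y), np.vdot(x, x))
```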