4 Determinants and Eigenvalues

Luz M. DeAlba
Drake University

4.1 Determinants
4.2 Determinants: Advanced Results
4.3 Eigenvalues and Eigenvectors
References

4.1 Determinants
Definitions:

The determinant, det A, of a matrix A = [a_ij] ∈ F^{n×n} is an element in F, defined inductively:

- det[a] = a.
- For i, j ∈ {1, 2, ..., n}, the ij-th minor of A corresponding to a_ij is defined by m_ij = det A({i}, {j}).
- The ij-th cofactor of a_ij is c_ij = (−1)^{i+j} m_ij.
- $\det A = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} m_{ij} = \sum_{j=1}^{n} a_{ij} c_{ij}$, for any i ∈ {1, 2, ..., n}.

This method of computing the determinant of a matrix is called Laplace expansion of the determinant by minors along the i-th row.
The determinant of a linear operator T : V → V on a finite dimensional vector space V is defined as det(T) = det([T]_B), where B is a basis for V.
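The inductive definition translates directly into a short recursive procedure. The following is a minimal sketch (Python is assumed here; the helper name `minor` is ours, not from the text) that expands along the first row. It runs in factorial time and is meant only to mirror the definition, not for practical use.

```python
# Minimal sketch of Laplace expansion along the first row (i = 1, written 0-based).
# Exponential-time; intended only to mirror the inductive definition.

def minor(A, i, j):
    """Submatrix A({i},{j}): delete row i and column j."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    n = len(A)
    if n == 1:                      # det [a] = a
        return A[0][0]
    # det A = sum_j (-1)^(i+j) a_ij m_ij along the first row
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

print(det([[3, -2, 4], [2, 5, -6], [-3, 1, 5]]))  # 145, as in Example 1 below
```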
Facts:

All matrices are assumed to be in F^{n×n}, unless otherwise stated. All the following facts, except those with a specific reference, can be found in [Lay03, pp. 185–213] or [Goo03, pp. 167–193].

1. $\det\begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix} = a_{11}a_{22} - a_{12}a_{21}.$
2. $\det A = \det\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\end{bmatrix} = a_{11}a_{22}a_{33} + a_{21}a_{13}a_{32} + a_{31}a_{12}a_{23} - a_{31}a_{13}a_{22} - a_{21}a_{12}a_{33} - a_{11}a_{32}a_{23}.$
3. The determinant is independent of the row i used to evaluate it.
4. (Expansion of the determinant by minors along the j-th column) Let j ∈ {1, 2, ..., n}. Then $\det A = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} m_{ij} = \sum_{i=1}^{n} a_{ij} c_{ij}$.
5. det I_n = 1.
6. If A is a triangular matrix, then det A = a_{11} a_{22} ⋯ a_{nn}.
7. If B is a matrix obtained from A by interchanging two rows (or columns), then det B = − det A.
8. If B is a matrix obtained from A by multiplying one row (or column) by a nonzero constant r ,
then det B = r det A.
9. If B is a matrix obtained from A by adding to a row (or column) a multiple of another row (or
column), then det B = det A.
10. If A, B, and C differ only in the r th row (or column), and the r th row (or column) of C is the sum
of the r th rows (or columns) of A and B, then det C = det A + det B.
11. If A is a matrix with a row (or column) of zeros, then det A = 0.
12. If A is a matrix with two identical rows (or columns), then det A = 0.
13. Let B be a row echelon form of A obtained by Gaussian elimination, using k row interchange operations and adding multiples of one row to another (see Algorithm 1 in Section 1.3). Then det A = (−1)^k det B = (−1)^k b_{11} b_{22} ⋯ b_{nn}.
14. det A^T = det A.
15. If A ∈ C^{n×n}, then det A* = $\overline{\det A}$ (the complex conjugate of det A).
16. det AB = det A det B.
17. If c ∈ F, then det(cA) = c^n det A.
18. A is nonsingular, that is, A^{−1} exists, if and only if det A ≠ 0.
19. If A is nonsingular, then det A^{−1} = 1/det A.
20. If S is nonsingular, then det(S^{−1}AS) = det A. (Facts 14–17 and 20 are checked numerically in the sketch following this list.)
21. [HJ85] $\det A = \sum_{\sigma} \operatorname{sgn}\sigma \, a_{1\sigma(1)} a_{2\sigma(2)} \cdots a_{n\sigma(n)}$, where the summation is over the n! permutations σ of the n indices {1, 2, ..., n}. The weight sgn σ is 1 when σ is even and −1 when σ is odd. (See Preliminaries for more information on permutations.)
22. If x, y ∈ F^n, then det(I + xy^T) = 1 + y^T x.
23. [FIS89] Let T be a linear operator on a finite dimensional vector space V. Let B and B′ be bases for V. Then det(T) = det([T]_B) = det([T]_{B′}).
24. [FIS89] Let T be a linear operator on a finite dimensional vector space V. Then T is invertible if and only if det(T) ≠ 0.
25. [FIS89] Let T be an invertible linear operator on a finite dimensional vector space V. Then det(T^{−1}) = 1/det(T).
26. [FIS89] Let T and U be linear operators on a finite dimensional vector space V. Then det(TU) = det(T) · det(U).
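Several of these facts lend themselves to quick numerical sanity checks. A minimal sketch, assuming NumPy is available (the random matrices and tolerances are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
c = 2.5

assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))             # Fact 14
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))              # Fact 16
assert np.isclose(np.linalg.det(c * A), c**4 * np.linalg.det(A))    # Fact 17
S = B  # a random Gaussian matrix is almost surely nonsingular
assert np.isclose(np.linalg.det(np.linalg.inv(S) @ A @ S),
                  np.linalg.det(A))                                 # Fact 20
```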
Examples:

1. Let $A = \begin{bmatrix} 3 & -2 & 4\\ 2 & 5 & -6\\ -3 & 1 & 5\end{bmatrix}$. Expanding the determinant of A along the second column:
$$\det A = 2 \cdot \det\begin{bmatrix} 2 & -6\\ -3 & 5\end{bmatrix} + 5 \cdot \det\begin{bmatrix} 3 & 4\\ -3 & 5\end{bmatrix} - \det\begin{bmatrix} 3 & 4\\ 2 & -6\end{bmatrix} = 2\cdot(-8) + 5\cdot 27 + 26 = 145.$$

2. Let $A = \begin{bmatrix} -1 & 3 & -2 & 4\\ 2 & 5 & 8 & 1\\ 7 & -4 & 0 & -6\\ 0 & 3 & 1 & 5\end{bmatrix}$. Expanding the determinant of A along the third row:
$$\det A = 7 \cdot \det\begin{bmatrix} 3 & -2 & 4\\ 5 & 8 & 1\\ 3 & 1 & 5\end{bmatrix} + 4 \cdot \det\begin{bmatrix} -1 & -2 & 4\\ 2 & 8 & 1\\ 0 & 1 & 5\end{bmatrix} + 6 \cdot \det\begin{bmatrix} -1 & 3 & -2\\ 2 & 5 & 8\\ 0 & 3 & 1\end{bmatrix} = 7\cdot 85 + 4\cdot(-11) + 6\cdot 1 = 557.$$

3. Let T : R² → R² be defined by $T\begin{bmatrix} x_1\\ x_2\end{bmatrix} = \begin{bmatrix} 2x_1 - 3x_2\\ x_1 + 6x_2\end{bmatrix}$. With $B = \left\{\begin{bmatrix}1\\0\end{bmatrix}, \begin{bmatrix}0\\1\end{bmatrix}\right\}$, $\det([T]_B) = \det\begin{bmatrix} 2 & -3\\ 1 & 6\end{bmatrix} = 15$. Now let $B' = \left\{\begin{bmatrix}1\\1\end{bmatrix}, \begin{bmatrix}1\\0\end{bmatrix}\right\}$; then $\det([T]_{B'}) = \det\begin{bmatrix} 7 & 1\\ -8 & 1\end{bmatrix} = 15$.
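The determinants above are easy to confirm with a numerical library; a small NumPy sketch (our own, not from the text):

```python
import numpy as np

A1 = np.array([[3, -2, 4], [2, 5, -6], [-3, 1, 5]])
A2 = np.array([[-1, 3, -2, 4], [2, 5, 8, 1], [7, -4, 0, -6], [0, 3, 1, 5]])
T  = np.array([[2, -3], [1, 6]])   # standard matrix of T in Example 3

print(round(np.linalg.det(A1)))    # 145
print(round(np.linalg.det(A2)))    # 557
print(round(np.linalg.det(T)))     # 15
```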
Applications:

1. (Cramer's Rule) If A ∈ F^{n×n} is nonsingular, then the equation Ax = b, where x, b ∈ F^n, has the unique solution s = [s_1, s_2, ..., s_n]^T, where
$$s_i = \frac{\det A_i}{\det A}$$
and A_i is the matrix obtained from A by replacing the i-th column with b.
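A direct transcription of the rule as code (a NumPy sketch; the helper name `cramer_solve` is ours). In practice one solves Ax = b by LU factorization instead, since this approach evaluates n + 1 determinants:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b for nonsingular A by Cramer's rule."""
    n = len(b)
    d = np.linalg.det(A)
    x = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b              # replace the i-th column with b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[3.0, -2, 4], [2, 5, -6], [-3, 1, 5]])
b = np.array([1.0, 2, 3])
print(np.allclose(cramer_solve(A, b), np.linalg.solve(A, b)))  # True
```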
2. [Mey00, p. 486] (Vandermonde Determinant)
$$\det\begin{bmatrix}
1 & 1 & \cdots & 1\\
x_1 & x_2 & \cdots & x_n\\
x_1^2 & x_2^2 & \cdots & x_n^2\\
\vdots & \vdots & \ddots & \vdots\\
x_1^{n-1} & x_2^{n-1} & \cdots & x_n^{n-1}
\end{bmatrix} = \prod_{1 \le i < j \le n} (x_j - x_i).$$
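A quick numerical check of the formula (NumPy assumed; `np.vander` with `increasing=True` builds the transpose of the displayed matrix, so we transpose it back, although the determinant is the same either way):

```python
import numpy as np
from itertools import combinations

x = np.array([1.0, 2.0, 4.0, 7.0])
n = len(x)
V = np.vander(x, increasing=True).T   # rows are 1, x, x^2, ..., x^(n-1)
lhs = np.linalg.det(V)
rhs = np.prod([x[j] - x[i] for i, j in combinations(range(n), 2)])
print(np.isclose(lhs, rhs))           # True
```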
3. [FB90, pp. 220–235] (Volume) Let a_1, a_2, ..., a_n be linearly independent vectors in R^m. The volume, V, of the n-dimensional solid in R^m defined by $S = \{\sum_{i=1}^{n} t_i a_i : 0 \le t_i \le 1,\ i = 1, 2, \ldots, n\}$ is given by $V = \sqrt{\det(A^T A)}$, where A is the matrix whose i-th column is the vector a_i.
Let m ≥ n and T : R^n → R^m be a linear transformation whose standard matrix representation is the m × n matrix A. Let S be a region in R^n of volume V_S. Then the volume of the image of S under the transformation T is $V_{T(S)} = \sqrt{\det(A^T A)} \cdot V_S$.
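For instance (a NumPy sketch, with a configuration of our own choosing so the answer is obvious):

```python
import numpy as np

# Volume (area) of the parallelepiped spanned by two vectors in R^3.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])            # columns a_1 = e_1, a_2 = 2e_2
V = np.sqrt(np.linalg.det(A.T @ A))
print(V)                              # 2.0: a 1-by-2 rectangle has area 2
```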
4. [Uhl02, pp. 247–248] (Wronskian) Let f_1, f_2, ..., f_n be n − 1 times differentiable functions of the real variable x. The determinant
$$W(f_1, f_2, \ldots, f_n)(x) = \det\begin{bmatrix}
f_1(x) & f_2(x) & \cdots & f_n(x)\\
f_1'(x) & f_2'(x) & \cdots & f_n'(x)\\
\vdots & \vdots & \ddots & \vdots\\
f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x)
\end{bmatrix}$$
is called the Wronskian of f_1, f_2, ..., f_n. If W(f_1, f_2, ..., f_n)(x) ≠ 0 for some x ∈ R, then the functions f_1, f_2, ..., f_n are linearly independent.
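A symbolic sketch, assuming SymPy is available, for the choice f_1 = sin x, f_2 = cos x, f_3 = e^x (our own illustrative functions):

```python
import sympy as sp

x = sp.symbols('x')
fs = [sp.sin(x), sp.cos(x), sp.exp(x)]
n = len(fs)

# Row i holds the i-th derivatives of f_1, ..., f_n.
W = sp.Matrix(n, n, lambda i, j: sp.diff(fs[j], x, i))
w = sp.simplify(W.det())
print(w)   # -2*exp(x): nonzero, so sin, cos, exp are linearly independent
```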
4.2 Determinants: Advanced Results

Definitions:

A principal minor is the determinant of a principal submatrix. (See Section 1.2.)
A leading principal minor is the determinant of a leading principal submatrix.
The sum of all the k × k principal minors of A is denoted S_k(A).
The k-th compound matrix of A ∈ F^{m×n} is the $\binom{m}{k} \times \binom{n}{k}$ matrix C_k(A) whose entries are the k × k minors of A, usually in lexicographical order.
The adjugate of A ∈ F^{n×n} is the matrix adj A = [c_ji] = [c_ij]^T, where c_ij is the ij-th cofactor.
The k-th adjugate of A ∈ F^{n×n} is the $\binom{n}{k} \times \binom{n}{k}$ matrix adj^{(k)} A, whose (j, i) entry is the cofactor, in A, of the (n − k)-th minor of A in the ij-th position of the compound.
Let α ⊆ {1, 2, ..., n} and A ∈ F^{n×n} with A[α] nonsingular. The matrix
$$A/A[\alpha] = A[\alpha^c] - A[\alpha^c, \alpha]\, A[\alpha]^{-1} A[\alpha, \alpha^c]$$
is called the Schur complement of A[α].
Facts:

All matrices are assumed to be in F^{n×n}, unless otherwise stated. All the following facts, except those with a specific reference, can be found in [Lay03, pp. 185–213] or [Goo03, pp. 167–193].

1. A (adj A) = (adj A) A = (det A) I_n.
2. det(adj A) = (det A)^{n−1}.
3. If det A ≠ 0, then adj A is nonsingular, and (adj A)^{−1} = (det A)^{−1} A.
4. [Ait56] (Method of Condensation) Let A = [a_ij] ∈ F^{n×n}, and assume, without loss of generality, that a_11 ≠ 0; otherwise a nonzero element can be brought to the (1, 1) position by interchanging two rows, which changes the sign of the determinant. Multiply all the rows of A except the first by a_11. For i = 2, 3, ..., n, perform the row operations: replace row i with row i − a_{i1} · row 1. Thus
$$a_{11}^{\,n-1}\det A = \det\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n}\\
0 & a_{11}a_{22}-a_{21}a_{12} & a_{11}a_{23}-a_{21}a_{13} & \cdots & a_{11}a_{2n}-a_{21}a_{1n}\\
0 & a_{11}a_{32}-a_{31}a_{12} & a_{11}a_{33}-a_{31}a_{13} & \cdots & a_{11}a_{3n}-a_{31}a_{1n}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & a_{11}a_{n2}-a_{n1}a_{12} & a_{11}a_{n3}-a_{n1}a_{13} & \cdots & a_{11}a_{nn}-a_{n1}a_{1n}
\end{bmatrix}.$$
So,
$$\det A = \frac{1}{a_{11}^{\,n-2}} \cdot \det\begin{bmatrix}
\det\begin{bmatrix}a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix} & \det\begin{bmatrix}a_{11} & a_{13}\\ a_{21} & a_{23}\end{bmatrix} & \cdots & \det\begin{bmatrix}a_{11} & a_{1n}\\ a_{21} & a_{2n}\end{bmatrix}\\
\det\begin{bmatrix}a_{11} & a_{12}\\ a_{31} & a_{32}\end{bmatrix} & \det\begin{bmatrix}a_{11} & a_{13}\\ a_{31} & a_{33}\end{bmatrix} & \cdots & \det\begin{bmatrix}a_{11} & a_{1n}\\ a_{31} & a_{3n}\end{bmatrix}\\
\vdots & \vdots & \ddots & \vdots\\
\det\begin{bmatrix}a_{11} & a_{12}\\ a_{n1} & a_{n2}\end{bmatrix} & \det\begin{bmatrix}a_{11} & a_{13}\\ a_{n1} & a_{n3}\end{bmatrix} & \cdots & \det\begin{bmatrix}a_{11} & a_{1n}\\ a_{n1} & a_{nn}\end{bmatrix}
\end{bmatrix}.$$
(This one-step condensation, together with Facts 1–3 and 10, is exercised in the numerical sketch following Fact 13.)
5. [Ait56] A^{(k)}(adj^{(k)} A) = (adj^{(k)} A)A^{(k)} = (det A)I, where A^{(k)} denotes the k-th compound C_k(A).
6. [Ait56] det A^{(k)} = (det A)^r, where $r = \binom{n-1}{k-1}$.
7. [Ait56] det A^{(n−k)} = det(adj^{(k)} A).
8. [HJ85] If A ∈ F^{n×n}, B ∈ F^{m×m}, then det(A ⊗ B) = (det A)^m (det B)^n. (See Section 10.5 for the definition of A ⊗ B.)
9. [Uhl02] For A ∈ F^{n×n}, det A is the unique normalized, alternating, and multilinear function d : F^{n×n} → F. That is, d(I_n) = 1; d(A) = −d(A′), where A′ denotes the matrix obtained from A by interchanging two rows; and d is linear in each row of A when the remaining rows of A are held fixed.
10. [HJ85] (Cauchy–Binet) Let A ∈ F^{m×k}, B ∈ F^{k×n}, and C = AB. Then
$$\det C[\alpha, \beta] = \sum_{\gamma} \det A[\alpha, \gamma]\, \det B[\gamma, \beta],$$
where α ⊆ {1, 2, ..., m}, β ⊆ {1, 2, ..., n}, with |α| = |β| = r, 1 ≤ r ≤ min{m, k, n}, and the sum is taken over all sets γ ⊆ {1, 2, ..., k} with |γ| = r. (See the sketch following Fact 13.)
11. [HJ85] (Schur Complement) Let A[α] be nonsingular. Then
$$\det A = \det A[\alpha] \cdot \det\big(A[\alpha^c] - A[\alpha^c, \alpha]\, A[\alpha]^{-1} A[\alpha, \alpha^c]\big).$$
12. [HJ85] (Jacobi's Theorem) Let A be nonsingular and let α, β ⊆ {1, 2, ..., n}, with |α| = |β|. Then
$$\det A^{-1}[\alpha^c, \beta^c] = (-1)^{\sum_{i \in \alpha} i + \sum_{j \in \beta} j}\, \frac{\det A[\beta, \alpha]}{\det A}.$$
In particular, if α = β, then $\det A^{-1}[\alpha^c] = \dfrac{\det A[\alpha]}{\det A}$.
13. [HJ85] (Sylvester's Identity) Let α ⊆ {1, 2, ..., n} with |α| = k, and i, j ∈ {1, 2, ..., n} with i, j ∉ α. For A ∈ F^{n×n}, let B = [b_ij] ∈ F^{(n−k)×(n−k)} be defined by b_ij = det A[α ∪ {i}, α ∪ {j}]. Then det B = (det A[α])^{n−k−1} det A.
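A numerical sketch of Facts 1–4 and 10, assuming NumPy (the helper names, test matrices, and index choices are ours):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

# Facts 1-3: adjugate built from cofactors.
def adjugate(A):
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(M)
    return C.T                       # adj A = transpose of the cofactor matrix

A = rng.standard_normal((4, 4))
d = np.linalg.det(A)
assert np.allclose(A @ adjugate(A), d * np.eye(4))        # Fact 1
assert np.isclose(np.linalg.det(adjugate(A)), d ** 3)     # Fact 2 (n - 1 = 3)

# Fact 4 (one condensation step): det A from the 2x2 pivotal minors.
B = np.array([[np.linalg.det(A[np.ix_([0, i], [0, j])])
               for j in range(1, 4)] for i in range(1, 4)])
assert np.isclose(np.linalg.det(B) / A[0, 0] ** 2, d)     # a_11^(n-2) = a_11^2

# Fact 10 (Cauchy-Binet) with m = 4, k = 5, n = 3, r = 2.
M = rng.standard_normal((4, 5)); N = rng.standard_normal((5, 3))
C = M @ N
alpha, beta = [1, 3], [0, 2]
lhs = np.linalg.det(C[np.ix_(alpha, beta)])
rhs = sum(np.linalg.det(M[np.ix_(alpha, list(g))]) *
          np.linalg.det(N[np.ix_(list(g), beta)])
          for g in combinations(range(5), 2))
print(np.isclose(lhs, rhs))   # True
```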
Examples:

1. Let $A = \begin{bmatrix} 1 & 1 & 0 & 0\\ 0 & 1 & 0 & 1\\ 0 & 0 & -3 & 2\\ 0 & -1 & -2 & -4\end{bmatrix}$. Then S_3(A) = 23 because
$$\det A[\{1,2,3\}] = \det\begin{bmatrix} 1 & 1 & 0\\ 0 & 1 & 0\\ 0 & 0 & -3\end{bmatrix} = -3,\quad
\det A[\{1,2,4\}] = \det\begin{bmatrix} 1 & 1 & 0\\ 0 & 1 & 1\\ 0 & -1 & -4\end{bmatrix} = -3,$$
$$\det A[\{1,3,4\}] = \det\begin{bmatrix} 1 & 0 & 0\\ 0 & -3 & 2\\ 0 & -2 & -4\end{bmatrix} = 16,\quad\text{and}\quad
\det A[\{2,3,4\}] = \det\begin{bmatrix} 1 & 0 & 1\\ 0 & -3 & 2\\ -1 & -2 & -4\end{bmatrix} = 13.$$
From the Laplace expansion along the first column and det A[{2,3,4}] = 13, it follows that S_4(A) = det A = 13. Clearly, S_1(A) = tr A = −5.
2. (k-th compound) Let A ∈ F^{4×4} with det A = 9. The second compound C_2(A) is the 6 × 6 matrix whose entries are the 2 × 2 minors of A, and det(C_2(A)) = 729 = (det A)^3, in agreement with Fact 6, since $r = \binom{3}{1} = 3$.

3. (Cauchy–Binet) Let A ∈ C^{4×3}, B ∈ C^{3×4}, and C = AB. With α = {2, 4} and β = {2, 3}, the sum in Fact 10 runs over γ ∈ {{1, 2}, {1, 3}, {2, 3}}:
$$\det C[\{2,4\}, \{2,3\}] = \det A[\{2,4\},\{1,2\}] \det B[\{1,2\},\{2,3\}] + \det A[\{2,4\},\{1,3\}] \det B[\{1,3\},\{2,3\}] + \det A[\{2,4\},\{2,3\}] \det B[\{2,3\},\{2,3\}] = 12 - 44i.$$
4. (Schur Complement) Let $A = \begin{bmatrix} a & \mathbf{b}^*\\ \mathbf{b} & C\end{bmatrix}$, where a ∈ C, b ∈ C^{n−1}, and C ∈ C^{(n−1)×(n−1)}. If C is nonsingular, then det A = (a − b*C^{−1}b) det C. If a ≠ 0, then det A = a · det(C − (1/a) bb*).
5. (Jacobi's Theorem) Let $A = \begin{bmatrix} 1 & -2 & 0\\ 3 & 4 & 0\\ -1 & 0 & 5\end{bmatrix}$ and α = {2} = β. By Jacobi's formula,
$$\det A^{-1}[\{1,3\}] = \frac{\det A[\{2\}]}{\det A} = \frac{4}{50} = \frac{2}{25}.$$
This can be readily verified by computing
$$A^{-1} = \begin{bmatrix} \tfrac{2}{5} & \tfrac{1}{5} & 0\\ -\tfrac{3}{10} & \tfrac{1}{10} & 0\\ \tfrac{2}{25} & \tfrac{1}{25} & \tfrac{1}{5}\end{bmatrix}$$
and verifying that $\det A^{-1}[\{1,3\}] = \det\begin{bmatrix} \tfrac{2}{5} & 0\\ \tfrac{2}{25} & \tfrac{1}{5}\end{bmatrix} = \frac{2}{25}$.
6. (Sylvester's Identity) Let $A = \begin{bmatrix} -7 & i & -3\\ -i & -2 & 1+4i\\ -3 & 1-4i & 5\end{bmatrix}$ and α = {1}. Define B ∈ C^{2×2} with entries b_11 = det A[{1,2}] = 13, b_12 = det A[{1,2},{1,3}] = −7 − 31i, b_21 = det A[{1,3},{1,2}] = −7 + 31i, b_22 = det A[{1,3}] = −44. Then −1582 = det B = (det A[{1}]) det A = (−7) det A, so det A = 226.
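Both computations can be replayed numerically (a NumPy sketch of our own):

```python
import numpy as np

# Example 5: Jacobi's theorem with alpha = beta = {2}.
A = np.array([[1.0, -2, 0], [3, 4, 0], [-1, 0, 5]])
Ainv = np.linalg.inv(A)
lhs = np.linalg.det(Ainv[np.ix_([0, 2], [0, 2])])    # det A^{-1}[{1,3}]
print(np.isclose(lhs, 4 / 50))                       # True: 2/25

# Example 6: Sylvester's identity with alpha = {1}, k = 1.
H = np.array([[-7, 1j, -3], [-1j, -2, 1 + 4j], [-3, 1 - 4j, 5]])
b = lambda r, c: np.linalg.det(H[np.ix_(r, c)])
B = np.array([[b([0, 1], [0, 1]), b([0, 1], [0, 2])],
              [b([0, 2], [0, 1]), b([0, 2], [0, 2])]])
print(np.isclose(np.linalg.det(B), H[0, 0] * np.linalg.det(H)))
# True: det B = (-7) * det A = -1582, so det A = 226
```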
4.3 Eigenvalues and Eigenvectors

Definitions:
An element λ ∈ F is an eigenvalue of a matrix A ∈ F^{n×n} if there exists a nonzero vector x ∈ F^n such that Ax = λx. The vector x is said to be an eigenvector of A corresponding to the eigenvalue λ. A nonzero row vector y is a left eigenvector of A, corresponding to the eigenvalue λ, if yA = λy.
For A ∈ F^{n×n}, the characteristic polynomial of A is given by p_A(x) = det(xI − A).
The algebraic multiplicity, α(λ), of λ ∈ σ(A) is the number of times the eigenvalue occurs as a root of the characteristic polynomial of A.
The spectrum of A ∈ F^{n×n}, σ(A), is the multiset of all eigenvalues of A, with eigenvalue λ appearing α(λ) times in σ(A).
The spectral radius of A ∈ C^{n×n} is ρ(A) = max{|λ| : λ ∈ σ(A)}.
Let p(x) = c_nx^n + c_{n−1}x^{n−1} + ⋯ + c_2x^2 + c_1x + c_0 be a polynomial with coefficients in F. Then p(A) = c_nA^n + c_{n−1}A^{n−1} + ⋯ + c_2A^2 + c_1A + c_0I.
For A ∈ F^{n×n}, the minimal polynomial of A, q_A(x), is the unique monic polynomial of least degree for which q_A(A) = 0.
The vector space ker(A − λI), for λ ∈ σ(A), is the eigenspace of A ∈ F^{n×n} corresponding to λ, and is denoted by E_λ(A).
The geometric multiplicity, γ(λ), of an eigenvalue λ is the dimension of the eigenspace E_λ(A).
An eigenvalue λ is simple if α(λ) = 1.
An eigenvalue λ is semisimple if α(λ) = γ(λ).
For K = C or any other algebraically closed field, a matrix A ∈ K^{n×n} is nonderogatory if γ(λ) = 1 for all λ ∈ σ(A); otherwise A is derogatory. Over an arbitrary field F, a matrix is nonderogatory (derogatory) if it is nonderogatory (derogatory) over the algebraic closure of F.
For K = C or any other algebraically closed field, a matrix A ∈ K^{n×n} is nondefective if every eigenvalue of A is semisimple; otherwise A is defective. Over an arbitrary field F, a matrix is nondefective (defective) if it is nondefective (defective) over the algebraic closure of F.
A matrix A ∈ F^{n×n} is diagonalizable if there exists a nonsingular matrix B ∈ F^{n×n} such that A = BDB^{−1} for some diagonal matrix D ∈ F^{n×n}.
For a monic polynomial p(x) = x^n + c_{n−1}x^{n−1} + ⋯ + c_2x^2 + c_1x + c_0 with coefficients in F, the n × n matrix
$$C(p) = \begin{bmatrix}
0 & 0 & 0 & \cdots & 0 & -c_0\\
1 & 0 & 0 & \cdots & 0 & -c_1\\
0 & 1 & 0 & \cdots & 0 & -c_2\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & 0 & \cdots & 1 & -c_{n-1}
\end{bmatrix}$$
is called the companion matrix of p(x) (see the sketch following these definitions).
Let T be a linear operator on a finite dimensional vector space, V , over a field F . An element λ ∈ F is
an eigenvalue of T if there exists a nonzero vector v ∈ V such that T (v) = λv. The vector v is said to be
an eigenvector of T corresponding to the eigenvalue λ.
For a linear operator T on a finite dimensional vector space V with a basis B, the characteristic polynomial of T is given by p_T(x) = det(xI − [T]_B).
A linear operator T on a finite dimensional vector space, V , is diagonalizable if there exists a basis, B,
for V such that [T ]B is diagonalizable.
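A small construction sketch for the companion matrix (NumPy assumed; `companion` is our hypothetical helper, and `np.poly` recovers the characteristic coefficients numerically):

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of x^n + c_{n-1}x^{n-1} + ... + c_1 x + c_0,
    with coeffs = [c_0, c_1, ..., c_{n-1}]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # subdiagonal of ones
    C[:, -1] = -np.asarray(coeffs)   # last column: -c_0, ..., -c_{n-1}
    return C

# p(x) = x^3 - 7x^2 - 3x + 2, as in Example 9 below.
C = companion([2, -3, -7])
print(np.round(np.poly(C)))          # [ 1. -7. -3.  2.]: coefficients of p_C(x)
```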
Facts:

These facts are grouped into the following categories: Eigenvalues and Eigenvectors, Diagonalization, Polynomials, and Other Facts. All matrices are assumed to be in F^{n×n} unless otherwise stated. All the following facts, except those with a specific reference, can be found in [Mey00, pp. 489–660] or [Lay03, pp. 301–342].
Eigenvalues and Eigenvectors
1. λ ∈ σ(A) if and only if p_A(λ) = 0.
2. For each eigenvalue λ of a matrix A, 1 ≤ γ(λ) ≤ α(λ).
3. A simple eigenvalue is semisimple.
4. For any F, |σ(A)| ≤ n. If F = C or any algebraically closed field, then |σ(A)| = n.
5. If F = C or any algebraically closed field, then $\det A = \prod_{i=1}^{n} \lambda_i$, λ_i ∈ σ(A).
6. If F = C or any algebraically closed field, then $\operatorname{tr} A = \sum_{i=1}^{n} \lambda_i$, λ_i ∈ σ(A).
7. For A ∈ C^{n×n}, λ ∈ σ(A) if and only if λ̄ ∈ σ(A*).
8. For A ∈ R^{n×n}, viewing A ∈ C^{n×n}, λ ∈ σ(A) if and only if λ̄ ∈ σ(A).
9. If A ∈ C^{n×n} is Hermitian (e.g., A ∈ R^{n×n} is symmetric), then A has real eigenvalues and A can be diagonalized. (See also Section 7.2.)
10. A and A^T have the same eigenvalues with the same algebraic multiplicities.
11. If A = [a_ij] is triangular, then σ(A) = {a_11, a_22, ..., a_nn}.
12. If A has all row (column) sums equal to r, then r is an eigenvalue of A.
13. A is singular if and only if det A = 0, if and only if 0 ∈ σ(A).
14. If A is nonsingular and λ is an eigenvalue of A of algebraic multiplicity α(λ), with corresponding eigenvector x, then λ^{−1} is an eigenvalue of A^{−1} with algebraic multiplicity α(λ) and corresponding eigenvector x. (Facts 5, 6, 10, and 14 are checked numerically in the sketch following these facts.)
15. Let λ_1, λ_2, ..., λ_s be distinct eigenvalues of A. For each i = 1, 2, ..., s, let x_{i1}, x_{i2}, ..., x_{ir_i} be linearly independent eigenvectors corresponding to λ_i. Then the vectors x_{11}, ..., x_{1r_1}, x_{21}, ..., x_{2r_2}, ..., x_{s1}, ..., x_{sr_s} are linearly independent.
16. [FIS89] Let T be a linear operator on a finite dimensional vector space over a field F, with basis B. Then λ ∈ F is an eigenvalue of T if and only if λ is an eigenvalue of [T]_B.
17. [FIS89] Let λ_1, λ_2, ..., λ_s be distinct eigenvalues of the linear operator T on a finite dimensional space V. For each i = 1, 2, ..., s, let x_{i1}, x_{i2}, ..., x_{ir_i} be linearly independent eigenvectors corresponding to λ_i. Then the vectors x_{11}, ..., x_{1r_1}, x_{21}, ..., x_{2r_2}, ..., x_{s1}, ..., x_{sr_s} are linearly independent.
18. Let T be a linear operator on a finite dimensional vector space V over a field F. Then λ ∈ F is an eigenvalue of T if and only if p_T(λ) = 0.
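A minimal numerical check of Facts 5, 6, 10, and 14, assuming NumPy (random test matrix ours):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
lam = np.linalg.eigvals(A)

assert np.isclose(np.prod(lam), np.linalg.det(A))         # Fact 5
assert np.isclose(np.sum(lam), np.trace(A))               # Fact 6
assert np.allclose(np.sort_complex(np.linalg.eigvals(A.T)),
                   np.sort_complex(lam))                  # Fact 10
assert np.allclose(np.sort_complex(np.linalg.eigvals(np.linalg.inv(A))),
                   np.sort_complex(1 / lam))              # Fact 14
```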
Diagonalization

19. [Lew91, pp. 135–136] Let λ_1, λ_2, ..., λ_s be distinct eigenvalues of A. If A ∈ C^{n×n}, then A is diagonalizable if and only if α(λ_i) = γ(λ_i) for i = 1, 2, ..., s. If A ∈ R^{n×n}, then A is diagonalizable by a nonsingular matrix B ∈ R^{n×n} if and only if all the eigenvalues of A are real and α(λ_i) = γ(λ_i) for i = 1, 2, ..., s.
20. Method for Diagonalization of A over C (transcribed as code in the sketch following this list): This is a theoretical method using exact arithmetic and is undesirable in decimal arithmetic with rounding errors. See Chapter 43 for information on appropriate numerical methods.
- Find the eigenvalues of A.
- Find a basis x_{i1}, ..., x_{ir_i} for E_{λ_i}(A) for each of the distinct eigenvalues λ_1, ..., λ_k of A.
- If r_1 + ⋯ + r_k = n, then let B = [x_{11} ⋯ x_{1r_1} ⋯ x_{k1} ⋯ x_{kr_k}]. B is invertible and D = B^{−1}AB is a diagonal matrix, whose diagonal entries are the eigenvalues of A, in the order that corresponds to the order of the columns of B. Else A is not diagonalizable.
21. A is diagonalizable if and only if A has n linearly independent eigenvectors.
22. A is diagonalizable if and only if |σ(A)| = n and A is nondefective.
23. If A has n distinct eigenvalues, then A is diagonalizable.
24. A is diagonalizable if and only if q_A(x) can be factored into distinct linear factors.
25. If A is diagonalizable, then so are A^T and A^k, k ∈ N.
26. If A is nonsingular and diagonalizable, then A^{−1} is diagonalizable.
27. If A is idempotent, then A is diagonalizable and σ(A) ⊆ {0, 1}.
28. If A is nilpotent, then σ(A) = {0}. If A is nilpotent and is not the zero matrix, then A is not diagonalizable.
29. [FIS89] Let T be a linear operator on a finite dimensional vector space V with a basis B. Then T is diagonalizable if and only if [T]_B is diagonalizable.
30. [FIS89] A linear operator T on a finite dimensional vector space V is diagonalizable if and only if there exists a basis B = {v_1, ..., v_n} for V and scalars λ_1, ..., λ_n such that T(v_i) = λ_iv_i for 1 ≤ i ≤ n.
31. [FIS89] If a linear operator T on a vector space V of dimension n has n distinct eigenvalues, then it is diagonalizable.
32. [FIS89] The characteristic polynomial of a diagonalizable linear operator on a finite dimensional vector space can be factored into linear terms.
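A naive floating-point transcription of the method in Fact 20 (NumPy assumed; `diagonalize` and its rank tolerance are our own choices, and this is not robust numerical practice — see Chapter 43):

```python
import numpy as np

def diagonalize(A, tol=1e-10):
    """Return (B, D) with A = B D B^{-1}, or None if A appears not to be
    diagonalizable (floating-point sketch of Fact 20)."""
    w, V = np.linalg.eig(A)          # columns of V are eigenvectors
    if np.linalg.matrix_rank(V, tol) < A.shape[0]:
        return None                  # fewer than n independent eigenvectors
    return V, np.diag(w)

A = np.array([[57.0, -21, 21], [-14, 22, -7], [-140, 70, -55]])  # Example 3 below
B, D = diagonalize(A)
print(np.allclose(B @ D @ np.linalg.inv(B), A))   # True
```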
Polynomials

33. [HJ85] (Cayley–Hamilton Theorem) Let p_A(x) = x^n + a_{n−1}x^{n−1} + ⋯ + a_1x + a_0 be the characteristic polynomial of A. Then p_A(A) = A^n + a_{n−1}A^{n−1} + ⋯ + a_1A + a_0I_n = 0. (A numerical check appears in the sketch after this list.)
34. [FIS89] (Cayley–Hamilton Theorem for a Linear Operator) Let p_T(x) = x^n + a_{n−1}x^{n−1} + ⋯ + a_1x + a_0 be the characteristic polynomial of a linear operator T on a finite dimensional vector space V. Then p_T(T) = T^n + a_{n−1}T^{n−1} + ⋯ + a_1T + a_0I = T_0, where I is the identity operator and T_0 is the zero linear operator on V.
35. p_{A^T}(x) = p_A(x).
36. The minimal polynomial q_A(x) of a matrix A is a factor of the characteristic polynomial p_A(x) of A.
37. If λ is an eigenvalue of A associated with the eigenvector x, then p(λ) is an eigenvalue of the matrix p(A) associated with the eigenvector x, where p(x) is a polynomial with coefficients in F.
38. If B is nonsingular, then p_A(x) = p_{B^{−1}AB}(x); therefore, A and B^{−1}AB have the same eigenvalues.
39. Let p_A(x) = x^n + a_{n−1}x^{n−1} + ⋯ + a_1x + a_0 be the characteristic polynomial of A. If |σ(A)| = n, then a_k = (−1)^{n−k} S_{n−k}(λ_1, ..., λ_n), k = 0, 1, ..., n − 1, where S_k(λ_1, ..., λ_n) is the k-th symmetric function of the eigenvalues of A.
40. Let p_A(x) = x^n + a_{n−1}x^{n−1} + ⋯ + a_1x + a_0 be the characteristic polynomial of A. Then a_k = (−1)^{n−k} S_{n−k}(A), k = 0, 1, ..., n − 1.
41. If |σ(A)| = n, then S_k(A) = S_k(λ_1, ..., λ_n).
42. If C(p) is the companion matrix of the polynomial p(x), then p(x) = p_{C(p)}(x) = q_{C(p)}(x).
43. [HJ85, p. 135] If |σ(A)| = n, A is nonderogatory, and B commutes with A, then there exists a polynomial f(x) of degree less than n such that B = f(A).
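A quick Cayley–Hamilton check on a random matrix (NumPy sketch; `np.poly` returns the characteristic coefficients, highest degree first):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
a = np.poly(A)                        # [1, a_{n-1}, ..., a_1, a_0]

# Horner evaluation of p_A(A); should be (numerically) the zero matrix.
P = np.zeros_like(A)
for coeff in a:
    P = P @ A + coeff * np.eye(4)
print(np.allclose(P, 0, atol=1e-8))   # True: Cayley-Hamilton
```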
Other Facts:

44. If A is nonsingular and λ is an eigenvalue of A of algebraic multiplicity α(λ), with corresponding eigenvector x, then det(A)λ^{−1} is an eigenvalue of adj A with algebraic multiplicity α(λ) and corresponding eigenvector x.
45. [Lew91] If λ ∈ σ(A), then any nonzero column of adj(A − λI) is an eigenvector of A corresponding to λ.
46. If AB = BA, then A and B have a common eigenvector.
47. If A ∈ F^{m×n} and B ∈ F^{n×m}, then σ(AB) = σ(BA) except for the zero eigenvalues.
48. If A ∈ F^{m×m} and B ∈ F^{n×n}, λ ∈ σ(A), µ ∈ σ(B), with corresponding eigenvectors u and v, respectively, then λµ ∈ σ(A ⊗ B), with corresponding eigenvector u ⊗ v. (See Section 10.5 for the definition of A ⊗ B.)
Examples:

1. Let $A = \begin{bmatrix} 0 & -1\\ 1 & 0\end{bmatrix}$. Then, viewing A ∈ C^{2×2}, σ(A) = {−i, i}. That is, A has no eigenvalues over the reals.
2. Let $A = \begin{bmatrix} -3 & 7 & -1\\ 6 & 8 & -2\\ 72 & -28 & 19\end{bmatrix}$. Then p_A(x) = (x + 6)(x − 15)^2 = q_A(x), λ_1 = −6, α(λ_1) = 1, γ(λ_1) = 1, λ_2 = 15, α(λ_2) = 2, γ(λ_2) = 1. Also, a set of linearly independent eigenvectors is $\left\{\begin{bmatrix} -1\\ -2\\ 4\end{bmatrix}, \begin{bmatrix} -1\\ 1\\ 4\end{bmatrix}\right\}$. So, A is not diagonalizable.

3. Let $A = \begin{bmatrix} 57 & -21 & 21\\ -14 & 22 & -7\\ -140 & 70 & -55\end{bmatrix}$. Then p_A(x) = (x + 6)(x − 15)^2, q_A(x) = (x + 6)(x − 15), λ_1 = −6, α(λ_1) = 1, γ(λ_1) = 1, λ_2 = 15, α(λ_2) = 2, γ(λ_2) = 2. Also, a set of linearly independent eigenvectors is $\left\{\begin{bmatrix} -1\\ 0\\ 2\end{bmatrix}, \begin{bmatrix} 1\\ 2\\ 0\end{bmatrix}, \begin{bmatrix} -3\\ 1\\ 10\end{bmatrix}\right\}$. So, A is diagonalizable.
4. Let $A = \begin{bmatrix} -5+4i & 1 & i\\ 2+8i & -4 & 2i\\ 20-4i & -4 & -i\end{bmatrix}$. Then σ(A) = {−6, −3, 3i}. If B = (1/9)A^2 + 2A − 4I, then σ(B) = {−12, −9, −5 + 6i}.

5. Let $A = \begin{bmatrix} -2 & 1 & 0\\ 0 & -3i & 0\\ 0 & 0 & 1\end{bmatrix}$ and $B = \begin{bmatrix} -2 & 1 & 0\\ 3 & -1 & 1\\ 4 & 0 & 1\end{bmatrix}$. B is nonsingular, so A and
$$B^{-1}AB = \begin{bmatrix} -1+3i & 1-i & i\\ 5+6i & -1-2i & 1+2i\\ 8-12i & -4+4i & 1-4i\end{bmatrix}$$
have the same eigenvalues, which are given on the diagonal of A.

6. Let $A = \begin{bmatrix} 2 & -1 & 0\\ 0 & 3 & 1\end{bmatrix}$ and $B = \begin{bmatrix} 3 & 0\\ 2 & 1\\ 1 & 0\end{bmatrix}$. Then $AB = \begin{bmatrix} 4 & -1\\ 7 & 3\end{bmatrix}$, $\sigma(AB) = \left\{\tfrac{1}{2}(7 + 3\sqrt{3}\,i), \tfrac{1}{2}(7 - 3\sqrt{3}\,i)\right\}$, $BA = \begin{bmatrix} 6 & -3 & 0\\ 4 & 1 & 1\\ 2 & -1 & 0\end{bmatrix}$, and $\sigma(BA) = \left\{\tfrac{1}{2}(7 + 3\sqrt{3}\,i), \tfrac{1}{2}(7 - 3\sqrt{3}\,i), 0\right\}$.
7. Let $A = \begin{bmatrix} 1 & 1 & 0\\ 0 & 1 & 0\\ 0 & 0 & -3i\end{bmatrix}$ and $B = \begin{bmatrix} -2 & 7 & 0\\ 0 & -2 & 0\\ 0 & 0 & i\end{bmatrix}$. Then $AB = BA = \begin{bmatrix} -2 & 5 & 0\\ 0 & -2 & 0\\ 0 & 0 & 3\end{bmatrix}$. A and B share the eigenvector x = [1, 0, 0]^T, corresponding to the eigenvalues λ = 1 of A and µ = −2 of B.
8. Let $A = \begin{bmatrix} 1 & 1 & 0 & 0\\ 0 & 1 & 0 & 1\\ 0 & 0 & -3 & 2\\ 0 & -1 & -2 & -4\end{bmatrix}$. Then p_A(x) = x^4 + 5x^3 + 4x^2 − 23x + 13. S_4(A), S_3(A), and S_1(A) were computed in Example 1 of Section 4.2, and it is straightforward to verify that S_2(A) = 4. Comparing these values to the characteristic polynomial, S_4(A) = 13 = (−1)^4 · 13, S_3(A) = 23 = (−1)^3 · (−23), S_2(A) = (−1)^2 · 4, and S_1(A) = −5 = (−1) · 5. It follows that S_4(λ_1, λ_2, λ_3, λ_4) = λ_1λ_2λ_3λ_4 = 13, S_3(λ_1, λ_2, λ_3, λ_4) = 23, S_2(λ_1, λ_2, λ_3, λ_4) = 4, and S_1(λ_1, λ_2, λ_3, λ_4) = λ_1 + λ_2 + λ_3 + λ_4 = −5 (these values can also be verified with a computer algebra system or numerical software).
9. Let p(x) = x^3 − 7x^2 − 3x + 2 and $C = C(p) = \begin{bmatrix} 0 & 0 & -2\\ 1 & 0 & 3\\ 0 & 1 & 7\end{bmatrix}$. Then p_C(x) = x^3 − 7x^2 − 3x + 2 = p(x). Also,
$$p_C(C) = C^3 - 7C^2 - 3C + 2I = \begin{bmatrix} -2 & -14 & -104\\ 3 & 19 & 142\\ 7 & 52 & 383\end{bmatrix} - 7\begin{bmatrix} 0 & -2 & -14\\ 0 & 3 & 19\\ 1 & 7 & 52\end{bmatrix} - 3\begin{bmatrix} 0 & 0 & -2\\ 1 & 0 & 3\\ 0 & 1 & 7\end{bmatrix} + 2\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix} = \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\end{bmatrix}.$$
Applications:
1. (Markov Chains) (See also Chapter 54 for more information.) A Markov chain describes a process in which a system can be in any one of n states: s_1, s_2, ..., s_n. The probability of entering state s_i depends only on the state previously occupied by the system. The transition probability of entering state j, given that the system is in state i, is denoted by p_ij. The transition matrix is the matrix P = [p_ij]; its rows have sum 1. A (row or column) vector is a probability vector if its entries are nonnegative and sum to 1. The probability row vector π^{(k)} = (π_1^{(k)}, π_2^{(k)}, ..., π_n^{(k)}), k ≥ 0, is called the state vector of the system at time k if its i-th entry is the probability that the system is in state s_i at time k. In particular, when k = 0, the state vector is called the initial state vector, and its i-th entry is the probability that the system begins at state s_i. It follows from probability theory that π^{(k+1)} = π^{(k)}P, and thus inductively that π^{(k)} = π^{(0)}P^k. If the entries of some power of P are all positive, then P is said to be regular. If P is a regular transition matrix, then as n → ∞,
$$P^n \to \begin{bmatrix}
\pi_1 & \pi_2 & \cdots & \pi_n\\
\pi_1 & \pi_2 & \cdots & \pi_n\\
\vdots & \vdots & \ddots & \vdots\\
\pi_1 & \pi_2 & \cdots & \pi_n
\end{bmatrix}.$$
The row vector π = (π_1, π_2, ..., π_n) is called the steady state vector; π is a probability vector, and as n → ∞, π^{(n)} → π. The vector π is the unique probability row vector with the property that πP = π. That is, π is the unique probability row vector that is a left eigenvector of P for eigenvalue 1.
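A steady-state computation following this characterization (NumPy sketch; the 2 × 2 chain is our own illustrative choice):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])            # regular transition matrix (rows sum to 1)

# Steady state = left eigenvector of P for eigenvalue 1,
# i.e., an (ordinary) eigenvector of P^T.
w, V = np.linalg.eig(P.T)
v = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = v / v.sum()                      # normalize to a probability vector
print(pi)                             # [0.8 0.2]
print(np.allclose(pi @ P, pi))        # True: pi P = pi
print(np.linalg.matrix_power(P, 50))  # both rows approach pi
```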
2. (Differential Equations) [Mey00, pp. 541–546] Consider the system of linear differential equations
$$\begin{cases}
x_1' = a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n\\
x_2' = a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n\\
\quad\vdots\\
x_n' = a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n,
\end{cases}$$
where each of the unknowns x_1, x_2, ..., x_n is a differentiable function of the real variable t. This system of linear differential equations can be written in matrix form as x′ = Ax, where A = [a_ij], x = [x_1(t), x_2(t), ..., x_n(t)]^T, and x′ = [x_1′(t), x_2′(t), ..., x_n′(t)]^T. If A is diagonalizable, there exists a nonsingular matrix B (the columns of the matrix B are linearly independent eigenvectors of A) such that B^{−1}AB = D is a diagonal matrix, so x′ = BDB^{−1}x, or B^{−1}x′ = DB^{−1}x. Let u = B^{−1}x. The linear system of differential equations u′ = Du has solution u = [k_1e^{λ_1t}, k_2e^{λ_2t}, ..., k_ne^{λ_nt}]^T, where λ_1, λ_2, ..., λ_n are the eigenvalues of A. It follows that x = Bu. (See also Chapter 55.)
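A sketch of this solution procedure, assuming NumPy and a diagonalizable matrix A of our own choosing (the spot check against a finite-difference derivative is also ours):

```python
import numpy as np

# Solve x' = Ax, x(0) = x0, via diagonalization: x(t) = B (k * e^{w t}).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # eigenvalues -1 and -2
x0 = np.array([1.0, 0.0])

w, B = np.linalg.eig(A)
k = np.linalg.solve(B, x0)            # constants: u(0) = B^{-1} x(0)

def x(t):
    return B @ (k * np.exp(w * t))    # x = B u, with u_i(t) = k_i e^{lambda_i t}

t, h = 0.7, 1e-6
print(np.allclose((x(t + h) - x(t - h)) / (2 * h), A @ x(t), atol=1e-4))  # True
```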
3. (Dynamical Systems) [Lay03, pp. 315–316] Consider the dynamical system given by u_{k+1} = Au_k, where A = [a_ij] and u_0 = [a_1, a_2, ..., a_n]^T. If A is diagonalizable, there exist n linearly independent eigenvectors, x_1, x_2, ..., x_n, of A. The vector u_0 can then be written as a linear combination of the eigenvectors, that is, u_0 = c_1x_1 + c_2x_2 + ⋯ + c_nx_n. Then u_1 = Au_0 = A(c_1x_1 + c_2x_2 + ⋯ + c_nx_n) = c_1λ_1x_1 + c_2λ_2x_2 + ⋯ + c_nλ_nx_n. Inductively, u_{k+1} = Au_k = c_1λ_1^{k+1}x_1 + c_2λ_2^{k+1}x_2 + ⋯ + c_nλ_n^{k+1}x_n. Thus, the long-term behavior of the dynamical system can be studied using the eigenvalues of the matrix A. (See also Chapter 56.)
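The eigenvector expansion can be compared directly against repeated multiplication (NumPy sketch; the illustrative matrix is ours):

```python
import numpy as np

A = np.array([[0.95, 0.03],
              [0.05, 0.97]])          # eigenvalues 1.0 and 0.92
u0 = np.array([0.4, 0.6])

# Expand u0 in eigenvectors: u0 = c_1 x_1 + c_2 x_2, so u_k = sum_i c_i lambda_i^k x_i.
w, X = np.linalg.eig(A)
c = np.linalg.solve(X, u0)

k = 40
direct   = np.linalg.matrix_power(A, k) @ u0
via_eigs = X @ (c * w ** k)
print(np.allclose(direct, via_eigs))   # True
# The eigenvalue of largest modulus (here 1.0) dominates the long-term behavior.
```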
References
[Ait56] A. C. Aitken. Determinants and Matrices, 9th ed. Oliver and Boyd, Edinburgh, 1956.
[FB90] J. B. Fraleigh and R. A. Beauregard. Linear Algebra, 2nd ed. Addison-Wesley, Reading, MA, 1990.
[FIS89] S. H. Friedberg, A. J. Insel, and L. E. Spence. Linear Algebra, 2nd ed. Prentice Hall, Upper Saddle
River, NJ, 1989.
[Goo03] E. G. Goodaire. Linear Algebra: A Pure and Applied First Course. Prentice Hall, Upper Saddle River, NJ, 2003.
[HJ85] R. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, 1985.
[Lay03] D. C. Lay. Linear Algebra and Its Applications, 3rd ed. Addison-Wesley, Boston, 2003.
[Lew91] D. W. Lewis. Matrix Theory. World Scientific, Singapore, 1991.
[Mey00] C. D. Meyer. Matrix Analysis and Applied Linear Algebra. SIAM, Philadelphia, 2000.
[Uhl02] F. Uhlig. Transform Linear Algebra. Prentice Hall, Upper Saddle River, NJ, 2002.