Basic matrix algebra
MATRICES AND VECTOR SPACES
In a similar way we may denote a vector $\mathbf{x}$ in terms of its components $x_i$ in a basis $\mathbf{e}_i$, $i = 1, 2, \ldots, N$, by the array
$$
x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix},
$$
which is a special case of (8.25) and is called a column matrix (or conventionally,
and slightly confusingly, a column vector or even just a vector – strictly speaking
the term ‘vector’ refers to the geometrical entity x). The column matrix x can also
be written as
$$
x = (x_1 \; x_2 \; \cdots \; x_N)^{\mathrm{T}},
$$
which is the transpose of a row matrix (see section 8.6).
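As an illustrative aside (not part of the original text), the relation between a row matrix and the corresponding column matrix can be seen directly in NumPy, where `.T` performs the transpose; the component values here are arbitrary:

```python
import numpy as np

# A 1 x N row matrix holding components x1, ..., xN (arbitrary values).
row = np.array([[1.0, 2.0, 3.0]])   # shape (1, 3)

# Its transpose is the N x 1 column matrix representing the same vector.
x = row.T                            # shape (3, 1)

print(x.shape)  # (3, 1)
```

Note that a genuinely two-dimensional `(1, N)` array is used, so that transposition actually swaps the dimensions.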
We note that in a different basis $\mathbf{e}'_i$ the vector $\mathbf{x}$ would be represented by a different column matrix containing the components $x'_i$ in the new basis, i.e.
$$
x' = \begin{pmatrix} x'_1 \\ x'_2 \\ \vdots \\ x'_N \end{pmatrix}.
$$
Thus, we use $x$ and $x'$ to denote different column matrices which, in different bases $\mathbf{e}_i$ and $\mathbf{e}'_i$, represent the same vector $\mathbf{x}$. In many texts, however, this distinction is not made and $\mathbf{x}$ (rather than $x$) is equated to the corresponding column matrix; if we regard $\mathbf{x}$ as the geometrical entity, however, this can be misleading and so we explicitly make the distinction. A similar argument follows for linear operators; the same linear operator $\mathcal{A}$ is described in different bases by different matrices $A$ and $A'$, containing different matrix elements.
8.4 Basic matrix algebra
The basic algebra of matrices may be deduced from the properties of the linear
operators that they represent. In a given basis the action of two linear operators
A and B on an arbitrary vector x (see the beginning of subsection 8.2.1), when
written in terms of components using (8.24), is given by
$$
\sum_j (A + B)_{ij} x_j = \sum_j A_{ij} x_j + \sum_j B_{ij} x_j,
$$
$$
\sum_j (\lambda A)_{ij} x_j = \lambda \sum_j A_{ij} x_j,
$$
$$
\sum_j (AB)_{ij} x_j = \sum_k A_{ik} (Bx)_k = \sum_k \sum_j A_{ik} B_{kj} x_j.
$$
Now, since x is arbitrary, we can immediately deduce the way in which matrices
are added or multiplied, i.e.
$$(A + B)_{ij} = A_{ij} + B_{ij}, \tag{8.26}$$
$$(\lambda A)_{ij} = \lambda A_{ij}, \tag{8.27}$$
$$(AB)_{ij} = \sum_k A_{ik} B_{kj}. \tag{8.28}$$
We note that a matrix element may, in general, be complex. We now discuss
matrix addition and multiplication in more detail.
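As a numerical check (not part of the original text), the element-wise rules (8.26)–(8.28) can be verified with NumPy for matrices with complex elements; the dimensions and seed chosen here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, R = 2, 3, 4

# Matrix elements may, in general, be complex, as noted in the text.
A = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
B = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
C = rng.standard_normal((N, R)) + 1j * rng.standard_normal((N, R))
lam = 2.5 - 0.5j

# (8.26): (A + B)_ij = A_ij + B_ij, checked at one pair of subscripts.
assert np.isclose((A + B)[1, 2], A[1, 2] + B[1, 2])

# (8.27): (lam A)_ij = lam * A_ij.
assert np.isclose((lam * A)[0, 1], lam * A[0, 1])

# (8.28): (AC)_ij = sum_k A_ik C_kj.
i, j = 1, 3
assert np.isclose((A @ C)[i, j], sum(A[i, k] * C[k, j] for k in range(N)))
```

The `@` operator implements exactly the summation in (8.28), so the last assertion compares the built-in product against the explicit sum over the repeated index.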
8.4.1 Matrix addition and multiplication by a scalar
From (8.26) we see that the sum of two matrices, S = A + B, is the matrix whose
elements are given by
Sij = Aij + Bij
for every pair of subscripts i, j, with i = 1, 2, . . . , M and j = 1, 2, . . . , N. For
example, if A and B are 2 × 3 matrices then S = A + B is given by
$$
\begin{pmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \end{pmatrix}
= \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \end{pmatrix}
+ \begin{pmatrix} B_{11} & B_{12} & B_{13} \\ B_{21} & B_{22} & B_{23} \end{pmatrix}
$$
$$
= \begin{pmatrix} A_{11} + B_{11} & A_{12} + B_{12} & A_{13} + B_{13} \\ A_{21} + B_{21} & A_{22} + B_{22} & A_{23} + B_{23} \end{pmatrix}. \tag{8.29}
$$
Clearly, for the sum of two matrices to have any meaning, the matrices must have
the same dimensions, i.e. both be M × N matrices.
From definition (8.29) it follows that A + B = B + A and that the sum of a
number of matrices can be written unambiguously without bracketing, i.e. matrix
addition is commutative and associative.
The difference of two matrices is defined by direct analogy with addition. The
matrix D = A − B has elements
$$
D_{ij} = A_{ij} - B_{ij}, \qquad \text{for } i = 1, 2, \ldots, M,\; j = 1, 2, \ldots, N. \tag{8.30}
$$
From (8.27) the product of a matrix A with a scalar λ is the matrix with
elements λAij , for example
$$
\lambda \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \end{pmatrix}
= \begin{pmatrix} \lambda A_{11} & \lambda A_{12} & \lambda A_{13} \\ \lambda A_{21} & \lambda A_{22} & \lambda A_{23} \end{pmatrix}. \tag{8.31}
$$
Multiplication by a scalar is distributive and associative.
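The distributive and associative properties just stated can be confirmed numerically (a check added here, not part of the original text); the particular matrices and scalars are arbitrary:

```python
import numpy as np

A = np.array([[1.0, -2.0, 3.0],
              [0.5,  4.0, -1.0]])
B = np.array([[2.0,  0.0, 1.0],
              [-1.0, 3.0, 2.0]])
lam, mu = 3.0, -2.0

# Distributive over matrix addition: lam(A + B) = lam A + lam B.
assert np.allclose(lam * (A + B), lam * A + lam * B)

# Associative in the scalars: (lam mu) A = lam (mu A).
assert np.allclose((lam * mu) * A, lam * (mu * A))
```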
The matrices A, B and C are given by
$$
A = \begin{pmatrix} 2 & -1 \\ 3 & 1 \end{pmatrix}, \qquad
B = \begin{pmatrix} 1 & 0 \\ 0 & -2 \end{pmatrix}, \qquad
C = \begin{pmatrix} -2 & 1 \\ -1 & 1 \end{pmatrix}.
$$
Find the matrix D = A + 2B − C.

$$
D = \begin{pmatrix} 2 & -1 \\ 3 & 1 \end{pmatrix}
+ 2\begin{pmatrix} 1 & 0 \\ 0 & -2 \end{pmatrix}
- \begin{pmatrix} -2 & 1 \\ -1 & 1 \end{pmatrix}
= \begin{pmatrix} 2 + 2\times 1 - (-2) & -1 + 2\times 0 - 1 \\ 3 + 2\times 0 - (-1) & 1 + 2\times(-2) - 1 \end{pmatrix}
= \begin{pmatrix} 6 & -2 \\ 4 & -4 \end{pmatrix}.
$$
From the above considerations we see that the set of all, in general complex,
M × N matrices (with fixed M and N) forms a linear vector space of dimension
MN. One basis for the space is the set of M × N matrices $E^{(p,q)}$ with the property that $E^{(p,q)}_{ij} = 1$ if $i = p$ and $j = q$ whilst $E^{(p,q)}_{ij} = 0$ for all other values of i and j, i.e. each matrix has only one non-zero entry, which equals unity. Here the pair (p, q) is simply a label that picks out a particular one of the matrices $E^{(p,q)}$, the
total number of which is MN.
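A short sketch (our illustration, not from the original text) makes the basis property concrete: any M × N matrix expands uniquely in the MN matrices $E^{(p,q)}$, with the matrix elements themselves as expansion coefficients. Indices are 0-based here, unlike the text:

```python
import numpy as np

M, N = 2, 3

def E(p, q):
    """Basis matrix E^(p,q): unity in row p, column q (0-indexed), zero elsewhere."""
    mat = np.zeros((M, N))
    mat[p, q] = 1.0
    return mat

# An arbitrary 2 x 3 matrix to expand in the basis.
A = np.array([[1.0, -2.0, 3.0],
              [0.5,  4.0, -1.0]])

# A = sum over (p, q) of A_pq * E^(p,q): the coefficients are the elements A_pq.
expansion = sum(A[p, q] * E(p, q) for p in range(M) for q in range(N))
assert np.allclose(expansion, A)

# The total number of basis matrices is MN, the dimension of the space.
assert M * N == 6
```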
8.4.2 Multiplication of matrices
Let us consider again the ‘transformation’ of one vector into another, y = A x,
which, from (8.24), may be described in terms of components with respect to a
particular basis as
$$
y_i = \sum_{j=1}^{N} A_{ij} x_j \qquad \text{for } i = 1, 2, \ldots, M. \tag{8.32}
$$
Writing this in matrix form as y = Ax we have
$$
\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_M \end{pmatrix}
= \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1N} \\ A_{21} & A_{22} & \cdots & A_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ A_{M1} & A_{M2} & \cdots & A_{MN} \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{pmatrix}, \tag{8.33}
$$
where each component $y_i$ is formed from the $i$th row of the matrix and the column of components $x_j$: using (8.32) for $i = 2$, for example,
$$
y_2 = A_{21} x_1 + A_{22} x_2 + \cdots + A_{2N} x_N.
$$
All the other components yi are calculated similarly.
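The row-by-column recipe for a single component can be spelled out explicitly (our illustration, with arbitrary values; indices 0-based in the code):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])       # M x N matrix with M = 2, N = 3
x = np.array([[1.0], [0.5], [-1.0]])  # N x 1 column matrix

# Full product y = Ax.
y = A @ x

# (8.32) for the second component: the second row of A dotted with x.
y2 = sum(A[1, j] * x[j, 0] for j in range(3))
assert np.isclose(y[1, 0], y2)
```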
If instead we operate with A on a basis vector $\mathbf{e}_j$ having all components zero except for the jth, which equals unity, then we find
$$
A e_j = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1N} \\ A_{21} & A_{22} & \cdots & A_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ A_{M1} & A_{M2} & \cdots & A_{MN} \end{pmatrix}
\begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix}
= \begin{pmatrix} A_{1j} \\ A_{2j} \\ \vdots \\ A_{Mj} \end{pmatrix},
$$
and so confirm our identification of the matrix element Aij as the ith component
of Aej in this basis.
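This identification is easy to confirm numerically (our check, not part of the original text): multiplying by the jth basis column simply picks out the jth column of the matrix.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

j = 1                       # pick the second basis vector (0-indexed)
e_j = np.zeros((3, 1))
e_j[j, 0] = 1.0             # all components zero except the jth, which is unity

# A e_j reproduces the jth column of A, whose ith entry is A_ij.
assert np.allclose((A @ e_j)[:, 0], A[:, j])
```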
From (8.28) we can extend our discussion to the product of two matrices
P = AB, where P is the matrix of the quantities formed by the operation of
the rows of A on the columns of B, treating each column of B in turn as the
vector x represented in component form in (8.32). It is clear that, for this to be
a meaningful definition, the number of columns in A must equal the number of
rows in B. Thus the product AB of an M × N matrix A with an N × R matrix B
is itself an M × R matrix P, where
$$
P_{ij} = \sum_{k=1}^{N} A_{ik} B_{kj} \qquad \text{for } i = 1, 2, \ldots, M,\; j = 1, 2, \ldots, R.
$$
For example, P = AB may be written in matrix form
$$
\begin{pmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{pmatrix}
= \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \end{pmatrix}
\begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \\ B_{31} & B_{32} \end{pmatrix},
$$
where
$$
\begin{aligned}
P_{11} &= A_{11} B_{11} + A_{12} B_{21} + A_{13} B_{31}, \qquad
P_{12} = A_{11} B_{12} + A_{12} B_{22} + A_{13} B_{32},\\
P_{21} &= A_{21} B_{11} + A_{22} B_{21} + A_{23} B_{31}, \qquad
P_{22} = A_{21} B_{12} + A_{22} B_{22} + A_{23} B_{32}.
\end{aligned}
$$
Multiplication of more than two matrices follows naturally and is associative.
So, for example,
A(BC) ≡ (AB)C,
(8.34)
provided, of course, that all the products are defined.
As mentioned above, if A is an M × N matrix and B is an N × M matrix then
two product matrices are possible, i.e.
$$
P = AB \qquad \text{and} \qquad Q = BA.
$$
These are clearly not the same, since P is an M × M matrix whilst Q is an
N × N matrix. Thus, particular care must be taken to write matrix products in
the intended order; P = AB but Q = BA. We note in passing that $A^2$ means $AA$, $A^3$ means $A(AA) = (AA)A$ etc. Even if both A and B are square, in general
$$
AB \neq BA, \tag{8.35}
$$
i.e. the multiplication of matrices is not, in general, commutative.
Evaluate P = AB and Q = BA where
$$
A = \begin{pmatrix} 3 & 2 & -1 \\ 0 & 3 & 2 \\ 1 & -3 & 4 \end{pmatrix}, \qquad
B = \begin{pmatrix} 2 & -2 & 3 \\ 1 & 1 & 0 \\ 3 & 2 & 1 \end{pmatrix}.
$$
As we saw for the 2 × 2 case above, the element $P_{ij}$ of the matrix P = AB is found by mentally taking the 'scalar product' of the ith row of A with the jth column of B. For example, $P_{11} = 3 \times 2 + 2 \times 1 + (-1) \times 3 = 5$, $P_{12} = 3 \times (-2) + 2 \times 1 + (-1) \times 2 = -6$, etc. Thus
$$
P = AB = \begin{pmatrix} 3 & 2 & -1 \\ 0 & 3 & 2 \\ 1 & -3 & 4 \end{pmatrix}
\begin{pmatrix} 2 & -2 & 3 \\ 1 & 1 & 0 \\ 3 & 2 & 1 \end{pmatrix}
= \begin{pmatrix} 5 & -6 & 8 \\ 9 & 7 & 2 \\ 11 & 3 & 7 \end{pmatrix},
$$
and, similarly,
$$
Q = BA = \begin{pmatrix} 2 & -2 & 3 \\ 1 & 1 & 0 \\ 3 & 2 & 1 \end{pmatrix}
\begin{pmatrix} 3 & 2 & -1 \\ 0 & 3 & 2 \\ 1 & -3 & 4 \end{pmatrix}
= \begin{pmatrix} 9 & -11 & 6 \\ 3 & 5 & 1 \\ 10 & 9 & 5 \end{pmatrix}.
$$
These results illustrate that, in general, two matrices do not commute.

The property that matrix multiplication is distributive over addition, i.e. that
$$
(A + B)C = AC + BC \tag{8.36}
$$
and
$$
C(A + B) = CA + CB, \tag{8.37}
$$
follows directly from its definition.
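Both distributive laws can be checked numerically on matrices of compatible but unequal dimensions (our illustration; the shapes are chosen so that every product is defined):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((2, 3))
C = rng.standard_normal((3, 4))   # multiplies A + B from the right
D = rng.standard_normal((4, 2))   # multiplies A + B from the left

# (8.36): (A + B)C = AC + BC.
assert np.allclose((A + B) @ C, A @ C + B @ C)

# (8.37): left multiplication distributes likewise, D(A + B) = DA + DB.
assert np.allclose(D @ (A + B), D @ A + D @ B)
```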
8.4.3 The null and identity matrices
Both the null matrix and the identity matrix are frequently encountered, and we
take this opportunity to introduce them briefly, leaving their uses until later. The
null or zero matrix 0 has all elements equal to zero, and so its properties are
$$
A0 = 0 = 0A, \qquad A + 0 = 0 + A = A.
$$
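These defining properties of the null matrix are immediate to verify (our check, not part of the original text), with 0 taken square here so that both products A0 and 0A are defined:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Z = np.zeros((2, 2))   # the null matrix 0

# A0 = 0 = 0A.
assert (A @ Z == Z).all() and (Z @ A == Z).all()

# A + 0 = 0 + A = A.
assert (A + Z == A).all() and (Z + A == A).all()
```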