Simultaneous linear equations
    n    x_{n+1}          ε_n
    1    8.5              4.5
    2    5.191            1.19
    3    4.137            1.4 × 10^{−1}
    4    4.002 257        2.3 × 10^{−3}
    5    4.000 000 637    6.4 × 10^{−7}
    6    4                —

Table 27.5  Successive approximations to √16 using the iteration scheme (27.20).
The following is an iteration scheme for finding the square root of X:

    x_{n+1} = (1/2) ( x_n + X/x_n ).    (27.20)

Show that it has second-order convergence and illustrate its efficiency by finding, say, √16 starting with the very poor guess √16 = 1.
If this scheme does converge to ξ then ξ will satisfy

    ξ = (1/2) ( ξ + X/ξ )   ⇒   ξ^2 = X,

as required. The iteration function F is given by

    F(x) = (1/2) ( x + X/x ),

and so, since ξ^2 = X,

    F'(ξ) = (1/2) ( 1 − X/x^2 ) |_{x=ξ} = 0,

whilst

    F''(ξ) = ( X/x^3 ) |_{x=ξ} = 1/ξ ≠ 0.
Thus the procedure has second-order, but not third-order, convergence.
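For readers who want to experiment, a minimal Python sketch of scheme (27.20) might look as follows; the function name sqrt_iterate and the fixed six-step loop are our own choices, not part of the text.

    def sqrt_iterate(X, x=1.0, n_steps=6):
        """Iterate x_{n+1} = (x_n + X/x_n)/2, scheme (27.20), printing
        each iterate and its difference from the true square root."""
        true = X ** 0.5
        for n in range(1, n_steps + 1):
            x = 0.5 * (x + X / x)
            print(n, x, abs(x - true))  # n, x_{n+1}, epsilon_n
        return x

    sqrt_iterate(16.0)  # starts from the very poor guess x_1 = 1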
We now show the procedure in action. Table 27.5 gives successive values of x_n and of ε_n, the difference between x_n and the true value, 4. As we can see, the scheme is crude initially, but once x_n gets close to ξ, it homes in on the true value extremely rapidly.

27.3 Simultaneous linear equations
As we saw in chapter 8, many situations in physical science can be described
approximately or exactly by a set of N simultaneous linear equations in N
variables (unknowns), x_i, i = 1, 2, . . . , N. The equations take the general form

    A_{11} x_1 + A_{12} x_2 + · · · + A_{1N} x_N = b_1,
    A_{21} x_1 + A_{22} x_2 + · · · + A_{2N} x_N = b_2,
        ⋮                                            (27.21)
    A_{N1} x_1 + A_{N2} x_2 + · · · + A_{NN} x_N = b_N,
where the A_{ij} are constants and form the elements of a square matrix A. The b_i
are given and form a column matrix b. If A is non-singular then (27.21) can be
solved for the x_i using the inverse of A, according to the formula

    x = A^{−1} b.
This approach was discussed at length in chapter 8 and will not be considered
further here.
27.3.1 Gaussian elimination
We follow instead a continuation of one of the earliest techniques acquired by a
student of algebra, namely the solving of simultaneous equations (initially only
two in number) by the successive elimination of all the variables but one. This
(known as Gaussian elimination) is achieved by using, at each stage, one of the
equations to obtain an explicit expression for one of the remaining x_i in terms
of the others and then substituting for that x_i in all other remaining equations.
Eventually a single linear equation in just one of the unknowns is obtained. This
is then solved and the result is resubstituted in previously derived equations (in
reverse order) to establish values for all the x_i.
This method is probably very familiar to the reader, and so a specific example
to illustrate this alone seems unnecessary. Instead, we will show how a calculation
along such lines might be arranged so that the errors due to the inherent lack of
precision in any calculating equipment do not become excessive. This can happen
if the value of N is large and particularly (and we will merely state this) if the
elements A_{11}, A_{22}, . . . , A_{NN} on the leading diagonal of the matrix in (27.21) are
small compared with the off-diagonal elements.
The process to be described is known as Gaussian elimination with interchange.
The only, but essential, difference from straightforward elimination is that before
each variable x_i is eliminated, the equations are reordered to put the largest (in
modulus) remaining coefficient of x_i on the leading diagonal.
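Before turning to a worked example, here is a minimal Python sketch of how Gaussian elimination with interchange (partial pivoting) and back-substitution might be coded; the function name gauss_solve and all implementation details are our own illustration, not the book's.

    import numpy as np

    def gauss_solve(A, b):
        """Solve Ax = b by Gaussian elimination with row interchange
        (partial pivoting), followed by back-substitution."""
        A = np.array(A, dtype=float)
        b = np.array(b, dtype=float)
        N = len(b)
        for k in range(N - 1):
            # Interchange: bring the largest (in modulus) remaining
            # coefficient of x_k onto the leading diagonal.
            p = k + np.argmax(np.abs(A[k:, k]))
            A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
            # Eliminate x_k from every row below row k.
            for j in range(k + 1, N):
                m = A[j, k] / A[k, k]
                A[j, k:] -= m * A[k, k:]
                b[j] -= m * b[k]
        # Back-substitution, starting from the single-variable last equation.
        x = np.zeros(N)
        for i in range(N - 1, -1, -1):
            x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

Applied to the system (27.22) below, gauss_solve returns (10, 1, 2), matching the hand calculation.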
We will take as an illustration a straightforward three-variable example, which
can in fact be solved perfectly well without any interchange since, with simple
numbers and only two eliminations to perform, rounding errors do not have
a chance to build up. However, the important thing is that the reader should
appreciate how this would apply in (say) a computer program for a 1000-variable
case, perhaps with unforeseeable zeros or very small numbers appearing on the
leading diagonal.
Solve the simultaneous equations

    (a)      x_1 + 6x_2 − 4x_3 = 8,
    (b)     3x_1 − 20x_2 + x_3 = 12,        (27.22)
    (c)     −x_1 + 3x_2 + 5x_3 = 3.
Firstly, we interchange rows (a) and (b) to bring the term 3x_1 onto the leading diagonal. In
the following, we label the important equations (I), (II), (III), and the others alphabetically.
A general (i.e. variable) label will be denoted by j.
    (I)      3x_1 − 20x_2 + x_3 = 12,
    (d)       x_1 + 6x_2 − 4x_3 = 8,
    (e)      −x_1 + 3x_2 + 5x_3 = 3.
For (j) = (d) and (e), replace row (j) by

    row (j) − (a_{j1}/3) × row (I),

where a_{j1} is the coefficient of x_1 in row (j), to give the two equations

    (II)    (6 + 20/3) x_2 + (−4 − 1/3) x_3 = 8 − 12/3,
    (f)     (3 − 20/3) x_2 + (5 + 1/3) x_3 = 3 + 12/3.

Now |6 + 20/3| > |3 − 20/3| and so no interchange is required before the next elimination. To eliminate x_2, replace row (f) by

    row (f) − [(−11/3)/(38/3)] × row (II).

This gives

    (III)    [16/3 + (11/38) × (−13/3)] x_3 = 7 + (11/38) × 4.
Collecting together and tidying up the final equations, we have

    (I)      3x_1 − 20x_2 + x_3 = 12,
    (II)          38x_2 − 13x_3 = 12,
    (III)                    x_3 = 2.
Starting with (III) and working backwards, it is now a simple matter to obtain

    x_1 = 10,    x_2 = 1,    x_3 = 2.

27.3.2 Gauss–Seidel iteration
In the example considered in the previous subsection an explicit way of solving
a set of simultaneous equations was given, the accuracy obtainable being limited
only by the rounding errors in the calculating facilities available, and the calculation was planned to minimise these. However, in some situations it may be that
only an approximate solution is needed. If, for a large number of variables, this is
the case then an iterative method may produce a satisfactory degree of precision
with less calculation. Such a method, known as Gauss–Seidel iteration, is based
upon the following analysis.
The problem is again that of finding the components of the column matrix x
that satisfies
    Ax = b    (27.23)
when A and b are a given matrix and column matrix, respectively.
The steps of the Gauss–Seidel scheme are as follows.
(i) Rearrange the equations (usually by simple division on both sides of each equation) so that all diagonal elements of the new matrix C are unity, i.e. (27.23) becomes

    Cx = d,    (27.24)

where C = I − F, and F has zeros as its diagonal elements.

(ii) Step (i) produces

    Fx + d = Ix = x,    (27.25)

and this forms the basis of an iteration scheme,

    x_{n+1} = F x_n + d,    (27.26)

where x_n is the nth approximation to the required solution vector ξ.
(iii) To improve the convergence, the matrix F, which has zeros on its leading diagonal, can be written as the sum of two matrices L and U that have non-zero elements only below and above the leading diagonal, respectively:

    L_{ij} = F_{ij} if i > j, and 0 otherwise;
    U_{ij} = F_{ij} if i < j, and 0 otherwise.    (27.27)

This allows the latest values of the components of x to be used at each stage and an improved form of (27.26) to be obtained:

    x_{n+1} = L x_{n+1} + U x_n + d.    (27.28)

To see why this is possible, we note, for example, that when calculating, say, the fourth component of x_{n+1}, its first three components are already known, and, because of the structure of L, these are the only ones needed to evaluate the fourth component of L x_{n+1}.
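In code, scheme (27.28) amounts to sweeping through the components in order, always using the most recently updated values. A minimal Python sketch follows; it is our own illustration, and it works row by row on A directly, which after the rearrangement of step (i) is equivalent to forming L, U and d explicitly.

    import numpy as np

    def gauss_seidel(A, b, x0, n_iter=20):
        """Gauss-Seidel iteration for Ax = b, as in (27.28): the latest
        available component values are used at each stage."""
        A = np.array(A, dtype=float)
        b = np.array(b, dtype=float)
        x = np.array(x0, dtype=float)
        N = len(b)
        for _ in range(n_iter):
            for i in range(N):
                # x[:i] already holds new (n+1) values; x[i+1:] holds old ones.
                s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
                x[i] = (b[i] - s) / A[i, i]
        return x

With the coefficients of (27.29) and the initial guess (2, 2, 2), for example, successive sweeps of this function reproduce the rows of table 27.6.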
    n    x_1       x_2      x_3
    1    2         2        2
    2    4         0.1      1.34
    3    12.76     1.381    2.323
    4    9.008     0.867    1.881
    5    10.321    1.042    2.039
    6    9.902     0.987    1.988
    7    10.029    1.004    2.004

Table 27.6  Successive approximations to the solution of simultaneous equations (27.29) using the Gauss–Seidel iteration method.
Obtain an approximate solution to the simultaneous equations

     x_1 + 6x_2 − 4x_3 = 8,
    3x_1 − 20x_2 + x_3 = 12,        (27.29)
    −x_1 + 3x_2 + 5x_3 = 3.
These are the same equations as were solved in subsection 27.3.1.
Divide the equations by 1, −20 and 5, respectively, to give
    x_1 + 6x_2 − 4x_3 = 8,
    −0.15x_1 + x_2 − 0.05x_3 = −0.6,
    −0.2x_1 + 0.6x_2 + x_3 = 0.6.
Thus, set out in matrix form, (27.28) is, in this case, given by

    \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}_{n+1} =
    \begin{pmatrix} 0 & 0 & 0 \\ 0.15 & 0 & 0 \\ 0.2 & -0.6 & 0 \end{pmatrix}
    \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}_{n+1} +
    \begin{pmatrix} 0 & -6 & 4 \\ 0 & 0 & 0.05 \\ 0 & 0 & 0 \end{pmatrix}
    \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}_{n} +
    \begin{pmatrix} 8 \\ -0.6 \\ 0.6 \end{pmatrix}.
Suppose initially (n = 1) we guess each component to have the value 2. Then the successive
sets of values of the three quantities generated by this scheme are as shown in table 27.6.
Even with the rather poor initial guess, a close approximation to the exact result, x_1 = 10, x_2 = 1, x_3 = 2, is obtained in only a few iterations.

27.3.3 Tridiagonal matrices
Although for the solution of most matrix equations Ax = b the number of
operations required increases rapidly with the size N × N of the matrix (roughly
as N^3), for one particularly simple kind of matrix the computing required increases
only linearly with N. This type often occurs in physical situations in which objects
in an ordered set interact only with their nearest neighbours and is one in which
only the leading diagonal and the diagonals immediately above and below it
contain non-zero entries. Such matrices are known as tridiagonal matrices. They
may also be used in numerical approximations to the solutions of certain types
of differential equation.
A typical matrix equation involving a tridiagonal matrix is as follows:

    \begin{pmatrix}
    b_1 & c_1 &        &         &         & 0       \\
    a_2 & b_2 & c_2    &         &         &         \\
        & a_3 & b_3    & c_3     &         &         \\
        &     & \ddots & \ddots  & \ddots  &         \\
        &     &        & a_{N-1} & b_{N-1} & c_{N-1} \\
    0   &     &        &         & a_N     & b_N
    \end{pmatrix}
    \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_{N-1} \\ x_N \end{pmatrix}
    =
    \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_{N-1} \\ y_N \end{pmatrix}.    (27.30)
So as to keep the entries in the matrix as free from subscripts as possible, we
have used a, b and c to indicate subdiagonal, leading diagonal and superdiagonal
elements, respectively. As a consequence, we have had to change the notation for
the column matrix on the RHS from b to (say) y.
In such an equation the first and last rows involve x_1 and x_N, respectively, and
so the solution could be found by letting x_1 be unknown and then solving in turn
each row of the equation in terms of x_1, and finally determining x_1 by requiring
the next-to-last line to generate for x_N an equation compatible with that given
by the last line. However, if the matrix is large this becomes a very cumbersome
operation, and a simpler method is to assume a form of solution
    x_{i−1} = θ_{i−1} x_i + φ_{i−1}.    (27.31)
Since the ith line of the matrix equation is

    a_i x_{i−1} + b_i x_i + c_i x_{i+1} = y_i,

we must have, by substituting for x_{i−1}, that

    (a_i θ_{i−1} + b_i) x_i + c_i x_{i+1} = y_i − a_i φ_{i−1}.
This is also in the form of (27.31), but with i replaced by i + 1. Thus the recurrence
formulae for θ_i and φ_i are

    θ_i = −c_i / (a_i θ_{i−1} + b_i),    φ_i = (y_i − a_i φ_{i−1}) / (a_i θ_{i−1} + b_i),    (27.32)
provided the denominator does not vanish for any i. From the first of the matrix
equations it follows that θ_1 = −c_1/b_1 and φ_1 = y_1/b_1. The equations may now
be solved for the x_i in two stages without carrying through an unknown quantity.
First, all the θ_i and φ_i are generated using (27.32) and the values of θ_1 and φ_1,
then, as a second stage, (27.31) is used to evaluate the x_i, starting with x_N (= φ_N)
and working backwards.
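A Python sketch of this two-stage procedure follows; the function name tridiag_solve and the array conventions are our own choices (a, b and c hold the sub-, leading and superdiagonal entries, with a[0] and c[N−1] unused).

    import numpy as np

    def tridiag_solve(a, b, c, y):
        """Solve the tridiagonal system (27.30).
        Stage 1 generates theta_i, phi_i via (27.32); stage 2 applies (27.31)."""
        N = len(b)
        theta = np.zeros(N)
        phi = np.zeros(N)
        theta[0] = -c[0] / b[0]        # theta_1 = -c_1/b_1
        phi[0] = y[0] / b[0]           # phi_1 = y_1/b_1
        for i in range(1, N):
            den = a[i] * theta[i-1] + b[i]
            theta[i] = -c[i] / den if i < N - 1 else 0.0  # last row has no c_N
            phi[i] = (y[i] - a[i] * phi[i-1]) / den
        x = np.zeros(N)
        x[-1] = phi[-1]                # x_N = phi_N
        for i in range(N - 2, -1, -1):
            x[i] = theta[i] * x[i+1] + phi[i]  # (27.31), working backwards
        return x

Both sweeps are single loops over the rows, which is why the operation count for a tridiagonal system grows only linearly with N.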