Systems of Equations
Many practical problems can be reduced to solving a linear system of equations formulated as Ax = b. This chapter discusses the computational issues in solving Ax = b.
A Linear System of Equations
- Vector and Matrix Norms
- Matrix Condition Number ($\mathrm{Cond}(A) = \|A\|\,\|A^{-1}\|$)
- Sensitivity and Error Bounds

Direct Solution for Ax = b
- LU-decomposition by Gaussian Elimination
- Gaussian Elimination with Partial Pivoting
- Crout Algorithm for A = LU
- Cholesky Algorithm for $A = LL^t$ (A is positive definite)

Iterative Solutions
- Jacobi method
- Gauss-Seidel method
- Other methods

Applications
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\ \ \vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}$$
$$\text{(A)}\ \begin{cases} x + y = 2 \\ x + y = 3 \end{cases} \qquad
\text{(B)}\ \begin{cases} x - y = 1 \\ 2x + y = 5 \end{cases} \qquad
\text{(C)}\ \begin{cases} x + y = 2 \\ 2x + 2y = 4 \end{cases}$$
(A) has no solution, (B) has a unique solution, (C) has infinitely many solutions.
$$\text{(D)}\ \begin{cases} x + 2y + z = 1 \\ 2x + 4y + 2z = 2 \end{cases} \qquad
\text{(E)}\ \begin{cases} x + 2y + z = 5 \\ 2x - y + z = 3 \end{cases}$$
Hölder norm (p-norm): $\|x\|_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p}$ for $p \ge 1$.

(p=1) $\|x\|_1 = \sum_{i=1}^{n} |x_i|$

(p=2) $\|x\|_2 = \left(\sum_{i=1}^{n} |x_i|^2\right)^{1/2}$ (Euclidean distance)

Frobenius norm: $\|A\|_F = \left(\sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}^2\right)^{1/2}$

(1) $\|A\|_1 = \max_{1\le j\le n} \sum_{i=1}^{m} |a_{ij}|$ (maximum column sum)

(2) $\|A\|_\infty = \max_{1\le i\le m} \sum_{j=1}^{n} |a_{ij}|$ (maximum row sum)

(3) Let $A \in \mathbb{R}^{n\times n}$ be real symmetric; then $\|A\|_2 = \max_{1\le i\le n} |\lambda_i|$, where the $\lambda_i \in \lambda(A)$ are the eigenvalues of $A$.
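The vector p-norms and the maximum-row-sum matrix norm above can be sketched in Python with plain lists (this snippet is illustrative, not part of the original notes):

```python
# Minimal sketch of the norms defined above, using only the standard library.

def vector_norm(x, p):
    """Holder p-norm ||x||_p = (sum |x_i|^p)^(1/p), for p >= 1."""
    if p == float("inf"):
        return max(abs(xi) for xi in x)
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def frobenius_norm(A):
    """||A||_F = (sum_i sum_j a_ij^2)^(1/2)."""
    return sum(a * a for row in A for a in row) ** 0.5

def inf_norm(A):
    """||A||_inf = max_i sum_j |a_ij| (maximum row sum)."""
    return max(sum(abs(a) for a in row) for row in A)

x = [3.0, -4.0]
print(vector_norm(x, 1))   # 7.0
print(vector_norm(x, 2))   # 5.0  (Euclidean length)

A = [[2.0, 1.0, 1.0], [1.0, 0.0, 1.0], [3.0, 1.0, 4.0]]
print(inf_norm(A))         # 8.0  (row sums are 4, 2, 8)
```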
Example:
$$A = \begin{bmatrix} 2 & 1 & 1 \\ 1 & 0 & 1 \\ 3 & 1 & 4 \end{bmatrix}, \qquad
A^{-1} = \begin{bmatrix} 0.5 & 1.5 & -0.5 \\ 0.5 & -2.5 & 0.5 \\ -0.5 & -0.5 & 0.5 \end{bmatrix}$$
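Assuming the example pair above is $A = [2\ 1\ 1;\ 1\ 0\ 1;\ 3\ 1\ 4]$ with the inverse shown, the pair can be verified exactly with rational arithmetic, and $\mathrm{Cond}_\infty(A) = \|A\|_\infty \|A^{-1}\|_\infty = 8 \times 3.5 = 28$ (a sketch, not from the original notes):

```python
# Verify A * A^{-1} = I exactly and compute Cond_inf(A) = ||A||_inf * ||A^{-1}||_inf.
from fractions import Fraction as F

A = [[F(2), F(1), F(1)],
     [F(1), F(0), F(1)],
     [F(3), F(1), F(4)]]
Ainv = [[F(1, 2),  F(3, 2),  F(-1, 2)],
        [F(1, 2),  F(-5, 2), F(1, 2)],
        [F(-1, 2), F(-1, 2), F(1, 2)]]

prod = [[sum(A[i][k] * Ainv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # exact identity

inf_norm = lambda M: max(sum(abs(a) for a in row) for row in M)
cond = inf_norm(A) * inf_norm(Ainv)
print(cond)  # 28
```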
Sensitivity: let $Ax = b$ and $A(x + \Delta x) = b + \Delta b$. Then $A\,\Delta x = \Delta b$, so $\Delta x = A^{-1}\Delta b$. Since
$$\|b\| = \|Ax\| \le \|A\|\,\|x\| \quad \text{and} \quad \|\Delta x\| = \|A^{-1}\Delta b\| \le \|A^{-1}\|\,\|\Delta b\|,$$
it follows that
$$\frac{\|\Delta x\|}{\|x\|} \le \|A\|\,\|A^{-1}\|\,\frac{\|\Delta b\|}{\|b\|} = \mathrm{Cond}(A)\,\frac{\|\Delta b\|}{\|b\|}.$$
If $A$ is also perturbed by $E$, i.e., $(A + E)(x + \Delta x) = b + \Delta b$, then to first order
$$\frac{\|\Delta x\|}{\|x\|} \lesssim \mathrm{Cond}(A)\left(\frac{\|E\|}{\|A\|} + \frac{\|\Delta b\|}{\|b\|}\right).$$
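The bound $\|\Delta x\|/\|x\| \le \mathrm{Cond}(A)\,\|\Delta b\|/\|b\|$ can be checked numerically. A sketch using the 2×2 system that appears later in the iterative-methods examples (assuming its reconstruction $A = [10\ 1;\ 2\ 10]$, $b = [11;12]$); the perturbation `db` is chosen here purely for illustration:

```python
# Check ||dx||/||x|| <= Cond_inf(A) * ||db||/||b|| on a 2x2 system.

def solve2(A, b):
    """Cramer's rule for a 2x2 system."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

inf_v = lambda v: max(abs(t) for t in v)
inf_m = lambda M: max(sum(abs(a) for a in row) for row in M)

A    = [[10.0, 1.0], [2.0, 10.0]]
Ainv = [[10.0 / 98, -1.0 / 98], [-2.0 / 98, 10.0 / 98]]  # det(A) = 98
cond = inf_m(A) * inf_m(Ainv)                            # Cond_inf(A)

b  = [11.0, 12.0]
db = [0.1, 0.1]                    # illustrative perturbation of b
x  = solve2(A, b)                  # exact solution (1, 1)
xp = solve2(A, [b[0] + db[0], b[1] + db[1]])
dx = [xp[0] - x[0], xp[1] - x[1]]

lhs = inf_v(dx) / inf_v(x)
rhs = cond * inf_v(db) / inf_v(b)
assert lhs <= rhs                  # the error bound holds
```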
Consider the system
$$\begin{aligned}
2x + y + z &= 5 \\
4x - 6y \phantom{{}+ z} &= -2 \\
-2x + 7y + 2z &= 9
\end{aligned}$$
A matrix representation Ax = b, or
$$\begin{bmatrix} 2 & 1 & 1 \\ 4 & -6 & 0 \\ -2 & 7 & 2 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix} =
\begin{bmatrix} 5 \\ -2 \\ 9 \end{bmatrix}$$
In MATLAB:
A = [2, 1, 1; 4, -6, 0; -2, 7, 2];
b = [5; -2; 9];
Elementary matrices:
$$E_1 = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \text{ (interchange rows 1 and 2)}, \quad
E_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix} \text{ (multiply row 3 by 2)}, \quad
E_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} \text{ (add row 1 to row 3)}$$
Example: with
$$A = \begin{bmatrix} 2 & 1 & 1 \\ 4 & -6 & 0 \\ -2 & 7 & 2 \end{bmatrix},$$
$$E_1 A = \begin{bmatrix} 4 & -6 & 0 \\ 2 & 1 & 1 \\ -2 & 7 & 2 \end{bmatrix}, \quad
E_2 A = \begin{bmatrix} 2 & 1 & 1 \\ 4 & -6 & 0 \\ -4 & 14 & 4 \end{bmatrix}, \quad
E_3 A = \begin{bmatrix} 2 & 1 & 1 \\ 4 & -6 & 0 \\ 0 & 8 & 3 \end{bmatrix}$$
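The effect of the three elementary matrices can be reproduced with a plain matrix multiply (a sketch, assuming $A = [2\ 1\ 1;\ 4\ {-6}\ 0;\ {-2}\ 7\ 2]$ as in the example above):

```python
# Apply the three elementary row operations as left-multiplications E*A.

def matmul(P, Q):
    n, m, k = len(P), len(Q[0]), len(Q)
    return [[sum(P[i][t] * Q[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

A  = [[2, 1, 1], [4, -6, 0], [-2, 7, 2]]
E1 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # interchange rows 1 and 2
E2 = [[1, 0, 0], [0, 1, 0], [0, 0, 2]]   # multiply row 3 by 2
E3 = [[1, 0, 0], [0, 1, 0], [1, 0, 1]]   # add row 1 to row 3

print(matmul(E1, A))  # [[4, -6, 0], [2, 1, 1], [-2, 7, 2]]
print(matmul(E2, A))  # [[2, 1, 1], [4, -6, 0], [-4, 14, 4]]
print(matmul(E3, A))  # [[2, 1, 1], [4, -6, 0], [0, 8, 3]]
```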
Let
$$L_1 = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad
L_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}, \quad
L_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}.$$
Then
$$L_3 L_2 L_1 A = \begin{bmatrix} 2 & 1 & 1 \\ 0 & -8 & -2 \\ 0 & 0 & 1 \end{bmatrix} = U \ \text{(upper triangular)}.$$
LU-Decomposition
If a matrix A can be decomposed into the product of a unit lower-triangular matrix L and an upper-triangular matrix U, then the linear system Ax = b can be written as LUx = b. The problem is reduced to solving two simpler triangular systems, Ly = b and Ux = y, by forward and back substitution.
For the example above, with
$$A = \begin{bmatrix} 2 & 1 & 1 \\ 4 & -6 & 0 \\ -2 & 7 & 2 \end{bmatrix}$$
and $L_1, L_2, L_3$ as before, $L_3 L_2 L_1 A = U$, so
$$A = L_1^{-1} L_2^{-1} L_3^{-1} U = LU, \quad \text{where} \quad
L = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & -1 & 1 \end{bmatrix}, \quad
U = \begin{bmatrix} 2 & 1 & 1 \\ 0 & -8 & -2 \\ 0 & 0 & 1 \end{bmatrix}.$$
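The factorization can be checked by multiplying L and U back together (a sketch, assuming the reconstructed factors $L = [1\ 0\ 0;\ 2\ 1\ 0;\ {-1}\ {-1}\ 1]$ and $U = [2\ 1\ 1;\ 0\ {-8}\ {-2};\ 0\ 0\ 1]$):

```python
# Verify that L * U reproduces the original matrix A.

def matmul(P, Q):
    n, m, k = len(P), len(Q[0]), len(Q)
    return [[sum(P[i][t] * Q[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

L = [[1, 0, 0], [2, 1, 0], [-1, -1, 1]]
U = [[2, 1, 1], [0, -8, -2], [0, 0, 1]]

print(matmul(L, U))  # [[2, 1, 1], [4, -6, 0], [-2, 7, 2]]
```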
With $b = (5, -2, 9)^t$: forward substitution $Ly = b$ gives $y = (5, -12, 2)^t$, and back substitution $Ux = y$ gives the solution $x = (1, 1, 2)^t$.
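The two triangular solves can be sketched in Python (assuming the reconstructed L, U, and b from the example above):

```python
# Solve LUx = b: forward substitution for Ly = b, then back substitution for Ux = y.

def forward_sub(L, b):
    """Solve Ly = b for unit lower-triangular L."""
    y = []
    for i in range(len(b)):
        y.append(b[i] - sum(L[i][j] * y[j] for j in range(i)))  # L[i][i] = 1
    return y

def back_sub(U, y):
    """Solve Ux = y for upper-triangular U."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

L = [[1, 0, 0], [2, 1, 0], [-1, -1, 1]]
U = [[2, 1, 1], [0, -8, -2], [0, 0, 1]]
b = [5, -2, 9]

y = forward_sub(L, b)   # [5, -12, 2]
x = back_sub(U, y)      # [1.0, 1.0, 2.0]
```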
Operation counts: forward and back substitution each require about $\frac{n(n-1)}{2}$ multiplications, while the elimination itself requires $\frac{n(n-1)(2n-1)}{6}$ multiplications and $\frac{n(n-1)(2n-1)}{6}$ additions, i.e., roughly $n^3/3$ of each.
Gaussian elimination with partial pivoting chooses the largest available pivot in each column and records the row interchanges with permutation matrices, giving a factorization of a row-permuted matrix, e.g.
$$P_{23} P_{13} A = LU,$$
where $P_{ij}$ interchanges rows $i$ and $j$.
Jacobi Method
Given $Ax = b$, write $A = C - M$ with $C = \mathrm{diag}(a_{11}, \ldots, a_{nn})$ and $M = C - A$. The iteration is
$$x^{(k+1)} = Bx^{(k)} + c, \qquad B = C^{-1}M, \qquad c = C^{-1}b.$$
It converges if $\|B\|_\infty = \max_{1\le i\le n} \sum_{j=1}^{n} |b_{ij}| < 1$; since $b_{ij} = -a_{ij}/a_{ii}$ for $j \ne i$ and $b_{ii} = 0$, this holds when
$$\sum_{j\ne i} \frac{|a_{ij}|}{|a_{ii}|} < 1 \quad \text{for each } i,$$
i.e., when $A$ is diagonally dominant.

Example: let
$$A = \begin{bmatrix} 10 & 1 \\ 2 & 10 \end{bmatrix}, \qquad b = \begin{bmatrix} 11 \\ 12 \end{bmatrix}.$$
Then $C = \mathrm{diag}(10, 10)$, $M = C - A = \begin{bmatrix} 0 & -1 \\ -2 & 0 \end{bmatrix}$, and
$$B = C^{-1}M = \begin{bmatrix} 0 & -0.1 \\ -0.2 & 0 \end{bmatrix}, \qquad c = C^{-1}b = \begin{bmatrix} 1.1 \\ 1.2 \end{bmatrix}.$$
Starting from $x^{(0)} = (0, 0)^t$:
$$x^{(1)} = Bx^{(0)} + c = (1.1,\ 1.2)^t, \quad x^{(2)} = (0.98,\ 0.98)^t, \quad x^{(3)} = (1.002,\ 1.004)^t,$$
converging to the exact solution $x = (1, 1)^t$.
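The Jacobi iterates above can be reproduced directly from the rows of A (a sketch, assuming the reconstructed example $A = [10\ 1;\ 2\ 10]$, $b = [11; 12]$):

```python
# Jacobi iteration: every component of x^(k+1) uses only x^(k).

A = [[10.0, 1.0], [2.0, 10.0]]
b = [11.0, 12.0]

def jacobi_step(x):
    return [(b[i] - sum(A[i][j] * x[j] for j in range(2) if j != i)) / A[i][i]
            for i in range(2)]

x = [0.0, 0.0]
for k in range(3):
    x = jacobi_step(x)
    print(k + 1, x)
# iterates (up to rounding): (1.1, 1.2), (0.98, 0.98), (1.002, 1.004)
```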
Gauss-Seidel Method
Given $Ax = b$, write $A = C - M$, where $C$ is nonsingular and easily invertible.
Jacobi: $C = \mathrm{diag}(a_{11}, a_{22}, \ldots, a_{nn})$, $M = C - A$.
Gauss-Seidel: write $A = (D - L) - U$, where $D$ is the diagonal, $-L$ the strictly lower part, and $-U$ the strictly upper part of $A$; then $C = D - L$ and $M = U$. The iteration is
$$x_1^{(k+1)} = \frac{1}{a_{11}}\left(-\sum_{j=2}^{n} a_{1j} x_j^{(k)} + b_1\right),$$
$$x_i^{(k+1)} = \frac{1}{a_{ii}}\left(-\sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} + b_i\right), \quad \text{for } i = 2, 3, \ldots, n.$$
The difference between Jacobi and Gauss-Seidel iteration is that in the latter case, one uses the coordinates of $x^{(k+1)}$ as soon as they are calculated, rather than in the next iteration. The program for Gauss-Seidel is much simpler.
Theorem: If $A$ is diagonally dominant, i.e., $|a_{ii}| > \sum_{j\ne i} |a_{ij}|$ for $1 \le i \le n$, then the Gauss-Seidel iteration converges to a solution of $Ax = b$.
Proof: Denote $A = (D - L) - U$ and let
$$\alpha_j = \sum_{i=1}^{j-1} |a_{ji}|, \qquad \beta_j = \sum_{i=j+1}^{n} |a_{ji}|, \qquad R_j = \frac{\beta_j}{|a_{jj}| - \alpha_j}.$$
By diagonal dominance, $|a_{jj}| > \alpha_j + \beta_j$, so $0 \le R_j < 1$ for $1 \le j \le n$; let $R = \max_{1\le j\le n} R_j < 1$.
Let $y = (D - L)^{-1} U x$ be one Gauss-Seidel update applied to an error vector $x$, and choose $k$ so that $\|y\|_\infty = |y_k|$. From
$$y_k = \frac{1}{a_{kk}}\left(-\sum_{i=1}^{k-1} a_{ki} y_i - \sum_{i=k+1}^{n} a_{ki} x_i\right)$$
it follows that
$$\|y\|_\infty = |y_k| \le \frac{1}{|a_{kk}|}\left(\alpha_k \|y\|_\infty + \beta_k \|x\|_\infty\right),$$
hence $(|a_{kk}| - \alpha_k)\|y\|_\infty \le \beta_k \|x\|_\infty$ and
$$\frac{\|y\|_\infty}{\|x\|_\infty} \le \frac{\beta_k}{|a_{kk}| - \alpha_k} = R_k \le R < 1.$$
Thus each iteration reduces the error by at least the factor $R < 1$, and the iteration converges. $\square$
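The contraction factors $R_j$ from the proof are easy to evaluate; a sketch for the example matrix used in this section (assuming its reconstruction $A = [10\ 1;\ 2\ 10]$):

```python
# Compute R_j = beta_j / (|a_jj| - alpha_j) and the bound R = max_j R_j.

A = [[10.0, 1.0], [2.0, 10.0]]
n = len(A)
R = []
for j in range(n):
    alpha = sum(abs(A[j][i]) for i in range(j))          # strictly lower part of row j
    beta  = sum(abs(A[j][i]) for i in range(j + 1, n))   # strictly upper part of row j
    R.append(beta / (abs(A[j][j]) - alpha))
print(R, max(R))  # [0.1, 0.0] 0.1
```

So the Gauss-Seidel error shrinks by at least a factor of 0.1 per sweep on this system, consistent with the rapid convergence seen in the example below.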
Example: for
$$A = \begin{bmatrix} 10 & 1 \\ 2 & 10 \end{bmatrix}, \qquad b = \begin{bmatrix} 11 \\ 12 \end{bmatrix}, \qquad x^{(0)} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$
the Gauss-Seidel iteration gives
$$x_1^{(1)} = \tfrac{1}{10}\left(-a_{12} x_2^{(0)} + b_1\right) = 1.1, \qquad
x_2^{(1)} = \tfrac{1}{10}\left(-a_{21} x_1^{(1)} + b_2\right) = 0.98,$$
$$x_1^{(2)} = \tfrac{1}{10}\left(-a_{12} x_2^{(1)} + b_1\right) = 1.002, \qquad
x_2^{(2)} = \tfrac{1}{10}\left(-a_{21} x_1^{(2)} + b_2\right) = 0.9996.$$
Thus the iterates converge to the exact solution $x = (1, 1)^t$, faster than the Jacobi iteration on the same system.
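The Gauss-Seidel sweep can be sketched with the same example data, overwriting each coordinate as soon as it is computed (assuming the reconstruction $A = [10\ 1;\ 2\ 10]$, $b = [11; 12]$):

```python
# Gauss-Seidel: each new coordinate is used immediately within the sweep.

A = [[10.0, 1.0], [2.0, 10.0]]
b = [11.0, 12.0]

x = [0.0, 0.0]
history = []
for k in range(2):                 # two full sweeps
    for i in range(2):
        s = sum(A[i][j] * x[j] for j in range(2) if j != i)
        x[i] = (b[i] - s) / A[i][i]   # overwrite in place
        history.append(x[i])
print(history)  # approximately [1.1, 0.98, 1.002, 0.9996]
```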