
Convergence Criteria

Iterative Solution Methods
Starts with an initial approximation for the solution vector ($x^0$)
At each iteration, the $x$ vector is updated using the system $Ax = b$
The matrix $A$ is not changed during the iterations, so sparsity is preserved
Each iteration involves a matrix-vector product
If $A$ is sparse, this product is done efficiently
Iterative solution procedure
Write the system $Ax = b$ in an equivalent form
$x = Ex + f$ (like $x = g(x)$ for fixed-point iteration)
Starting with $x^0$, generate a sequence of approximations $\{x^k\}$ iteratively by
$$x^{k+1} = Ex^k + f$$
The representation of $E$ and $f$ depends on the type of method used
For every method, $E$ and $f$ are obtained from $A$ and $b$, but in a different way
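As a minimal sketch of this generic scheme (the function name, tolerance, and iteration cap are illustrative choices, not from the slides), the loop below iterates $x^{k+1} = Ex^k + f$ until successive iterates stop changing; every method that follows only changes how $E$ and $f$ are built from $A$ and $b$:

```python
import numpy as np

def fixed_point_solve(E, f, x0, tol=1e-8, max_iter=500):
    """Iterate x_{k+1} = E x_k + f until the update is below tol."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = E @ x + f                  # one iteration: a matrix-vector product
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1            # converged
        x = x_new
    return x, max_iter                     # may not have converged
```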
Convergence
As $k \to \infty$, the sequence $\{x^k\}$ converges to the solution vector under some conditions on the $E$ matrix
This imposes different conditions on the $A$ matrix for different methods
For the same $A$ matrix, one method may converge while another diverges
Therefore, for each method the relation between $A$ and $E$ should be found to decide on convergence
Different Iterative Methods
Jacobi Iteration
Gauss-Seidel Iteration
Successive Over-Relaxation (SOR)
SOR is a method used to accelerate the convergence
Gauss-Seidel iteration is a special case of the SOR method
Jacobi iteration
Write each equation of the system $Ax = b$ solved for its diagonal unknown:
$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n &= b_2 \\
&\;\;\vdots \\
a_{n1}x_1 + a_{n2}x_2 + \cdots + a_{nn}x_n &= b_n
\end{aligned}$$
Choose an initial vector $x^0 = (x_1^0, x_2^0, \ldots, x_n^0)^T$ and compute
$$\begin{aligned}
x_1^1 &= \frac{1}{a_{11}}\left(b_1 - a_{12}x_2^0 - \cdots - a_{1n}x_n^0\right) \\
x_2^1 &= \frac{1}{a_{22}}\left(b_2 - a_{21}x_1^0 - a_{23}x_3^0 - \cdots - a_{2n}x_n^0\right) \\
&\;\;\vdots \\
x_n^1 &= \frac{1}{a_{nn}}\left(b_n - a_{n1}x_1^0 - \cdots - a_{n,n-1}x_{n-1}^0\right)
\end{aligned}$$
In general,
$$x_i^{k+1} = \frac{1}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^k - \sum_{j=i+1}^{n} a_{ij}x_j^k\right), \qquad i = 1, \ldots, n$$
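A direct transcription of this update into Python (a sketch under my own naming and stopping choices; the residual test anticipates the stopping criterion discussed later):

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=500):
    """Component-form Jacobi: every x_i^{k+1} uses only step-k values."""
    n = len(b)
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            # sum over all j != i, using the OLD iterate x^k throughout
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        x = x_new
        if np.linalg.norm(b - A @ x) < tol:   # stop when the residual is small
            break
    return x
```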
$x^{k+1} = Ex^k + f$ iteration for Jacobi method
$A$ can be written as $A = L + D + U$ (a splitting, not a decomposition):
$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
= \begin{pmatrix} 0 & 0 & 0 \\ a_{21} & 0 & 0 \\ a_{31} & a_{32} & 0 \end{pmatrix}
+ \begin{pmatrix} a_{11} & 0 & 0 \\ 0 & a_{22} & 0 \\ 0 & 0 & a_{33} \end{pmatrix}
+ \begin{pmatrix} 0 & a_{12} & a_{13} \\ 0 & 0 & a_{23} \\ 0 & 0 & 0 \end{pmatrix}$$
The component form
$$x_i^{k+1} = \frac{1}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^k - \sum_{j=i+1}^{n} a_{ij}x_j^k\right)$$
is equivalent to
$$Ax = b \;\Rightarrow\; (L + D + U)x = b \;\Rightarrow\; Dx^{k+1} = -Lx^k - Ux^k + b$$
so that
$$x^{k+1} = -D^{-1}(L + U)x^k + D^{-1}b, \qquad E = -D^{-1}(L + U), \quad f = D^{-1}b$$
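The splitting itself is easy to set up with numpy (a sketch; np.tril/np.triu with offsets give the strictly lower and upper parts):

```python
import numpy as np

def jacobi_split(A, b):
    """Build E = -D^{-1}(L + U) and f = D^{-1} b from the splitting A = L + D + U."""
    L = np.tril(A, k=-1)                # strictly lower triangular part of A
    U = np.triu(A, k=1)                 # strictly upper triangular part of A
    D_inv = np.diag(1.0 / np.diag(A))   # inverse of the diagonal part
    E = -D_inv @ (L + U)
    f = D_inv @ b
    return E, f
```

Feeding this $E$ and $f$ into the generic fixed-point loop sketched earlier reproduces the component-form Jacobi iteration.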
Gauss-Seidel (GS) iteration
Same system $Ax = b$ and initial vector $x^0$ as before, but each new component uses the latest update:
$$\begin{aligned}
x_1^{k+1} &= \frac{1}{a_{11}}\left(b_1 - a_{12}x_2^k - \cdots - a_{1n}x_n^k\right) \\
x_2^{k+1} &= \frac{1}{a_{22}}\left(b_2 - a_{21}x_1^{k+1} - a_{23}x_3^k - \cdots - a_{2n}x_n^k\right) \\
&\;\;\vdots \\
x_n^{k+1} &= \frac{1}{a_{nn}}\left(b_n - a_{n1}x_1^{k+1} - \cdots - a_{n,n-1}x_{n-1}^{k+1}\right)
\end{aligned}$$
In general,
$$x_i^{k+1} = \frac{1}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{k+1} - \sum_{j=i+1}^{n} a_{ij}x_j^k\right), \qquad i = 1, \ldots, n$$
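In code, the only change from the Jacobi sketch is updating x in place, so each $x_j^{k+1}$ is used as soon as it is available (again a sketch with my own naming):

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=500):
    """Gauss-Seidel: overwrite x in place so the latest updates are used."""
    n = len(b)
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        for i in range(n):
            # x[j] for j < i already holds step k+1 values; j > i still holds step k
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x
```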
$x^{(k+1)} = Ex^{(k)} + f$ iteration for Gauss-Seidel
$$x_i^{k+1} = \frac{1}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{k+1} - \sum_{j=i+1}^{n} a_{ij}x_j^k\right)$$
In matrix form, keeping the $-Lx^{k+1}$ terms on the left:
$$Ax = b \;\Rightarrow\; (L + D + U)x = b \;\Rightarrow\; (D + L)x^{k+1} = -Ux^k + b$$
$$x^{k+1} = -(D + L)^{-1}Ux^k + (D + L)^{-1}b, \qquad E = -(D + L)^{-1}U, \quad f = (D + L)^{-1}b$$
Comparison
Gauss-Seidel iteration usually converges more rapidly than Jacobi iteration, since it uses the latest updates
But there are cases in which Jacobi iteration converges while Gauss-Seidel does not
To accelerate the Gauss-Seidel method even further, the successive over-relaxation method can be used
Successive Over-Relaxation (SOR) Method
GS iteration can also be written as the old value plus a correction term:
$$x_i^{k+1} = x_i^k + \underbrace{\frac{1}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{k+1} - \sum_{j=i}^{n} a_{ij}x_j^k\right)}_{\text{correction term } \delta_i^k}$$
(the second sum starts at $j = i$, which cancels the $x_i^k$ outside the parentheses)
The iterates proceed $x_i^0 \to x_i^1 \to x_i^2 \to x_i^3 \to \cdots$ with corrections $\delta_i^0, \delta_i^1, \delta_i^2, \ldots$
Multiplying each correction by a factor $\omega > 1$ (giving $\omega\delta_i^0, \omega\delta_i^1, \omega\delta_i^2, \ldots$) yields faster convergence
SOR
$$\begin{aligned}
x_i^{k+1} &= x_i^k + \frac{\omega}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{k+1} - \sum_{j=i}^{n} a_{ij}x_j^k\right) \\
&= (1 - \omega)\,x_i^k + \frac{\omega}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{k+1} - \sum_{j=i+1}^{n} a_{ij}x_j^k\right)
\end{aligned}$$
$1 < \omega < 2$: over-relaxation (faster convergence)
$0 < \omega < 1$: under-relaxation (slower convergence)
There is an optimum value for $\omega$
Find it by trial and error (usually around 1.6)
$x^{(k+1)} = Ex^{(k)} + f$ iteration for SOR
$$x_i^{k+1} = (1 - \omega)\,x_i^k + \frac{\omega}{a_{ii}}\left(b_i - \sum_{j=1}^{i-1} a_{ij}x_j^{k+1} - \sum_{j=i+1}^{n} a_{ij}x_j^k\right)$$
In matrix form:
$$Dx^{k+1} = (1 - \omega)Dx^k + \omega b - \omega Lx^{k+1} - \omega Ux^k$$
$$(D + \omega L)x^{k+1} = \left[(1 - \omega)D - \omega U\right]x^k + \omega b$$
$$E = (D + \omega L)^{-1}\left[(1 - \omega)D - \omega U\right], \qquad f = \omega(D + \omega L)^{-1}b$$
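A component-form SOR sketch (the interface mirrors the earlier sketches; omega = 1 recovers Gauss-Seidel exactly):

```python
import numpy as np

def sor(A, b, x0, omega=1.6, tol=1e-8, max_iter=500):
    """SOR: blend the old value with the Gauss-Seidel update, weighted by omega."""
    n = len(b)
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        for i in range(n):
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            gs_value = (b[i] - s) / A[i, i]                # plain Gauss-Seidel value
            x[i] = (1.0 - omega) * x[i] + omega * gs_value
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x
```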
The Conjugate Gradient Method
$$\begin{aligned}
d_0 &= r_0 = b - Ax_0 \\
\alpha_i &= \frac{r_i^T r_i}{d_i^T A d_i} \\
x_{i+1} &= x_i + \alpha_i d_i \\
r_{i+1} &= r_i - \alpha_i A d_i \\
\beta_{i+1} &= \frac{r_{i+1}^T r_{i+1}}{r_i^T r_i} \\
d_{i+1} &= r_{i+1} + \beta_{i+1} d_i
\end{aligned}$$
Converges if $A$ is a symmetric positive definite matrix
Convergence is faster
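A sketch of this iteration (standard conjugate gradient; it assumes $A$ is symmetric positive definite, and the tolerance is my own choice):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Conjugate gradient for a symmetric positive definite A."""
    x = np.asarray(x0, dtype=float)
    r = b - A @ x                          # initial residual
    d = r.copy()                           # initial search direction d_0 = r_0
    for _ in range(len(b)):                # at most n steps in exact arithmetic
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)         # step length along d
        x = x + alpha * d
        r_new = r - alpha * Ad             # updated residual
        if np.linalg.norm(r_new) < tol:
            break
        beta = (r_new @ r_new) / (r @ r)
        d = r_new + beta * d               # next A-conjugate direction
        r = r_new
    return x
```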
Convergence of Iterative Methods
Define the solution vector as $\bar{x}$ (it satisfies $\bar{x} = E\bar{x} + f$)
Define an error vector as $e^k$, so that $x^k = \bar{x} + e^k$
Substitute this into $x^{k+1} = Ex^k + f$:
$$\bar{x} + e^{k+1} = E(\bar{x} + e^k) + f = E\bar{x} + f + Ee^k = \bar{x} + Ee^k$$
$$e^{k+1} = Ee^k = EEe^{k-1} = EEEe^{k-2} = \cdots = E^{k+1}e^0$$
Convergence of Iterative Methods
$$e^{k+1} = E^{k+1}e^0 \quad\Rightarrow\quad \|e^{k+1}\| \le \|E^{k+1}\|\,\|e^0\|$$
Convergence condition:
$$\lim_{k \to \infty} e^{k+1} = 0 \quad\text{if}\quad \lim_{k \to \infty} E^{k+1} = 0 \quad\text{(powers of the iteration matrix)}$$
The iterative method will converge for any initial iteration vector if this condition is satisfied
Norm of a vector
A vector norm should satisfy these conditions:
$$\|x\| > 0 \text{ for every nonzero vector } x$$
$$\|x\| = 0 \text{ iff } x \text{ is the zero vector}$$
$$\|\alpha x\| = |\alpha|\,\|x\| \text{ for any scalar } \alpha$$
$$\|x + y\| \le \|x\| + \|y\|$$
Vector norms can be defined in different forms as long as the norm definition satisfies these conditions
Commonly used vector norms
Sum norm (or $\ell_1$ norm): $\|x\|_1 = |x_1| + |x_2| + \cdots + |x_n|$
Euclidean norm (or $\ell_2$ norm): $\|x\|_2 = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}$
Maximum norm (or $\ell_\infty$ norm): $\|x\|_\infty = \max_i |x_i|$
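For instance, for $x = (1, -2, 3)$ (my own example vector), the three norms computed with numpy:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
print(np.linalg.norm(x, 1))        # sum norm:       |1| + |-2| + |3| = 6.0
print(np.linalg.norm(x, 2))        # Euclidean norm: sqrt(1 + 4 + 9) ≈ 3.7417
print(np.linalg.norm(x, np.inf))   # maximum norm:   max(1, 2, 3) = 3.0
```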
Norm of a matrix
A matrix norm should satisfy these conditions:
$$\|A\| > 0, \qquad \|A\| = 0 \text{ iff } A \text{ is the zero matrix}$$
$$\|\alpha A\| = |\alpha|\,\|A\| \text{ for any scalar } \alpha$$
$$\|A + B\| \le \|A\| + \|B\|$$
Important identity: $\|Ax\| \le \|A\|\,\|x\|$, where $x$ is a vector
Commonly used matrix norms
Maximum column-sum norm (or 1 norm):
$$\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}|$$
Spectral norm (or 2 norm):
$$\|A\|_2 = \sqrt{\text{maximum eigenvalue of } A^T A}$$
Maximum row-sum norm (or $\infty$ norm):
$$\|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}|$$
Example
Compute the 1 and $\infty$ norms of the matrix
$$A = \begin{pmatrix} 3 & 9 & 5 \\ 7 & 2 & 4 \\ 6 & 8 & 1 \end{pmatrix}$$
Row sums of absolute values: 17, 13, 15, so $\|A\|_\infty = 17$
Column sums of absolute values: 16, 19, 10, so $\|A\|_1 = 19$
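The same computation in numpy confirms the hand calculation:

```python
import numpy as np

A = np.array([[3.0, 9.0, 5.0],
              [7.0, 2.0, 4.0],
              [6.0, 8.0, 1.0]])
print(np.linalg.norm(A, 1))        # maximum column sum: 19.0
print(np.linalg.norm(A, np.inf))   # maximum row sum:    17.0
```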
Convergence condition
$$\lim_{k \to \infty} e^{k+1} = 0 \quad\text{if}\quad \lim_{k \to \infty} E^{k+1} = 0$$
Express $E$ in terms of its modal matrix $P$ and $\Lambda$
$\Lambda$: diagonal matrix with the eigenvalues of $E$ on the diagonal
$$E = P\Lambda P^{-1}$$
$$E^{k+1} = \left(P\Lambda P^{-1}\right)\left(P\Lambda P^{-1}\right)\cdots\left(P\Lambda P^{-1}\right) = P\Lambda^{k+1}P^{-1}$$
$$\Lambda^{k+1} = \begin{pmatrix} \lambda_1^{k+1} & & & \\ & \lambda_2^{k+1} & & \\ & & \ddots & \\ & & & \lambda_n^{k+1} \end{pmatrix}$$
$$\lim_{k \to \infty} E^{k+1} = 0 \iff \lim_{k \to \infty} \Lambda^{k+1} = 0 \iff |\lambda_i| < 1 \;\text{ for } i = 1, 2, \ldots, n$$
Sufficient condition for convergence
If the magnitude of every eigenvalue of the iteration matrix $E$ is less than 1, then the iteration is convergent
For any eigenpair $Ex = \lambda x$:
$$\|Ex\| = \|\lambda x\| = |\lambda|\,\|x\| \le \|E\|\,\|x\| \;\Rightarrow\; |\lambda| \le \|E\|$$
It is easier to compute the norm of a matrix than to compute its eigenvalues, so
$$\|E\| < 1$$
is a sufficient condition for convergence
Convergence of Jacobi iteration
$$E = -D^{-1}(L + U) = \begin{pmatrix}
0 & -\dfrac{a_{12}}{a_{11}} & \cdots & -\dfrac{a_{1n}}{a_{11}} \\
-\dfrac{a_{21}}{a_{22}} & 0 & -\dfrac{a_{23}}{a_{22}} & \cdots \\
\vdots & & \ddots & \vdots \\
-\dfrac{a_{n1}}{a_{nn}} & \cdots & -\dfrac{a_{n,n-1}}{a_{nn}} & 0
\end{pmatrix}$$
Convergence of Jacobi iteration
Evaluate the infinity (maximum row-sum) norm of $E$:
$$\|E\|_\infty = \max_{i} \sum_{\substack{j=1 \\ j \ne i}}^{n} \frac{|a_{ij}|}{|a_{ii}|} < 1 \iff |a_{ii}| > \sum_{\substack{j=1 \\ j \ne i}}^{n} |a_{ij}| \quad \text{for } i = 1, 2, \ldots, n$$
This is the definition of a diagonally dominant matrix
If $A$ is a diagonally dominant matrix, then Jacobi iteration converges for any initial vector
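A quick strict-dominance check (a sketch; the function name is mine):

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
    diag = np.abs(np.diag(A))
    off_diag = np.abs(A).sum(axis=1) - diag   # row sums without the diagonal
    return bool(np.all(diag > off_diag))
```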
Stopping Criteria
For $Ax = b$, at any iteration $k$ the residual term is
$$r^k = b - Ax^k$$
Check the norm of the residual term, $\|b - Ax^k\|$
If it is less than a threshold value $\varepsilon$, stop
Example 1 (Jacobi Iteration)
$$\begin{pmatrix} 4 & -1 & 1 \\ 4 & -8 & 1 \\ -2 & 1 & 5 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} =
\begin{pmatrix} 7 \\ -21 \\ 15 \end{pmatrix},
\qquad x^0 = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
Diagonally dominant matrix. The Jacobi updates are
$$x_1^{k+1} = \frac{7 + x_2^k - x_3^k}{4}, \qquad
x_2^{k+1} = \frac{21 + 4x_1^k + x_3^k}{8}, \qquad
x_3^{k+1} = \frac{15 + 2x_1^k - x_2^k}{5}$$
$$x_1^1 = \frac{7}{4} = 1.75, \qquad x_2^1 = \frac{21}{8} = 2.625, \qquad x_3^1 = \frac{15}{5} = 3.0$$
$$\|b - Ax^0\|_2 = 26.7395, \qquad \|b - Ax^1\|_2 = 10.0452$$
Example 1 continued...
$$x_1^{k+1} = \frac{7 + x_2^k - x_3^k}{4}, \qquad
x_2^{k+1} = \frac{21 + 4x_1^k + x_3^k}{8}, \qquad
x_3^{k+1} = \frac{15 + 2x_1^k - x_2^k}{5}$$
$$x_1^2 = \frac{7 + 2.625 - 3.0}{4} = 1.65625, \quad
x_2^2 = \frac{21 + 4(1.75) + 3.0}{8} = 3.875, \quad
x_3^2 = \frac{15 + 2(1.75) - 2.625}{5} = 3.175$$
$$\|b - Ax^2\|_2 = 1.8061$$
$$x_1^3 = \frac{7 + 3.875 - 3.175}{4} = 1.925, \quad
x_2^3 = \frac{21 + 4(1.65625) + 3.175}{8} = 3.85, \quad
x_3^3 = \frac{15 + 2(1.65625) - 3.875}{5} = 2.8875$$
$$\|b - Ax^3\|_2 = 0.8547$$
The matrix is diagonally dominant and the residual shrinks at every step: the Jacobi iterations are converging
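Running the earlier jacobi sketch on this system reproduces these iterates (a usage check; the tolerance is mine):

```python
import numpy as np

A = np.array([[ 4.0, -1.0, 1.0],
              [ 4.0, -8.0, 1.0],
              [-2.0,  1.0, 5.0]])
b = np.array([7.0, -21.0, 15.0])
x = jacobi(A, b, np.zeros(3), tol=1e-6)   # jacobi() as sketched earlier
print(x)                                  # approaches the exact solution (2, 4, 3)
```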
Example 2
The same equations with the first and third rows interchanged:
$$\begin{pmatrix} -2 & 1 & 5 \\ 4 & -8 & 1 \\ 4 & -1 & 1 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} =
\begin{pmatrix} 15 \\ -21 \\ 7 \end{pmatrix},
\qquad x^0 = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
The matrix is not diagonally dominant. The Jacobi updates are
$$x_1^{k+1} = \frac{x_2^k + 5x_3^k - 15}{2}, \qquad
x_2^{k+1} = \frac{21 + 4x_1^k + x_3^k}{8}, \qquad
x_3^{k+1} = 7 - 4x_1^k + x_2^k$$
$$x_1^1 = -7.5, \qquad x_2^1 = \frac{21}{8} = 2.625, \qquad x_3^1 = 7.0$$
$$\|b - Ax^0\|_2 = 26.7395, \qquad \|b - Ax^1\|_2 = 54.8546$$
Example 2 continued...
$$x_1^2 = \frac{2.625 + 5(7.0) - 15}{2} = 11.3125, \quad
x_2^2 = \frac{21 + 4(-7.5) + 7.0}{8} = -0.25, \quad
x_3^2 = 7 - 4(-7.5) + 2.625 = 39.625$$
$$\|b - Ax^2\|_2 = 208.3761$$
The residual term is increasing at each iteration, so the iterations are diverging
Note that the matrix is not diagonally dominant
Convergence of Gauss-Seidel iteration
GS iteration converges for any initial vector if $A$ is a diagonally dominant matrix
GS iteration also converges for any initial vector if $A$ is a symmetric and positive definite matrix
Matrix $A$ is positive definite if $x^T Ax > 0$ for every nonzero vector $x$
Positive Definite Matrices
A symmetric matrix is positive definite if all its eigenvalues are positive
A symmetric diagonally dominant matrix with positive diagonal entries is positive definite
If a matrix is positive definite:
all the diagonal entries are positive, and
the largest (in magnitude) element of the whole matrix must lie on the diagonal
Positive Definiteness Check
$$\begin{pmatrix} 20 & 12 & 25 \\ 12 & 15 & 2 \\ 25 & 2 & 5 \end{pmatrix}$$
Not positive definite: the largest element (25) is not on the diagonal
$$\begin{pmatrix} 20 & 12 & 5 \\ 12 & -15 & 2 \\ 5 & 2 & 25 \end{pmatrix}$$
Not positive definite: not all diagonal entries are positive
$$\begin{pmatrix} 20 & 12 & 5 \\ 12 & 15 & 2 \\ 5 & 2 & 25 \end{pmatrix}$$
Positive definite: symmetric, diagonally dominant, and all diagonal entries are positive
Positive Definiteness Check
$$\begin{pmatrix} 20 & 12 & 5 \\ 12 & 15 & 2 \\ 8 & 2 & 25 \end{pmatrix}$$
A decision cannot be made just by inspecting the matrix
The matrix is diagonally dominant and all diagonal entries are positive, but it is not symmetric ($a_{13} = 5$ while $a_{31} = 8$)
To decide, check whether all the eigenvalues are positive (for a nonsymmetric matrix, $x^TAx$ depends only on the symmetric part $(A + A^T)/2$, so check its eigenvalues)
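One way to carry out that check with numpy (a sketch; it uses the standard fact that $x^TAx = x^T\frac{A+A^T}{2}x$, so the symmetric part decides):

```python
import numpy as np

A = np.array([[20.0, 12.0,  5.0],
              [12.0, 15.0,  2.0],
              [ 8.0,  2.0, 25.0]])
S = (A + A.T) / 2                       # symmetric part carries the quadratic form
eigenvalues = np.linalg.eigvalsh(S)     # eigvalsh: eigenvalues of a symmetric matrix
print(eigenvalues)
print(bool(np.all(eigenvalues > 0)))    # True here, so A is positive definite
```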
Example (Gauss-Seidel Iteration)
The same system as Example 1:
$$\begin{pmatrix} 4 & -1 & 1 \\ 4 & -8 & 1 \\ -2 & 1 & 5 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} =
\begin{pmatrix} 7 \\ -21 \\ 15 \end{pmatrix},
\qquad x^0 = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$$
Diagonally dominant matrix. The GS updates use the latest values:
$$x_1^{k+1} = \frac{7 + x_2^k - x_3^k}{4}, \qquad
x_2^{k+1} = \frac{21 + 4x_1^{k+1} + x_3^k}{8}, \qquad
x_3^{k+1} = \frac{15 + 2x_1^{k+1} - x_2^{k+1}}{5}$$
$$x_1^1 = \frac{7}{4} = 1.75, \qquad
x_2^1 = \frac{21 + 4(1.75)}{8} = 3.5, \qquad
x_3^1 = \frac{15 + 2(1.75) - 3.5}{5} = 3.0$$
$$\|b - Ax^0\|_2 = 26.7395, \qquad \|b - Ax^1\|_2 = 3.0414$$
(Jacobi iteration gave $\|b - Ax^1\|_2 = 10.0452$)
Example (Gauss-Seidel Iteration) continued...
$$x_1^2 = \frac{7 + 3.5 - 3.0}{4} = 1.875, \qquad
x_2^2 = \frac{21 + 4(1.875) + 3.0}{8} = 3.9375, \qquad
x_3^2 = \frac{15 + 2(1.875) - 3.9375}{5} = 2.9625$$
$$\|b - Ax^2\|_2 = 0.4765$$
(Jacobi iteration gave $\|b - Ax^2\|_2 = 1.8061$)
When both Jacobi and Gauss-Seidel iterations converge, Gauss-Seidel converges faster
Convergence of SOR method
If $0 < \omega < 2$, the SOR method converges for any initial vector if the $A$ matrix is symmetric and positive definite
If $\omega > 2$, the SOR method diverges
If $0 < \omega < 1$, the SOR method converges, but the convergence rate is slower (under-relaxation) than that of the Gauss-Seidel method
Operation count
The operation count for Gaussian elimination or LU decomposition is $O(n^3)$, order of $n^3$
For iterative methods, the number of scalar multiplications is $O(n^2)$ at each iteration
If the total number of iterations required for convergence is much less than $n$, then iterative methods are more efficient than direct methods
Iterative methods are also well suited for sparse matrices