3.33 Introduction
Once the system of equations has been developed and the initial and boundary
conditions have been identified, the remaining task is to solve the equations to
determine the future distribution of pressure and saturation in the reservoir.
The objectives of this chapter are to demonstrate how the coefficients of the
flow equations are constructed, leading to a set of linear algebraic equations;
to form the coefficient matrices from this set of equations; to examine the
characteristics of the coefficient matrices; and to present solution techniques
and some special topics.
$$
\begin{aligned}
&\left(T^n_{o,i-\frac{1}{2}} + T^n_{w,i-\frac{1}{2}}\right)\left(P^{n+1}_{o,i-1} - P^{n+1}_{o,i}\right)
+ \left(T^n_{o,i+\frac{1}{2}} + T^n_{w,i+\frac{1}{2}}\right)\left(P^{n+1}_{o,i+1} - P^{n+1}_{o,i}\right) \\
+\;&\left(T^n_{o,j-\frac{1}{2}} + T^n_{w,j-\frac{1}{2}}\right)\left(P^{n+1}_{o,j-1} - P^{n+1}_{o,j}\right)
+ \left(T^n_{o,j+\frac{1}{2}} + T^n_{w,j+\frac{1}{2}}\right)\left(P^{n+1}_{o,j+1} - P^{n+1}_{o,j}\right) \\
+\;&\left(T^n_{o,k-\frac{1}{2}} + T^n_{w,k-\frac{1}{2}}\right)\left(P^{n+1}_{o,k-1} - P^{n+1}_{o,k}\right)
+ \left(T^n_{o,k+\frac{1}{2}} + T^n_{w,k+\frac{1}{2}}\right)\left(P^{n+1}_{o,k+1} - P^{n+1}_{o,k}\right) \\
=\;&Q_o + Q_w
\end{aligned}
\tag{1}
$$
These directions are shown in Figure 3-48. Then, the above equation
can be written in the following form:
East Coefficient:

$$E = T^n_{o,i+\frac{1}{2}} + T^n_{w,i+\frac{1}{2}} \tag{2}$$
West Coefficient:
W = T 1 + T
n n
oi – --- wi –
1
--
- (3)
2 2
North Coefficient:

$$N = T^n_{o,j+\frac{1}{2}} + T^n_{w,j+\frac{1}{2}} \tag{4}$$
South Coefficient:

$$S = T^n_{o,j-\frac{1}{2}} + T^n_{w,j-\frac{1}{2}} \tag{5}$$
$$SW = T^n_{o,k-\frac{1}{2}} + T^n_{w,k-\frac{1}{2}} \tag{6}$$
$$NE = T^n_{o,k+\frac{1}{2}} + T^n_{w,k+\frac{1}{2}} \tag{7}$$
$$(SW)P_{o,k-1} + (S)P_{o,j-1} + (W)P_{o,i-1} + (C)P_{i,j,k} + (E)P_{o,i+1} + (N)P_{o,j+1} + (NE)P_{o,k+1} = RHS \tag{8}$$

where $C = -(SW + S + W + E + N + NE)$.
In two dimensions:

$$(S)P_{o,j-1} + (W)P_{o,i-1} + (C)P_{i,j,k} + (E)P_{o,i+1} + (N)P_{o,j+1} = RHS \tag{9}$$

where $C = -(S + W + E + N)$.

In one dimension:

$$(W)P_{o,i-1} + (C)P_{o,i} + (E)P_{o,i+1} = RHS \tag{10}$$

where $C = -(W + E)$.
Equations (8), (9), and (10) can be represented in the following matrix
equation form:

$$Ax = b \tag{11}$$

where A is the coefficient matrix, b is the right-hand-side vector, and
x is the unknown vector.
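The assembly of A and b from the direction coefficients is mechanical. Below is a minimal sketch for the 1-D case, where the central coefficient is C = -(W + E); the function name and the transmissibility values are illustrative, not from the text:

```python
import numpy as np

def assemble_1d(T_half, rhs):
    """Assemble the 1-D pressure matrix A and vector b.

    T_half[i] is the total (oil + water) transmissibility of the interface
    between blocks i and i+1; rhs[i] collects the source terms for block i.
    A well or boundary term would normally also enter C; it is omitted here."""
    n = len(rhs)
    A = np.zeros((n, n))
    for i in range(n):
        W = T_half[i - 1] if i > 0 else 0.0      # west coefficient
        E = T_half[i] if i < n - 1 else 0.0      # east coefficient
        C = -(W + E)                             # central coefficient
        if i > 0:
            A[i, i - 1] = W
        A[i, i] = C
        if i < n - 1:
            A[i, i + 1] = E
    return A, np.asarray(rhs, dtype=float)

# Hypothetical 4-block example with made-up transmissibilities.
A, b = assemble_1d(T_half=[1.0, 2.0, 1.5], rhs=[0.0, 0.0, -1.0, 1.0])
```

Note that with no well or boundary contribution in C, every row sums to zero, which is exactly the interior-block structure of Equation (10).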
Oil Equation and Water Equation: in the fully implicit formulation, the
linearized oil and water equations for each grid block are solved
simultaneously for the unknowns $\delta P_o$ and $\delta S_w$. Written
together, each neighbour contributes a 2×2 coefficient block (the C, E,
and N blocks, elided here, follow the same pattern):

$$
\begin{bmatrix} P_{SWO} & S_{SWO} \\ P_{SWW} & S_{SWW} \end{bmatrix}
\begin{bmatrix} \delta P_o \\ \delta S_w \end{bmatrix}
+ \begin{bmatrix} P_{SO} & S_{SO} \\ P_{SW} & S_{SW} \end{bmatrix}
\begin{bmatrix} \delta P_o \\ \delta S_w \end{bmatrix}
+ \begin{bmatrix} P_{WO} & S_{WO} \\ P_{WW} & S_{WW} \end{bmatrix}
\begin{bmatrix} \delta P_o \\ \delta S_w \end{bmatrix}
+ \cdots
+ \begin{bmatrix} P_{NEO} & S_{NEO} \\ P_{NEW} & S_{NEW} \end{bmatrix}
\begin{bmatrix} \delta P_o \\ \delta S_w \end{bmatrix}
= \begin{bmatrix} RHS_o \\ RHS_w \end{bmatrix}
\tag{14}
$$
It is clear from the above equation that for each grid block, two
equations are solved. This is what makes the approach more
expensive than IMPES in terms of linear solver cost requirements.
1. 1-Dimensional Case: The address links (each block listed with the blocks connected to it) for a 10-grid-block system are:
Block   Links
1       2
2       1  3
3       2  4
4       3  5
5       4  6
6       5  7
7       6  8
8       7  9
9       8  10
10      9
This address link and the resulting tridiagonal matrix form can be
related directly to the linearized 1-D flow equation above, which
is of the form:
I    Coefficients
1    C E        (W does not exist since i − 1 = 0)
2    W C E
3    W C E
4    W C E
5    W C E
6    W C E
7    W C E
8    W C E
9    W C E
10   W C        (E does not exist since i + 1 = n + 1)
If we compare the coefficients with the address links, it is clear that when
we add the diagonal term to the number of entries in the address link, the
sum equals the number of entries in the coefficient list. The positions
of the coefficients are defined by the address links, which are also the
column numbers for the rows identified by the diagonal number. The matrix
form is given in Figure 3-50.
2. 2-Dimensional Case: The shape of the domain, the numbering of the grid
blocks, and the corresponding matrix form are given in Figure 3-51. There
are 9 grid blocks in that areal domain. For the 20-grid-block case (a 4×5
system, see Figure 3-52), the address links are:
Block   Links
1       2   5
2       1   3   6
3       2   4   7
4       3   8
5       1   6   9
6       2   5   7   10
7       3   6   8   11
8       4   7   12
9       5   10  13
10      6   9   11  14
11      7   10  12  15
12      8   11  16
13      9   14  17
14      10  13  15  18
15      11  14  16  19
16      12  15  20
17      13  18
18      14  17  19
19      15  18  20
20      16  19
This address link and the resulting pentadiagonal (one main and 4 co-
diagonals) matrix form can be related directly to the linearized 2-D flow
equation above, which is of the form:
In the table below, the i−1, j−1, j+1, and i+1 columns give the index of
the neighbour in each direction (0 means the neighbour does not exist),
and the coefficient letters show which of W, S, C, N, E appear in that row:

Blk No   i−1   j−1   i,j   j+1   i+1   Coefficients
1        0     0     1,1   2     2     C N E
2        0     1     1,2   3     2     S C N E
3        0     2     1,3   4     2     S C N E
4        0     3     1,4   0     2     S C E
5        1     0     2,1   2     3     W C N E
6        1     1     2,2   3     3     W S C N E
7        1     2     2,3   4     3     W S C N E
8        1     3     2,4   0     3     W S C E
9        2     0     3,1   2     4     W C N E
10       2     1     3,2   3     4     W S C N E
11       2     2     3,3   4     4     W S C N E
12       2     3     3,4   0     4     W S C E
13       3     0     4,1   2     5     W C N E
14       3     1     4,2   3     5     W S C N E
15       3     2     4,3   4     5     W S C N E
16       3     3     4,4   0     5     W S C E
17       4     0     5,1   2     0     W C N
18       4     1     5,2   3     0     W S C N
19       4     2     5,3   4     0     W S C N
20       4     3     5,4   0     0     W S C
The address links and the positions of the coefficients are in full
agreement. This property of the coefficients makes them easy to
program.
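As a sketch of that ease of programming, the address links for an areal nx-by-ny grid with natural (row-by-row) ordering can be generated as follows; the helper name is hypothetical, not from the text:

```python
def address_links(nx, ny):
    """Build the address-link list for an nx-by-ny areal grid.

    Blocks are numbered row by row: block m = i + nx*(j - 1).
    Each block is linked to its West, South, East, and North neighbours
    when those neighbours exist."""
    links = {}
    for j in range(1, ny + 1):
        for i in range(1, nx + 1):
            m = i + nx * (j - 1)
            nbrs = []
            if i > 1:
                nbrs.append(m - 1)    # West
            if j > 1:
                nbrs.append(m - nx)   # South
            if i < nx:
                nbrs.append(m + 1)    # East
            if j < ny:
                nbrs.append(m + nx)   # North
            links[m] = sorted(nbrs)
    return links
```

For the 4×5 system of the table above, `address_links(4, 5)[6]` reproduces the links 2, 5, 7, 10 listed for block 6.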
• They are diagonally dominant, i.e., the diagonal term is greater than or
equal in magnitude to the sum of the off-diagonal terms.
• Their incidence matrices are symmetric with respect to the main diagonal.
Depending on the weighting scheme, the entries can be symmetric too.
• Since the number of equations is always equal to the number of
unknowns, the matrix is square.
There are two main types of solution techniques for the coefficient matrices of
the linearized flow equations, namely:
• Direct Methods
• Iterative Methods
$$
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}
$$
Elimination Process:

$$
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
\rightarrow
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & \bar a_{22} & \bar a_{23} \\ 0 & \bar a_{32} & \bar a_{33} \end{bmatrix}
\rightarrow
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & \bar a_{22} & \bar a_{23} \\ 0 & 0 & \bar{\bar a}_{33} \end{bmatrix},
\qquad
\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}
\rightarrow
\begin{bmatrix} b_1 \\ \bar b_2 \\ \bar b_3 \end{bmatrix}
\rightarrow
\begin{bmatrix} b_1 \\ \bar b_2 \\ \bar{\bar b}_3 \end{bmatrix}
$$

where

$$\bar a_{22} = a_{22} - \frac{a_{12} a_{21}}{a_{11}}, \qquad \bar a_{23} = a_{23} - \frac{a_{13} a_{21}}{a_{11}},$$

$$\bar a_{32} = a_{32} - \frac{a_{12} a_{31}}{a_{11}}, \qquad \bar a_{33} = a_{33} - \frac{a_{13} a_{31}}{a_{11}},$$

$$\bar b_2 = b_2 - \frac{b_1 a_{21}}{a_{11}}, \qquad \bar b_3 = b_3 - \frac{b_1 a_{31}}{a_{11}}$$
The terms with the double-bar cap can be derived with the same technique.
The important feature of Gaussian elimination here is that, at each stage
of the calculation, the previous row is multiplied by a pivot factor and
subtracted from the following rows. This subtraction process is of
particular importance and is the main cause of the extra storage
requirements, as we shall see in the next section.
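The stages above can be sketched directly in code; below is a minimal Gaussian elimination without pivoting, assuming nonzero pivots as in the derivation:

```python
import numpy as np

def gaussian_solve(A, b):
    """Forward elimination followed by back substitution, mirroring the
    stages shown above (no pivoting; assumes nonzero pivots)."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    # Forward elimination: subtract a multiple of the pivot row from every
    # row below it, e.g. a22_bar = a22 - a12*a21/a11.
    for p in range(n - 1):
        for r in range(p + 1, n):
            m = A[r, p] / A[p, p]
            A[r, p:] -= m * A[p, p:]
            b[r] -= m * b[p]
    # Back substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - A[r, r + 1:] @ x[r + 1:]) / A[r, r]
    return x
```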
Let A be the sparse coefficient matrix. The original non-zero structure
of this matrix is defined by Nonz(A) = {(i, j) | a_ij ≠ 0 and i ≠ j}. Let F
be the matrix with the original and the created non-zeros; then
Fill(A) = Nonz(F) − Nonz(A).
x x x x
x x 0 0
x 0 x 0
x 0 0 x

x 0 0 x
0 x 0 x
0 0 x x
x x x x
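Fill can be counted symbolically by running the elimination on the non-zero pattern alone. In the sketch below (the helper name is hypothetical), the first 4×4 pattern above fills in completely while the second produces no fill at all, illustrating the effect of ordering:

```python
import numpy as np

def fill_in(A):
    """Return the set of off-diagonal positions that become nonzero during
    (pivot-free) elimination but were zero in A, i.e. Fill(A)."""
    B = (np.array(A) != 0)
    n = B.shape[0]
    before = {(i, j) for i in range(n) for j in range(n) if B[i, j] and i != j}
    for p in range(n):
        rows = [r for r in range(p + 1, n) if B[r, p]]
        cols = [c for c in range(p + 1, n) if B[p, c]]
        for r in rows:
            for c in cols:
                B[r, c] = True   # a_rc gets an update and may become nonzero
    after = {(i, j) for i in range(n) for j in range(n) if B[i, j] and i != j}
    return after - before

full_fill = fill_in([[1, 1, 1, 1], [1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 1]])
no_fill = fill_in([[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1], [1, 1, 1, 1]])
```

The two inputs hold the same "arrowhead" structure ordered in opposite ways; the first creates six fill entries, the second none.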
1 11 2 12
13 3 14 4
5 15 6 16
17 7 18 8
9 19 10 20
From Figure 3-55, it is clear that, in applying Gaussian elimination, only
the lower half of the matrix needs to be triangularized. Hence, when we
compare this with standard ordering, it is obvious that the number of
arithmetic operations and the storage requirements are cut at least by half.
Figure 3-56 shows the pattern of the matrix after the triangularization
process, where N represents the new non-zeros, which accumulate around the
diagonal.
Figure 3-57 shows the relative performance of these two ordering schemes,
where W3 is the number of arithmetic operations required to invert the
D4-ordered matrix and W1 is that for standard ordering. As the number of
grid blocks increases, there is a rapid initial decline in the ratio of W3
to W1; beyond some grid size, it remains almost constant. Another
interesting observation is that as the ratio between the number of grid
blocks in the x-direction and the y-direction decreases, D4 becomes more
efficient.
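A D4 (alternating-diagonal) numbering like the one shown above can be generated as follows; this is a sketch, and the parity convention is an assumption chosen to match the 4×5 example:

```python
def d4_order(nx, ny):
    """Assign D4 (alternate-diagonal) numbers to an nx-by-ny grid:
    cells on even diagonals (i + j even) are numbered first, row by row,
    then the cells on odd diagonals."""
    order = {}
    n = 0
    for parity in (0, 1):
        for j in range(1, ny + 1):
            for i in range(1, nx + 1):
                if (i + j) % 2 == parity:
                    n += 1
                    order[(i, j)] = n
    return order
```

For a 4×5 grid this reproduces the numbering shown above: cell (1,1) gets 1, cell (2,1) gets 11, and cell (4,5) gets 20.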
1. Scheme-I (Figure 3-58, Panel a): Number the rows according to the number
of non-zero off-diagonal terms before the actual elimination process takes
place. In this scheme, the rows with only one off-diagonal term are
numbered first, those with two terms second, and so on, with those with the
most terms last.
2. Scheme-II (Figure 3-58, Panel b): Number the rows so that at each step of
the elimination process the next row to be operated upon is the one with the
fewest non-zero terms. If more than one row meets this criterion, any one
of them is selected. This scheme warrants the examination of the effects of
non-zero term accumulation on the elimination process.
3. Scheme-III (Figure 3-58, Panel c): Number the rows so that at each step of
the process the next row to be operated upon is the one that will introduce
the fewest new non-zero terms. If more than one row meets this criterion,
select any one. This process involves a trial of every feasible alternative
of the elimination process at each step.
The oil industry has used many types of iterative solvers. We will only give
the names of these techniques, with brief explanations where necessary.
Iterative solvers have become indispensable as direct methods fail to cope
with the increasing complexity of the reservoir engineering problems modeled
by the simulators.
$$AX = RHS \tag{16}$$
Table 3-2: Column and Row Addresses for the Diagonal and Co-Diagonal Entries.
D    E    W    N    S    NE   SW
1    2    —    6    —    21   —
2    3    1    7    —    22   —
3    4    2    8    —    23   —
4    5    3    9    —    24   —
5    —    4    10   —    25   —
6    7    —    11   1    26   —
7    8    6    12   2    27   —
8    9    7    13   3    28   —
9    10   8    14   4    29   —
10   —    9    15   5    30   —
11   12   —    16   6    31   —
12   13   11   17   7    32   —
13   14   12   18   8    33   —
14   15   13   19   9    34   —
15   —    14   20   10   35   —
16   17   —    —    11   36   —
17   18   16   —    12   37   —
18   19   17   —    13   38   —
19   20   18   —    14   39   —
20   —    19   —    15   40   —
21   22   —    26   —    —    1
22   23   21   27   —    —    2
23   24   22   28   —    —    3
24   25   23   29   —    —    4
25   —    24   30   —    —    5
26   27   —    31   21   —    6
27   28   26   32   22   —    7
28   29   27   33   23   —    8
29   30   28   34   24   —    9
30   —    29   35   25   —    10
31   32   —    36   26   —    11
32   33   31   37   27   —    12
33   34   32   38   28   —    13
34   35   33   39   29   —    14
35   —    34   40   30   —    15
36   37   —    —    31   —    16
37   38   36   —    32   —    17
38   39   37   —    33   —    18
39   40   38   —    34   —    19
40   —    39   —    35   —    20

A dash (—) indicates that the co-diagonal entry does not exist for that row.
where D is the main diagonal and E, W, S, N, SE, and NW are the co-diagonal
bands:

$$A = D + E + W + S + N + SE + NW \tag{18}$$

$$B = (\Omega + SE)\,\Omega^{-1}(\Omega + NW) \tag{19}$$

$$\Omega = (\theta + S)\,\theta^{-1}(\theta + N) \tag{20}$$

$$\theta = (\Gamma + E)\,\Gamma^{-1}(\Gamma + W) \tag{21}$$

$$\Gamma = D - E\,\Gamma^{-1}W - \sum\left(S\,\theta^{-1}N + SE\,\Omega^{-1}NW\right) \tag{22}$$

where Γ is a diagonal matrix.

$$B = A + \left(S\,\theta^{-1}N + SE\,\Omega^{-1}NW\right) - \sum\left(S\,\theta^{-1}N + SE\,\Omega^{-1}NW\right) \tag{23}$$

In two dimensions:

$$B = A + \left(S\,\theta^{-1}N\right) - \sum\left(S\,\theta^{-1}N\right) \tag{24}$$

In one dimension:

$$B = (\Gamma + E)\,\Gamma^{-1}(\Gamma + W) \tag{25}$$
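Expanding Equation (25) makes it clear why this preconditioner is exact in one dimension; the short derivation below uses only the definitions above:

$$B = (\Gamma + E)\,\Gamma^{-1}(\Gamma + W) = \Gamma + E + W + E\,\Gamma^{-1}W$$

and, since in one dimension Equation (22) reduces to $\Gamma = D - E\,\Gamma^{-1}W$, substitution gives

$$B = D + E + W = A$$

so the preconditioner reproduces A exactly and the iteration converges in a single step.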
The best choice of preconditioning matrix is one that reflects the
structure of A. B in Equation (19) has a structure similar to that of A.
The solution of B can be performed as follows:
a. At the outermost level, the block triangular matrix equation given below
is solved:

$$(\Omega + SE)\,\alpha = y \tag{26}$$

$$\alpha = \Omega^{-1}(y - SE\,\alpha) \tag{27}$$

$$\beta = \theta^{-1}(y - S\,\beta) \tag{28}$$

It is clear from Equation (28) again that the right-hand side contains
known values of the parameters. For example, Sβ is linked with the
solution from the previous line.
Step 1: Set the iteration counter k to zero and compute the initial guess
X⁰ for the solution vector with the help of the preconditioning matrix B,
as follows:

$$X^0 = B^{-1}\,RHS \tag{29}$$

$$R^0 = RHS - A X^0 \tag{30}$$

One can use the norm of the residual vector to assess the goodness of the
estimated solution. The procedure terminates if the estimated residual norm
is less than or equal to the preset tolerance.
$$D^k = B^{-1} R^k \tag{31}$$

Step 4: Estimate the corresponding movement direction, $D_r^k$, in residual
space:

$$D_r^k = A D^k$$
If A and B were the same, the exact solution could be found by progressing
the iteration along the direction vectors:

$$X^{k+1} = X^k + D^k \tag{32}$$

$$R^{k+1} = RHS - A\left(X^k + D^k\right) = R^k - D_r^k = 0 \tag{33}$$

(if $B - A = 0$)
The new solution estimate is taken as

$$X^{k+1} = X^k + \alpha^k Q^k \tag{34}$$

such that the norm of $R^{k+1}$ is minimized with respect to the constant
$\alpha^k$. The corresponding vector in residual space will be:

$$Z^k = A Q^k \tag{35}$$
$$Q^k = D^k + \sum_{i=1}^{n} \beta_i^k\, Q^{k-i} \tag{36}$$

$$Z^k = A Q^k = D_r^k + \sum_{i=1}^{n} \beta_i^k\, Z^{k-i} \tag{37}$$

$$\beta_i^k = \frac{-\,D_r^k \cdot Z^{k-i}}{\left\| Z^{k-i} \right\|^2} \tag{38}$$

Here $Q^k, Q^{k-1}, \ldots, Q^{k-n}$ are conjugate directions, and
$Z^k, Z^{k-1}, \ldots, Z^{k-n}$ are mutually orthogonal residual-space
vectors.
Step 6: Determine $\alpha^k$, which minimizes the norm of $R^{k+1}$:

$$R^{k+1} = R^k - \alpha^k Z^k \tag{39}$$

$$\left\|R^{k+1}\right\|^2 = \left\|R^k\right\|^2 - 2\alpha^k\, R^k \cdot Z^k + \left(\alpha^k\right)^2 \left\|Z^k\right\|^2 \tag{40}$$

which is minimized for

$$\alpha^k = \frac{R^k \cdot Z^k}{\left\|Z^k\right\|^2} \tag{41}$$

$$X^{k+1} = X^k + \alpha^k Q^k \tag{42}$$
These steps ensure that the next iteration will be done using the current
most significant Nstack − 1 residuals.
Step 9: Check whether the residuals are small enough. If they are not,
go back to Step 3.
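The steps above can be sketched as a preconditioned Orthomin-type loop. The code below is an illustration of the procedure, not the text's implementation: `B_inv` stands for the application of B⁻¹, all names are hypothetical, and a stack length of at least 2 is assumed:

```python
import numpy as np

def orthomin(A, rhs, B_inv, nstack=4, tol=1e-8, maxiter=200):
    """Preconditioned Orthomin-style sketch of Steps 1-9 above.

    B_inv is a callable applying the preconditioner inverse B^{-1}.
    The last nstack-1 direction pairs are retained (nstack >= 2)."""
    x = B_inv(rhs)                      # Step 1: X0 = B^-1 RHS
    r = rhs - A @ x                     # R0 = RHS - A X0
    Q, Z = [], []                       # stacks of directions / A*directions
    for _ in range(maxiter):
        if np.linalg.norm(r) <= tol:    # Steps 2 and 9: convergence check
            break
        d = B_inv(r)                    # Step 3: Dk = B^-1 Rk
        q, z = d, A @ d                 # Step 4: direction in residual space
        for qi, zi in zip(Q, Z):        # Step 5: orthogonalize vs. the stack
            beta = -(z @ zi) / (zi @ zi)
            q = q + beta * qi
            z = z + beta * zi
        alpha = (r @ z) / (z @ z)       # Step 6: minimize ||R_{k+1}||
        x = x + alpha * q               # Step 7: update the solution
        r = r - alpha * z               # and the residual
        Q.append(q)                     # Step 8: keep the newest directions
        Z.append(z)
        Q, Z = Q[-(nstack - 1):], Z[-(nstack - 1):]
    return x

# Hypothetical diagonally dominant test system with a Jacobi preconditioner.
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = orthomin(A, b, B_inv=lambda v: v / 4.0)
```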
It is clear from the CGM procedure outlined above that the residual at
each step of the iteration is minimized over an (n+1)-dimensional space,
where n is less than or equal to the stack length Nstack. As the stack
length Nstack increases, the method becomes more robust. If it is the same
as the order of the matrix, the exact solution can theoretically be
obtained. However, the greater the stack length, the greater the storage
requirements and, in general, the computational cost. Fortunately, in
practice a minimum of 3 to 4 is sufficient for many problems. In the case
of difficult problems, such as coning problems,
$$A P_{i-1} + B P_i + C P_{i+1} = D_i \tag{44}$$

$$P_i = E_i P_{i+1} + F_i \tag{45}$$

Let $P_{i-1} = E_{i-1} P_i + F_{i-1}$. Substituting this into the flow
equation given above yields:

$$P_i = \frac{D_i - A F_{i-1}}{A E_{i-1} + B} - \frac{C}{A E_{i-1} + B}\, P_{i+1} \tag{46}$$

Then, by comparison with Equation (45),

$$E_i = \frac{-C}{A E_{i-1} + B} \qquad \text{and} \qquad F_i = \frac{D_i - A F_{i-1}}{A E_{i-1} + B} \tag{47}$$
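Equations (44) through (47) translate directly into the Thomas algorithm; below is a minimal list-based sketch with illustrative names:

```python
def thomas(A, B, C, D):
    """Thomas algorithm sketch for A*P[i-1] + B*P[i] + C*P[i+1] = D[i],
    using the recursion P[i] = E[i]*P[i+1] + F[i] of Equations (45)-(47).

    A, B, C, D are lists of the lower, main, and upper diagonal entries and
    the right-hand side; A[0] and C[-1] are ignored."""
    n = len(D)
    E = [0.0] * n
    F = [0.0] * n
    # Forward sweep: for the first block E0 = -C0/B0 and F0 = D0/B0,
    # then the general recursion of Equation (47).
    E[0] = -C[0] / B[0]
    F[0] = D[0] / B[0]
    for i in range(1, n):
        denom = A[i] * E[i - 1] + B[i]
        E[i] = -C[i] / denom if i < n - 1 else 0.0
        F[i] = (D[i] - A[i] * F[i - 1]) / denom
    # Backward sweep: P[n-1] = F[n-1], then Equation (45).
    P = [0.0] * n
    P[n - 1] = F[n - 1]
    for i in range(n - 2, -1, -1):
        P[i] = E[i] * P[i + 1] + F[i]
    return P
```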
Apart from the general rules laid down in the direct-versus-iterative
section, there are some rules of thumb that one should be aware of in
selecting the solver type. These are summarized as follows:
1. Areal Models: If the order of the matrix is small, use a band solver. If
the reservoir grid is intermediate in size, on the order of 800-1200 grid
blocks, use direct methods such as D4 rather than band solvers. If it is on
the order of 2000, one can use Scheme III of the sparse matrix techniques.
For anything beyond 2000, one should definitely use iterative techniques
with the current state of computational resources.