Full-Field Simulation: The Quest for Reality in Reservoir Modeling

John Killough
Brian Coats
Matthew Bennett

Landmark Graphics Corporation

ABSTRACT

The goal of reservoir simulation has always been to produce models that realistically
represent the flow of fluids in both the earth’s subsurface and in the surface facilities
during the production of a field. The ability to produce these results was originally
limited by not only computer hardware resources but also by our ability to fully
comprehend the mass of data that is required in any model of a complete field.
Fortunately for the reservoir engineer, advances in computer hardware have greatly
improved our ability to model the fine detail that is now commonplace due to the
development of geological models with fine-scale attributes based on geostatistics. The
“unfortunate” consequence of these fine models has been the emergence of uncertainty
in any or all of the data used to describe the reservoir and well production histories.
Instead of a single history match, the engineer has now been forced to look at a series
of equally-likely reservoirs which describe the subsurface environment. Finally, in the
“tail-wagging-the-dog” scenario, the surface pipeline network has been introduced as
the ultimate test of both the software and the engineer. Surface facilities often
dominate the production from a reservoir but have been widely ignored due to the
complexity of the problem. The application of surface pipeline network models to
reservoir simulation is discussed ranging from simplistic manual coupling to complex
fully-implicit, surface-subsurface modeling. Finally, this paper discusses the evolution
of full-field modeling including the aspects of parallel and grid computing which have
allowed an improved solution of the modeling with uncertainty problem.

INTRODUCTION

Reservoir simulation from its early beginnings has possessed many noble goals.
Although still evolving, the goals can be summarized as follows: decrease manpower
costs, reduce turnaround time including the model building, increase accuracy and
realism, reduce technical expertise requirement, and finally, provide a real-time
simulation capability. To obtain these there are many requirements for the reservoir
simulator and overall modeling system. First of all, the generation of the grid must be
automatic. Regardless of the geological model, the progression of upscaling (reducing
the number of grid cells) to the final simulation grid must be something over which the user has
control, but, if necessary, the steps can be done without user intervention. To build the
model, great amounts of data are required. This data must be readily accessible from a
database so that attributes can be generated for the geological/simulation model and
such that well data for the historical and predictive phases of the simulation can be
generated. The clear problem has been that the available models could always far exceed our ability
to simulate them. The goal then becomes how to select the best model for a full-field
simulation given the constraints of hardware, software, and personnel resources. Our
ultimate goal must be to simulate the entire reservoir including the surface facilities
while including enough detail such that the answers provide a reasonably accurate
representation of the world. To this must be added risk and uncertainty to properly
reflect our true understanding of the subsurface. Finally, scenario analysis must be
included so that the subsurface and surface are totally optimized. The sections below
provide a look at how full-field simulation has progressed to the current state-of-the-
industry. Although there exists a vast amount of literature on this subject, the
discussion has been limited to the authors' experiences and is not meant to be
exhaustive. Many of the references quoted do, however, provide additional resources
of information.

EARLY ATTEMPTS AT FULL-FIELD MODELING

The earliest reservoir simulation models of the 1960's were limited to a few tens of cells
primarily due to computational limitations. As computing power grew in the 1970's,
the limit on grid size expanded until hundreds or even thousands of grid blocks
were available for the simulation in a reasonable amount of computer time.
Unfortunately, it was soon realized that these models were still far from adequate to
perform accurate simulations. To account for this inadequacy the concept of “pseudos”
or what has been more recently referred to as upscaling was introduced. The first
papers describing the concept of pseudo relative permeabilities and capillary pressures
were based on the concept of vertical equilibrium1 or fully-stratified flow2. This
concept was later enhanced by Jacks, et al.3 , when the upscaling concept of dynamic
pseudos was first introduced. As shown in Figure 1, the idea behind upscaling is
simple: decrease the number of grid blocks for a simulation by performing a reduced-
set fine-scale simulation and then calibrating the coarse model to the fine results.
Properties such as porosity and permeability are averaged whereas relative
permeabilities are often derived from a set of simulations which attempt to reproduce
the operating conditions of the full-field model. Figure 1 shows the typical early
approach to pseudos in which a three-dimensional reservoir was reduced to a 2-
dimensional areal model with a single layer. The often overlooked concept of velocity-
dependence of the derived pseudo relative permeabilities as first introduced by Jacks, et
al.3, provides a manner by which to calculate pseudos in the upscaled model under
varying conditions. This concept was later extended4 using viscous-gravity scaling
instead of simple velocity for the interpolation factor for the pseudos. In this manner,
the role of heterogeneity in determining the number of required pseudos was somewhat
reduced as shown by Fang, et al4.
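
As a simple illustration of the property-averaging step mentioned above, the sketch below shows thickness-weighted averaging of porosity and harmonic averaging of vertical permeability for a column of fine cells collapsed into a single coarse cell. It is a minimal example of static upscaling with invented property values, not the dynamic-pseudo procedure of Jacks, et al.3.

import numpy as np

def upscale_column(phi, kz, dz):
    """Collapse a vertical column of fine cells into one coarse cell.
    phi : fine-cell porosities (fraction)
    kz  : fine-cell vertical permeabilities (md)
    dz  : fine-cell thicknesses (ft)
    Returns (coarse porosity, coarse vertical permeability)."""
    phi, kz, dz = map(np.asarray, (phi, kz, dz))
    phi_coarse = np.sum(phi * dz) / np.sum(dz)   # thickness-weighted average
    kz_coarse = np.sum(dz) / np.sum(dz / kz)     # harmonic average (series flow)
    return phi_coarse, kz_coarse

# Three fine layers, including a tight streak, collapsed to one coarse block
print(upscale_column(phi=[0.25, 0.10, 0.20], kz=[500.0, 5.0, 100.0], dz=[10.0, 2.0, 8.0]))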

Although there are many examples of full-field models during the 1960’s and 1970’s,
the example of the Jay-Little Escambia Creek Field5 in the state of Florida provides an
excellent study of full-field modeling with pseudo relative permeabilities. In this
model, many months of effort were expended to derive field-scale relative
permeabilities and associated pseudo relative permeabilities to adequately predict
waterflood performance. The unique feature of this study was that the conclusions,
which were based on 2-dimensional areal models with pseudos, were verified with a 3-
dimensional model.

The lack of computing power in the 1970’s gave rise to additional forms of pseudo
relative permeabilities. A paper on the Empire Abo Field6 introduced two concepts:
directional dependence and coning correlations. The idea behind directional
dependence was simple. Pseudos generated for flow horizontally, such as VE pseudos,
take on a completely different form in the vertical direction if a three-
dimensional grid is used. The use of different pseudos in different directions in the
model led to a significant improvement in the vertical flow of gas in the model.
Similarly, the gas-oil ratios for a well with severe coning can only be properly
represented by well pseudos known as coning correlations. As shown in Figure 2,
coning correlations relate the well gas-oil ratio to the height of the oil column above the
perforations. As the oil column thins due to an advancing gas cap, the gas-oil ratio
rises exponentially.
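
A coning correlation of the kind sketched in Figure 2 amounts to a simple well-level lookup of gas-oil ratio versus remaining oil-column height. The functional form and coefficients below are purely illustrative assumptions, not the correlation used in the Empire Abo study.

import math

def coning_gor(h_oil, h_crit=50.0, solution_gor=800.0, alpha=0.05):
    """Illustrative coning correlation: well gas-oil ratio (scf/stb) versus
    oil-column height above the perforations (ft).  Below the assumed
    critical height the GOR rises exponentially above the solution GOR."""
    if h_oil >= h_crit:
        return solution_gor               # no free-gas coning
    return solution_gor * math.exp(alpha * (h_crit - h_oil))

for h in (60.0, 40.0, 20.0, 5.0):
    print(h, round(coning_gor(h)))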

The Prudhoe Bay field in Alaska7 presented some of the ultimate challenges for full-
field modeling with upscaling for the early three-dimensional reservoir simulations.
The geology of the model was complex with significant faulting and heterogeneity.
The tectonics involved in the trap formation were so complex that specialized
initialization had to be used to properly account for fluid movement which occurred
after oil migration. The presence of a huge gas cap and a significant aquifer led to
problems with both gas and water coning. Finally, as shown in Figure 3, the large areal
extent of more than 100 square miles and extensive facilities indicated that the surface
modeling would be as important as the subsurface model of this field. To
account for the complex fluid movement in the field, specialized pseudo relative
permeabilities based on viscous–gravity ratio7 were used. Finally, a first attempt at
surface modeling was made through adding logic for well management, but it was
recognized that more sophisticated modeling would be required in the future to
properly predict field performance. It is interesting to note that the early simulation
models of Prudhoe, even with all of their assumptions, did correctly predict the point of
decline for the field more than ten years after initial production began.

Although these field cases did show that indeed pseudos can give useful and sometimes
accurate results, there are many other situations as discussed by Barker, et al.8, in which
the use of pseudos can lead to incorrect conclusions. The dilemma of ignoring
upscaling is shown in Figure 4 in which a 15,000 cell model is compared with a model
on the scale of the geological grid (about 200,000 cells). The difference in cumulative
oil production of the two cases is disconcerting – rising to as much as 15% in this case.
Perhaps the best that can be said is that upscaling has many problems but at this point
there appear to be few alternatives to reduce model sizes that do not make significant
assumptions on the physics or geology of the simulations.

EVOLUTION OF FULL-FIELD MODELING

If upscaling could not conquer the problem of full-field modeling then the only resort
was to increase model sizes. Full-field modeling has progressed
significantly over the past decades through the evolution of computing power. As
stated in Mores’ Law the power of computers has continued to double each eighteen
months during the past 20 or more years. In combination with this, the memory and
available mass storage have also increased at a phenomenal pace. The following
sections look at how computing has allowed simulations to grow to scales which would
have been unthinkable in the past.

The Impact of High-Performance Computing

Although there are many possible instances which can claim to be the birth of high
performance computing for reservoir simulation, most would agree that one of the
most significant events occurred with the introduction of the Cray 1S supercomputer
in 19769. This was quickly followed by the Control Data Corporation versions known
as the Star and Cyber series10. Although modest by today’s standards, both the Cray
and CDC supercomputers offered significant improvement in performance with
memories to 16 or more megabytes and computational speeds of several 10’s of
floating point operations per second. Through the use of advanced hardware known
as the Vector Pipeline performance was further improved. The vector pipeline simply
allowed the computer to take advantage of the functional parallelism that existed in
the floating point units of the computer so that instead of operating on a single set of
operands, the multiply unit, for example, would operate on 6 different sets of
operands. Each of these operands would be located in a different stage of the multiply
operation – the normalization, multiply, operand store, etc. Through keeping the
pipeline full the vector processor was able to deliver a result to memory every
machine cycle and thus achieve a “speedup” of a factor of 6 for the example. The war
between the Cray and CDC machines emphasized the importance of Amdahl’s Law as
graphed in Figure 5. The equation for this graph is:

Speedup = 1/(scalar+special/n)

where scalar is the fraction of the program which is scalar and special is the fraction of
computations performed on specialized hardware (one minus scalar). Simply stated
Amdahl’s law indicates that if there is a piece of specialized hardware in a computer,
it must be utilized a large fraction of the time to have a significant impact on overall
performance. The graph above shows a case for a factor of 32 performance
improvement for a specialized piece of hardware known as a parallel processor. The
graph indicates that more than 95% of the computations must be done on the fast
hardware (parallel) to approach even a factor of 20 improvement. Cray’s dominance
in this early high performance computing market was due primarily to the fact that
both its conventional scalar hardware and its vector hardware were extremely fast so
that large fractions of programs did not have to be “vectorized” to achieve a high level
of performance. Because of the CDC’s relatively low scalar performance, high
degrees of vectorization were required to achieve performance comparable to that of
the Cray. Some of the first reservoir simulation benchmarks on the Cray1 showed
scalar performance which was as much as 20 times that of the conventional processor of that
time for reservoir simulation – the IBM 3033. With the addition of vectorization these
performance numbers were further increased and hence high performance computing
became a reality. There were two aspects of the design of the Cray that made it
unique. First, relatively low levels of circuit integration were used, that is, the chips
were commodity hardware. According to Seymour Cray, the unique feature of the
Cray was the ability to use these chips in a closely packed environment through better cooling
or heat transfer. The chips of the Cray for memory and CPU required the electricity
equivalent of a moderately-sized town; however, the resultant heat was dissipated
through the use of Freon circulating in vertical shafts between which were mounted
copper plates with the chips. The other design factor unique to the Cray was the use
of vector registers which allowed low overhead for vector operations. That is, the
breakeven point for vector operations often required fewer than 10 operands to exceed
scalar performance.
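
Returning to Amdahl's law, the short sketch below simply evaluates the expression given earlier for the factor-of-32 case graphed in Figure 5; no assumptions are involved beyond the equation itself.

def amdahl_speedup(special, n=32.0):
    """Amdahl's law: overall speedup when the fraction 'special' of the work
    runs on hardware that is n times faster (scalar fraction = 1 - special)."""
    return 1.0 / ((1.0 - special) + special / n)

for f in (0.80, 0.90, 0.95, 0.98):
    print(f, round(amdahl_speedup(f), 1))
# 95% parallel work gives only about a 12.5x speedup; roughly 98% of the
# work must run on the fast hardware before the gain approaches a factor of 20.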

The speed advantage of vector processing and especially the advantage of hardware
gather-scatter of the Cray Y-MP led to many innovations in algorithms and
computational techniques. Compositional simulation was revolutionized through the
use of “phase grouping” first introduced by Levesque and Killough11 and later
substantially enhanced by the work of Young12. These innovations led to simulations
which were greater than an order of magnitude more efficient than previous
compositional reservoir simulators. Compositional models were then able to surpass
the one-hundred-thousand-cell barrier that had previously been attainable only by black-oil
models. From the first introduction of the Cray, almost all full-field simulation models
became three-dimensional.

The Cray Y-MP (1982) also introduced for the first time an easy access of the
programmer to shared-memory parallel processing. “Microtasking” on the Cray Y-
MP allowed parallelization with simple commands in the FORTRAN code which
looked like comments to other computers/compilers. Results for microtasking showed
parallel efficiencies on 4 processors which were greater than 75% (a factor of 3
speedup on 4 processors)13,14. Nonetheless, the high cost of the Cray limited its
market to only the largest companies and to engineers with the highest priority
projects. This and the introduction of lower cost, yet faster computational alternatives
limited the life of the Cray, IBM, CDC, and other large-scale vector processors. There
is no doubt, however, that these machines had provided innovations in reservoir
simulation algorithms which would significantly impact high performance computing
and reservoir simulation in the future.

The introduction of the reduced instruction set computer (RISC) with “superscalar”
performance in 1990 spelled the end for the dominance of Cray in the high
performance reservoir simulation market. The custom design of the Cray implied high
costs. IBM and Silicon Graphics with the introduction of RISC processors provided
performance which often exceeded the Cray without having to resort to the specialized
code of vectorization - and the price was a small fraction of that of the Cray. Large-
scale simulations began to be performed routinely on the RISC machines. In addition
to the lower cost, the RISC workstation added the concept of distributing computing to
the engineer’s desktop or deskside. It was no longer necessary for simulation
engineers’ jobs to wait in long queues for the company-wide supercomputer.

From the earliest reservoir simulation models of the 1960’s with only a few tens of
finite difference cells, reservoir simulation had progressed by several orders of
magnitude in the early 1990’s to hundreds of thousands of cells. Unfortunately, these
fine-scale models still were far from the scale of the data as shown in Table 1. This
chasm of scale was further exacerbated by the introduction of geocellular models with
geostatistically derived attributes. With these geostatistically-based models it was
possible to generate geological descriptions for the reservoir simulation with literally
tens of millions of finite difference cells. If we add to this the additional degree of
freedom of uncertainty of the reservoir data then the number and size of simulations
becomes unlimited. What this boils down to is the fact that the need for further
improvements in high performance computing will always exist for reservoir
simulation. As discussed below, parallel computing offers one approach to overcome
this problem of the explosion of information.

TABLE 1: The Evolution of Reservoir Simulation and Geocellular Model Sizes

Time Frame                   Large Reservoir Models     Large Geologic Models
                             (Grid Blocks)              (Cells)
Early 1980's                 320                        Not Applicable
Late 1980's-Early 1990's     20,000-100,000             100,000-500,000
Late 1990's                  100,000-300,000            20-50 Million
Present                      100,000-10,000,000         50-100 Million

Parallel High Performance Computing

The attraction of parallel computing to achieve high computing efficiencies has existed
for decades. What was generally lacking was the ability to easily port software to the
parallel environment. As discussed above, the introduction of Cray’s Microtasking
alleviated this difficulty somewhat. Unfortunately, because of the diverse nature of
reservoir simulation, simple parallelization schemes are not viable for a usable,
general-purpose approach to reservoir modeling. Profiles of computing workload for
a typical model often show tens, if not hundreds, of subroutines which are involved in a
substantial portion of the calculations. Because of this, major reprogramming of
reservoir simulation models is required to achieve high parallel efficiencies. As
pointed out in reference 14, there are numerous obstacles to parallelization for reservoir
simulation:

• Recursive nature of the linear equation solver
• Load imbalance caused by reservoir heterogeneity and/or physics
• Diverse data structure of well and facility management routines

Several papers in the literature15-24 discuss techniques which have been used to bring
about efficient parallel reservoir simulations. Each of these addresses the solutions to
parallelization and in particular the obstacles mentioned above in various ways. The
simulator described below uses local grid refinement for parallelization (Nolen, et
al.25). With local grid refinement the same simulation program can be used to perform
simulations either serially on a single processor or in parallel on multiple processors
simply through data manipulation. In addition, a great deal of flexibility for domain
decomposition exists which can lead to enhanced load balance for parallel simulations.

Parallelization using local grid refinement involves assigning each grid to a processor.
The same processor can be assigned to many grids or in the limit, each grid can be
assigned to a separate processor. Variation of processor assignment and the grid
refinement can dramatically affect parallel performance. Different preconditioners26-28
for the parallel linear equation solvers can also have a dramatic effect on parallel
efficiency. Finally, the flexibility of local grid refinement and processor assignment
can be used to achieve improved parallel efficiency.
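
The grid-to-processor assignment described above can be pictured as a simple load-balancing exercise. The sketch below uses a greedy heuristic and made-up grid names and cell counts purely for illustration; it is not the assignment algorithm of the simulator discussed here.

import heapq

def assign_grids(grid_cells, n_proc):
    """Greedily assign refined grids (keyed by active-cell count) to
    processors so that the largest per-processor load stays small."""
    heap = [(0, p) for p in range(n_proc)]        # (cells assigned, processor)
    assignment = {}
    for grid, cells in sorted(grid_cells.items(), key=lambda kv: -kv[1]):
        load, proc = heapq.heappop(heap)          # least-loaded processor
        assignment[grid] = proc
        heapq.heappush(heap, (load + cells, proc))
    loads = [load for load, _ in heap]
    imbalance = max(loads) / (sum(loads) / len(loads))
    return assignment, imbalance

grids = {"coarse": 40000, "lgr_A": 12000, "lgr_B": 9000, "lgr_C": 15000}
print(assign_grids(grids, n_proc=2))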

The Evolution of the PC and Reservoir Simulation

For over a decade, PC-based reservoir simulation has been utilized for not only black-
oil models but also for full-physics compositional simulations.29-32 Unfortunately, the
early implementations often forced the reservoir engineer to use smaller models which
were of limited applicability. In the early 1990’s, accelerator add-ons helped PC’s to
achieve performance approaching that of Unix workstations. However, memory
limitations of the PC’s at that time also hampered general applicability of such
processors.32 For example, Young, et al.33, showed that because of the 64 MByte
memory access limit of the i860 chip, simulations were limited to 25,000 cells for
black-oil models.

Most of the early work was based on the Intel 80386 chip. Although relatively efficient
for integer operations, this chip lacked a significant floating-point capability. With the
emergence of the 80486 in 1989, however, this and other limitations were significantly
reduced. Indeed, the 80486 can be thought of as a key factor in the “emergence of the
PC platform as we know it today.”34 The inclusion of a Level 1 (L1) cache in the
processor meant that fewer memory (RAM) accesses were required and that access to
the cache was significantly faster. The five-fold increase in chip density over the
80386 was dedicated almost entirely to the Floating Point Unit (FPU) and to the L1
cache. The amount of RAM which could be accessed was increased to 4 Gigabytes as
well. With a clock speed of up to 50 MHz, the 80486 was capable of performing
several million floating point operations per second. For these reasons, the 80486 now
offered performance which was reasonable for scientific and engineering applications
such as reservoir simulation. Nonetheless, the performance was still significantly
behind that of RISC/Unix workstations such as the IBM RS/6000 and SGI R4400.

Introduction of the Pentium by Intel in 1993 brought the PC much closer to the Unix
competition for scientific computing, since the Pentium now offered superscalar
architecture. In addition to enhanced floating point operations with dual floating point
pipelines, the Pentium used branch prediction technology and increased the speed of
memory accesses by using a 64-bit data bus. Pipeline burst mode for reading from and
writing to memory was further enhanced by doubling the memory bus clock to 66
MHz. The division of the L1 cache into separate data and instruction caches (the
“Harvard Architecture”) also helped performance significantly. Further enhancements
were made to the Pentium in 1996 with the introduction of the Pentium Pro 200 MHz
chip. This chip added speed with an integrated 256K Level 2 (L2) cache in addition to
a 16K L1 cache, both running at the same clock rate as the processor. Because of the
integrated caches, the Pentium Pro can sometimes exhibit better floating point
performance than the follow-on Pentium II chips, which only have a half-speed L2
cache. The Pentium Pro 200 MHz chip provided the basis for one of the workstation
clusters used in this paper.

As discussed above, parallel reservoir simulation has been the subject of investigations
since the mid-1980’s. Because of the popularity of shared-memory, multi-processor
mainframes, several authors reported on parallel simulation with these extremely
expensive, but computationally efficient, devices. Enhanced performance (speedup)
approaching the number of processors was reported by several authors.21-23 At the
same time that these mainframe investigations were being performed, distributed-
memory devices were also being investigated.38-41 Driven by the lure of inexpensive,
commodity-based processors, the early distributed-memory machines used PC’s as
nodes in the parallel computing device. In the mid-1980’s, the 80386-based Intel
IPSC-1 computer became one of the more popular platforms for investigation of
parallel computing. The requirement for improved, topology-free communications led
to the Intel IPSC-2, also based on the 80386, but with the attractive feature of
“wormhole-like” technology which reduced inter-processor latency to a few hundred
microseconds and enhanced communication bandwidths to 2-3 MBytes per second.
Unfortunately, even with these enhancements, the performance and memory limitations
of the processor nodes limited the applicability of these devices, since few simulations
fell into the category where hundreds of processors could be exploited. The addition of
vector processor capabilities reduced the performance issue, but only at the expense of
significant code conversion to take advantage of the vector processing hardware.18

From the late 1980’s until recently, research in parallel reservoir simulation has
concentrated on RISC/Unix parallel computers which offered high computational
efficiencies and large memory capacities.13,14,20,21,22 The development of a Windows
NT version of the Message Passing Interface (MPI) in 1997 and 1998 made it possible
to include the PC as part of a cluster of workstations for performing parallel
simulations. The popularity of the Linux operating system on PC's led to further
enhancements in computational power for clusters of inexpensive computers. The
following sections discuss the application of a cluster of Windows 2000TM or Red
HatTM Linux workstations to parallel reservoir simulation.
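
A minimal sketch of the kind of message passing these clusters rely on is shown below, written with the Python MPI bindings (mpi4py) for illustration rather than the simulator's own code. Each process owns one slab of a one-dimensional grid and trades a single ghost (halo) cell with each neighbour, which is the basic communication pattern of a domain-decomposed simulator.

# Run with, for example:  mpiexec -n 4 python halo_demo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank owns 10 interior cells plus one ghost (halo) cell on each side.
p = np.full(12, float(rank))
left = rank - 1 if rank > 0 else None
right = rank + 1 if rank < size - 1 else None

# Post non-blocking sends of the outermost interior cells ...
requests = []
if right is not None:
    requests.append(comm.isend(p[-2], dest=right))
if left is not None:
    requests.append(comm.isend(p[1], dest=left))
# ... then receive the neighbours' values into the ghost cells.
if left is not None:
    p[0] = comm.recv(source=left)
if right is not None:
    p[-1] = comm.recv(source=right)
for req in requests:
    req.wait()

print("rank", rank, "ghost cells:", p[0], p[-1])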

Extensions to Clusters of PC’s

As mentioned above parallel computing has taken many forms over the past two
decades. Mainframe shared-memory devices evolved to RISC-based systems with both
distributed and shared memories. Early versions of massively-parallel systems
attempted to use commodity chips such as the Intel 386 and 486 as processing
elements. Although low in cost, these early Intel CPU's lacked the performance and
support for large memory sizes required for effective use in parallel simulation, and the
resulting systems were expensive for the performance they delivered.
However, recent architectural advances have overcome these obstacles. With clock
speeds exceeding 3000 Megahertz and support for memory sizes up to several
gigabytes, the new Intel and AMD-based platforms have exhibited the ability to handle
substantial simulations in serial mode. Using these commodity CPU’s in tandem to
perform parallel simulation further exploits their low cost. The problem then was how
to apply a parallel cluster of Windows-based workstations to reservoir simulation. To
better understand the performance of PC clusters, a cluster of 64 Pentium 4 2.4
gigahertz CPU’s with 2 gigbytes of memory per node was utilized to solve black-oil
and compositional simulation problems with up to millions of cells for each model.
Computational performance comparable to more sophisticated systems was obtained.
Earlier work42 focused on reducing the overhead of message passing and interprocessor
communication, increasing the inter-node bandwidth, and predicting the ultimate effect
of these two factors on the scalability of the system. Several communication schemes
were investigated, including an NT version of the Message Passing Interface (MPI)
with off-the-shelf Ethernet hardware as well as a custom high-performance messaging
layer constructed over the Virtual Interface Architecture (VIA) and a high-speed
System Area Network (SAN). This earlier work predicted that indeed, with a high-
bandwidth, low-latency network, clusters could scale to more than 32 nodes. As shown
in the results below, this prediction was actually somewhat conservative.

TABLE 2: Sequential and SMP Head-to-Head Comparisons for the North Sea Model
(elapsed times in minutes)

Number of   Compaq 450 MHz     Dell 2450   Dell 330      Sun Ultra 60   IBM 43P-260   IBM 43P-270
CPU's       (TCP + Giganet)    733 MHz     1.5 GHz       360 MHz        260 MHz       375 MHz
                                           (serial)
1           296                192         121           314            215           127
2           181
4           106
8           64

The sequential results in Table 2 show amazing performance for a Pentium IV, 1.5 GHz
PC. The speed on this computer now exceeds that of one of the more powerful Unix
workstations – the IBM 43P-270 (375 MHz). This was achieved with no changes in the
simulator software – the same executable was used for all of the PC tests. With better
optimization for the Pentium IV even greater speeds seem highly likely. The 733 MHz
Pentium III Dell performs at about two-thirds of the Pentium IV and comparable to the
IBM 43P-260. Finally, Table 2 shows serial performance for the Compaq 450 MHz
cluster to be comparable to the Sun Ultra 60. The parallel performance is good up to 8
processors with about a factor of 5 speedup. The lack of scaling from 1 to 2 processors
is primarily due to the use of a serial non-decomposed grid solver for the 1 processor
case, and a decomposed grid and associated parallel solver for the 2 processor case. Scaling above
2 processors is good, but some message passing overhead is observed in the elapsed
time in going from 4 to 8 processors.

Parallel performance on Linux machines appears to show even better results than had
been predicted from early experiments. The use of the low-latency, high-bandwidth
Myrinet switch led to the outstanding scale-up for a one-million cell compositional
model shown in Figures 6 and 7. These results show that indeed, PC clusters can now
be competitive with the largest and fastest parallel Unix systems. As shown in Figure
7, the scaling to 64 processors was about 35 times the scalar performance. The speed
of these simulations is more than one-half of the speed demonstrated by the fastest
Unix machine – the IBM P690 Regatta. These results are for the Pentium 4, 2.4
gigahertz cluster mentioned above.

The availability of reservoir simulation on inexpensive commodity hardware has
provided the reservoir engineer with highly-available tools to make simulation with
minimal upscaling commonplace. The following sections describe typical full-field
applications which have been performed with parallel reservoir simulation.

THE APPLICATION OF PARALLEL COMPUTING

The power and efficiency of parallel computing has led to the ability to not only
investigate large models, but also to include complex physics and grids for these
models. An example application provides a good platform for the discussion of these
new capabilities.

Simulation at the Scale of the Geological Model with Locally Refined Grids

A recent work by Jutila, et al.43, investigated the simulation of a retrograde-condensate
field with a grid at the geological level with locally-refined grids around the producing
wells. The field consists of two main areas: the subsea gathering center producing from
two areas and the platform area producing from an additional three main layers. The
study was undertaken to better understand the loss of productivity which had occurred
in the wells in the platform area with up to a 40% reduction in production during the
first few months of well life. It was suspected that the production loss was primarily
due to the build-up of a retrograde condensate bank or “halo” around the producers
with the associated reduction in gas relative permeability. The idea behind the study
was to improve the field productivity by investigating four different well types: 1) a main
area well with good permeability, three layers, and no gas-water contact; 2) a western well with
low permeability, one layer, distant from the platform; 3) a northern well with two main layers,
low permeability, overlying a high-pressure shale; and 4) an eastern well with one better layer and one
lower-quality layer with a gas-water contact.

The static reservoir model was based on a gOcad geological dataset of this combination
structural/stratigraphic trap. Two cross-sections of the geological model are shown in
Figure 8. These cross sections show the minor amount of upscaling which was
performed to reduce the maximum number of simulation layers to 36 with further
coarsening in the aquifer area. A total of 46 geological layers were used for the
simulation model. An areal grid of 254x168 cells was used as shown in Figure 9. The
overall number of cells in the model was 979,000 with about 320,000 active cells.
Since near-well physics was a dominant mechanism in the reservoir production, 5- and
12-component compositional fluid characterizations were used along with grid
refinement. Local grid refinement was first introduced in the literature by Mrosovsky
and Ridings44 in the mid-1970’s. The treatment of local grid refinement was enhanced
by several authors45-48 during the next decade. The near-well refinements for the North
Sea retrograde-condensate study are shown in Figures 10-12. Figure 10 shows the
location of the well refinements throughout the model. Figures 11 and 12 indicate the
7x7 grid around the well. These figures also show the extreme importance of the local
grid in capturing the dropout of condensate near the wells. The purple and brown
colorations of the grid indicate the increased condensate saturation. Figure 12 also
shows the variation of condensate saturation with depth (layer) due to varying
permeabilities and associated variation in well pressure drop.
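
The geometry of such a near-well refinement is easy to sketch: the host cell containing the well is subdivided into a 7x7 set of smaller cells. The uniform subdivision and the cell size used below are illustrative assumptions only, not taken from the study.

import numpy as np

def refine_cell(x0, y0, dx, dy, nx=7, ny=7):
    """Corner coordinates of an nx-by-ny local refinement of one host cell
    with origin (x0, y0) and dimensions dx by dy."""
    xs = x0 + dx * np.arange(nx + 1) / nx
    ys = y0 + dy * np.arange(ny + 1) / ny
    return xs, ys

# 7x7 refinement of a 200 ft x 200 ft host cell containing a producer
xs, ys = refine_cell(0.0, 0.0, 200.0, 200.0)
print(xs)   # eight x-coordinates delimiting the seven refined columns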

Now that upscaling and near-well physics could be more accurately modeled, the
requirement of including the surface facilities became the next step in the move toward
more realistic simulations.

IMPACT OF COUPLED SURFACE FACILITIES SIMULATION

The recent work of Brian Coats, et al.49, summarizes the evolution of coupled surface
facility-reservoir simulation which spans several decades. In typical reservoir models,
flow in the reservoir and flow between the reservoir and wellbores is decoupled. The
decoupling can be either at the bottomhole or at the wellheads, from flow through the
remainder of the production and injection facilities through specification of pressure
and/or rate constraints for each well. If individual well rates and pressures are known
from production history, then the decoupled reservoir/well model is sufficient to match
historic reservoir behavior by specifying and matching the observed boundary
conditions as a function of time. However, when used in predictive mode for reservoirs
with gathering and distribution networks, the proper decoupled well boundary
conditions are in general variable in time and are dependent on reservoir behavior,
equipment performance, production strategy, hydraulics relationships, and pressure,
rate, and source composition constraints that may be applied within the surface
network. When production is controlled in the surface facilities, it is in general
necessary, or at least desirable, to include the facilities in a full field model to predict
how the otherwise specified boundary conditions will vary in time.

The simplest example of this decoupling is the decoupling of the reservoir model from
facilities at bottomhole well locations, requiring specification of bottomhole pressure
and/or rate constraints for each well. If the system is truly constrained by well
tubinghead pressures and if the composition is varying, then the proper bottomhole
pressure constraints are variable in time and are impossible to predict without
knowledge of the tubinghead pressure constraint, the hydraulics relationship in the well
tubing, and the composition of the produced fluids as a function of time. Therefore, the
decoupling is dangerous, as bottomhole pressure constraints may be specified which
will allow wells to flow, when in fact they cannot flow in the true system (the specified
bottomhole pressure cannot be achieved). Most reservoir models can handle this
specific case by including the tubing in the well model implicitly, but the same concept
applies, for example, to a group of wells flowing against a common manifold pressure
constraint.
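
The danger described above can be made concrete with a deliberately oversimplified tubing-hydraulics estimate: the bottomhole pressure implied by a fixed tubinghead pressure depends on the produced fluid mixture, so a bottomhole constraint chosen today may be wrong later. The gradients and GOR correction below are invented numbers for illustration only, not a real hydraulics correlation.

def bhp_from_thp(thp_psi, depth_ft, water_cut, gor_scf_stb):
    """Crude hydrostatic estimate of the bottomhole pressure implied by a
    tubinghead pressure constraint (no friction, no slip; illustrative only)."""
    oil_grad, water_grad = 0.35, 0.45                        # psi/ft, assumed
    gas_lightening = 0.05 * min(gor_scf_stb / 2000.0, 1.0)   # crude GOR effect
    gradient = water_cut * water_grad + (1.0 - water_cut) * oil_grad - gas_lightening
    return thp_psi + max(gradient, 0.05) * depth_ft

# The same 500 psi tubinghead constraint implies very different bottomhole limits
print(bhp_from_thp(500.0, 10000.0, water_cut=0.1, gor_scf_stb=500.0))
print(bhp_from_thp(500.0, 10000.0, water_cut=0.8, gor_scf_stb=2000.0))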

As this is an obvious limitation of decoupled reservoir simulation, many authors have
presented methods for simultaneous solution of the reservoir and facility equations.
Most methods are based on modification of a reservoir simulator to iteratively converge
separate solutions of the well and facility domains (sometimes referred to as an
equilibration loop) prior to a conventional solution of the combined reservoir and well
domains. We refer to all these methods as loosely-coupled or closely-bound because at
no point are all domains solved simultaneously. The methods differ according to the
frequency of equilibration and the definition of final timestep convergence. If
equilibration is performed only on the first Newton iteration of each timestep, then the
surface model is coupled at the timestep level. If it is performed every Newton
iteration, the coupling is at the iteration level. Falling in between, a partial iterative
coupling performs the equilibration for some number of iterations. The frequency of
equilibration may also be controllable in time, with a conventional decoupled method
used in between equilibrated timesteps. The coupling method is further classified as
explicit, partially implicit, or implicit, with respect to the facility solution, based on the
final timestep convergence criteria and coupling level. If only convergence of the
reservoir equations is required, the method is explicit if coupled at the timestep level,
partially implicit if coupled at the iteration level. If convergence of the reservoir
equations and the well/facility boundary conditions is required, then the coupling is
said to be implicit because it yields an effectively implicit facility solution, regardless
of the coupling level. The overall level of implicitness of the model depends on the
implicitness of the coupling and on the implicitness of the equations within each
domain.
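
The coupling levels defined above can be summarized with a schematic driver loop. Everything below is a placeholder sketch (the stub reservoir and facility objects are invented for illustration); it only shows where the well/facility equilibration sits relative to the Newton loop for timestep-level versus iteration-level coupling.

class StubFacilities:
    """Placeholder surface-network model (not a real facility simulator)."""
    def equilibrate(self, well_pressures):
        self.bc = {w: max(p - 50.0, 100.0) for w, p in well_pressures.items()}
    def boundary_conditions(self):
        return self.bc

class StubReservoir:
    """Placeholder reservoir model that 'converges' after three iterations."""
    def __init__(self):
        self.pressures = {"W1": 3000.0, "W2": 2800.0}
        self.iters = 0
    def well_pressures(self):
        return dict(self.pressures)
    def set_well_constraints(self, bc):
        self.constraints = bc
    def newton_iteration(self):
        self.iters += 1
    def converged(self):
        return self.iters >= 3

def run_timestep(reservoir, facilities, coupling="iteration", max_newton=12):
    """Timestep-level coupling equilibrates the facility model once per step;
    iteration-level coupling repeats it every Newton iteration."""
    for newton in range(max_newton):
        if coupling == "iteration" or newton == 0:
            facilities.equilibrate(reservoir.well_pressures())
            reservoir.set_well_constraints(facilities.boundary_conditions())
        reservoir.newton_iteration()
        if reservoir.converged():
            return newton + 1
    raise RuntimeError("timestep did not converge")

print(run_timestep(StubReservoir(), StubFacilities(), coupling="iteration"))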

Early models, beginning in 1971 with the work of Dempsey50 for gas/water systems,
used timestep level explicit couplings of the well and surface domains, including
simple surface models integrated into the reservoir code. Extensions to black oil
systems were presented by Startzman, et al.51, and Emanuel and Ranney52. More
recently, Litvak and Darlow53 presented a coupled model with the network model
integrated within commercial black oil and compositional simulators. This was the first
reported coupled compositional model. They described both fully coupled and partially
coupled methods, but only the partially coupled method is currently implemented. The
implementation by default equilibrates for two iterations in each timestep, and timestep
convergence is based on the reservoir equations alone, resulting in a partially implicit
method. Extensions of the model to provide integration with production optimization
strategies applied to BP’s Prudhoe Bay field54, tabular representation of compositional
dependence for improved efficiency55, automatic history matching56, and more
advanced production optimization strategies57,58 have been presented.

Schiozer and Aziz59 showed that two acceleration techniques applied to the iteratively
coupled implicit method can improve its efficiency. The first technique investigated
was a preconditioner applied at the beginning of the timestep. It provided estimates of
the new time level reservoir boundary conditions for use in the well/surface
equilibration loop on the first Newton iteration. The second technique extended the
well subdomains to include part of the reservoir grid surrounding each well, such that
the boundary between the reservoir and the well domains is moved from the sandface
out into the reservoir, creating overlap of the well domains with the reservoir. The idea
is to move the boundary to a set of surfaces where conditions are changing less rapidly
while also including parts of the reservoir grid in the well/surface equilibration loop.
Boundary conditions at the well/facility domain interfaces were represented as
linearizations. A fully coupled method was also investigated, but it was concluded that
it was inefficient for complex facilities.

Byer, et al.60,61, and Byer62 extended Schiozer's work to a fully coupled method, in
which the equations for all domains are solved simultaneously at the end of each
simulator Newton iteration, while retaining the extended definition (into the reservoir)
of the well subdomains. Also provided for were domain decomposition and parallel
treatment within the reservoir and the facilities, and an adaptive implicit reservoir
formulation. Rather than using the explicit preconditioning method of Schiozer, a
coarse grid solution was performed prior to equilibration of the well and facilities
domains within each Newton iteration for improved estimates of reservoir boundary
conditions. It was shown that some optimal extent of the well subdomains into the
reservoir minimized CPU time for a given case. The coarse grid solve and well/surface
equilibration were collectively referred to as ‘preconditioning’. It was claimed that the
fully coupled method could be made efficient by including this preconditioning, but it
is difficult to determine the overall utility of the method from results presented, since
example timings were given relative to the case with no preconditioning. An
equivalent analysis for a conventional model with no facilities would be to remove the
well model, and to compare results obtained using some other preconditioning method
to the case with no well model.

Several models have been presented in which advanced surface facility models
developed for production engineering applications (standalone facilities modeling, or
facilities modeling integrated with simplified reservoir models) are used in place of
simpler network models integrated into the reservoir code. Since the effort required for
development of these advanced network models is comparable to that of reservoir
simulators, coupling requires very little work compared to the effort invested in each.
Some work is usually required to guarantee consistency in the treatment of phase
behavior in the two models, especially for compositional systems. Hepguler, et al.63,
and Tingas, et al.64, presented timestep level implicit couplings of (black oil) reservoir
and network simulators. Tingas, et al., allowed for well/facility coupling at either
bottomhole or tubinghead locations, and also reported that partially coupled models
were under development at Amoco as early as the 1960's. Trick65 extended Hepguler's
work, using a different network model, to the iteratively coupled implicit method,
reporting up to a thirty-fold improvement in efficiency.

Application of Coupled Surface Facilities

The techniques described above can be briefly summarized as the following three
separate approaches to the modeling of coupled surface facilities/reservoir simulation:

• Loosely coupled – data from the simulation model is explicitly passed to a
full-featured surface network model
• Closely bound – surface facility simulation is performed within the
reservoir simulation package, but coupling to the simulator is explicit at
some level
• Fully-implicit – the surface facility equations are solved as part of the
linear/non-linear iteration scheme.

Currently, explicit coupling techniques are widely used to include surface facilities
calculations in the simulation. Unfortunately, this often leads to significant instabilities
and/or extremely small timesteps. The degree of instability also depends on the level of
the coupling. Coupling at the tubinghead as opposed to the sandface appears to lead to
greater instabilities and/or difficulties in convergence. Similarly, explicit coupling at
the Newton iteration level appears to be considerably more stable than coupling only at
a timestep or multiple timestep level. Clearly, a more implicit technique would be quite
desirable to alleviate the problems. The cost, of course, is the greater computation time
required for the fully-implicit linear and non-linear solutions. This increased cost can
often be overcome by the improved stability so that the overall CPU time for the fully-
implicit solution is actually less than that of the competing techniques. This is described in
reference 45 in detail for several examples.

An example of a loosely coupled technique is shown in Figure 13. In this example,
using a popular surface network model (Petroleum Experts’ GAP), a simulator is tied to
the surface network at the sandface of the wells. The link utilizes the “GAP Open
Server” tied to the reservoir simulation model via a “Master Communication Program”.
The “Master Communication Program” could be an Excel spreadsheet with VBA
interface, but in this instance a high-level language was used to create a custom
application. The flow control starts by launching three simultaneous jobs – the
reservoir simulation, the master control program, and the GAP model. The simulation
then begins a timestep. At the end of the first Newton iteration, well IPR’s, WOR’s,
GOR’s, and bottomhole pressures are sent to the master control program via an ASCII
file. Since the amount of this data is small, the time spent in the I/O of data to the
Master Control Program and from the GAP Open Server is insignificant for all but the
smallest reservoir simulations. Once the Master Control Program detects that
information is available, the GAP model is loaded with the data and a simulation of the
static surface network is performed. At this point constraints and optimization can be
included in the network solution. The GAP solution is then passed back to the reservoir
simulator as well rate constraints and the next Newton iteration begins. Generally, the
surface network calculations are limited to the first three Newton iterations to achieve
convergence of bottomhole pressures to within a few psi.
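
A skeleton of the "Master Communication Program" role described above is sketched below. The file names, file layout, and the placeholder network_solve routine are assumptions made for illustration; the real link goes through the GAP Open Server rather than a stub.

import csv, os, time

def network_solve(well_data):
    """Stand-in for the surface-network solution; here it simply caps each
    well at an assumed maximum rate instead of calling the network model."""
    return {well: min(d["rate"], 5000.0) for well, d in well_data.items()}

def master_control(inbox="wells_from_sim.csv", outbox="rates_to_sim.csv"):
    """One pass of the exchange: read the well data written by the simulator,
    solve the (stub) network, and write rate constraints back."""
    while not os.path.exists(inbox):          # wait for the simulator's file
        time.sleep(1.0)
    with open(inbox) as f:
        well_data = {row["well"]: {"bhp": float(row["bhp"]),
                                   "rate": float(row["rate"])}
                     for row in csv.DictReader(f)}
    constraints = network_solve(well_data)
    with open(outbox, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["well", "max_rate"])
        for well, rate in constraints.items():
            writer.writerow([well, rate])
    os.remove(inbox)                          # signal that the data was consumed

if __name__ == "__main__":
    master_control()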

The closely-bound approach differs little from the loosely-coupled technique as shown
in Figure 14. The main difference lies in the fact that the surface network and reservoir
simulator are in the same executable and data is simply passed from one control section
of the program to the next through memory. Otherwise, for the explicit approach, the
work flow is quite similar to that described above. One significant difference, however,
is the fact that in the simulator used in this investigation, the surface network and
tubing hydraulics calculations were performed using a fully-compositional equation of
state for fluid properties. This more accurate treatment produces densities and
compressibilities that are more consistent with the reservoir simulation.

A Fully-Implicit Solution Method for Coupled Surface Facilities

The technique described by Brian Coats, et al.49, gives one approach to the solution of
the fully-implicit coupled reservoir simulation/surface facilities problem. At the
beginning of each simulator Newton iteration, mobilities and densities are computed for
each reservoir cell either containing a perforation or being treated implicitly in the
reservoir. These are the variables, in addition to the reservoir grid cell pressures and
compositions, which couple the network equations to the reservoir equations. The
network equations are then solved by Newton iteration, holding fixed the current iterate
values of the reservoir variables. The form of the decoupled primary system of
network equations is as follows:
\begin{bmatrix} A_{ff} & A_{fp} \\ A_{pf} & A_{pp} \end{bmatrix}
\begin{bmatrix} \delta x_f \\ \delta x_p \end{bmatrix}
= - \begin{bmatrix} R_f \\ R_p \end{bmatrix} \qquad (2)
where f and p denote facility and perforation (facility designates network equations
other than the perforation rate equations), R are the residuals, and A (the Jacobian)
contains the derivatives of the residuals with respect to the variables x.

At the start of each network iteration, the network equations are checked to determine
which are active. For connections representing adjustable devices, the limiting
constraint for a given iteration is taken as the constraint which is most violated.
Additional constraint checking is required to detect and prevent over-constrained
systems due to combinations of rate and pressure constraints. When an over-
constrained system is detected, constraints are eliminated using estimates of which
constraints are limiting. These estimates must become accurate as convergence of the
applied constraints is iteratively approached.

At convergence of the network equations, the fully coupled system of network and
reservoir equations is assembled, which is represented by
\begin{bmatrix} A_{ff} & A_{fp} & 0 \\ A_{pf} & A_{pp} & A_{pr} \\ 0 & A_{rp} & A_{rr} \end{bmatrix}
\begin{bmatrix} \delta x_f \\ \delta x_p \\ \delta x_r \end{bmatrix}
= - \begin{bmatrix} R_f \\ R_p \\ R_r \end{bmatrix} \qquad (3)
where r denotes reservoir, and where the terms appearing in Eq. (3) are the values from
the network domain solution (the network domain was decoupled from the global
system by assuming δxr to be zero). The reservoir cell mass and pressure coefficients
of the perforation rate equations, Apr, are added to the perforation rate equations. These
include coefficients due to the implicit treatment of perforated reservoir cell mobilities.
The reservoir conservation equation residuals (Rr) and coefficients (Arr, Arp) are then
built. Arr are the coefficients due to intercell flow and accumulation. Arp are the
coefficients of the perforation rate terms (Qip), and for the rows of component
conservation equations for a cell, these are identity submatrices for (columns
corresponding to) each perforation contained in the cell. Generation terms are currently
taken as zero. The values of the perforation rate terms in the residuals are provided by
the network domain solution.

At this point, the global system of equations has been built and is ready for elimination
of the secondary reservoir equations and variables. For reservoir cells using the
IMPES reservoir formulation, the conservation equations (the secondary equations) are
used to eliminate the mass coefficients of the cell volume constraint creating the
pressure equation, and are also used to eliminate the cell mass coefficients of the
perforation rate equations. For fully-implicit reservoir cells, the volume constraint (the
secondary equation) is used to eliminate the mass coefficients of the last component
(water) in the reservoir conservation equations and, for cells containing perforations, in
the perforation rate equations. These linearized and reduced reservoir, modified
perforation, and other network equations can also be represented by Eq. (3), with the
reduction having modified the dimensions and values of Rr, Rp, Arr, Arp, and Apr, and the
values of App. The equations are solved at the end of the simulator Newton iteration
with an unstructured solver. The reservoir domain (Rr, Arr, xr) may be divided into
subdomains, while the network system is treated as a single domain. Because of the
form of the network equations, standard iterative solution techniques cannot be applied
to the network domain. We currently apply a direct solve to the network domain using
sparse elimination with partial pivoting.
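
To make the block structure of Eq. (3) concrete, the sketch below assembles a tiny made-up system (two facility equations, two perforation equations, three reservoir cells) and solves it with a sparse direct LU factorization with partial pivoting. The coefficient values are arbitrary; this is only a structural illustration, not the simulator's unstructured solver.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Block sizes: 2 facility (f), 2 perforation (p), 3 reservoir (r) equations.
A_ff = sp.csc_matrix([[4.0, -1.0], [-1.0, 3.0]])
A_fp = sp.csc_matrix([[1.0, 0.0], [0.0, 1.0]])
A_pf = sp.csc_matrix([[1.0, 0.0], [0.0, 1.0]])
A_pp = sp.csc_matrix([[5.0, 0.0], [0.0, 5.0]])
A_pr = sp.csc_matrix([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
A_rp = A_pr.T
A_rr = sp.csc_matrix([[10.0, -1.0, 0.0], [-1.0, 10.0, -1.0], [0.0, -1.0, 10.0]])

A = sp.bmat([[A_ff, A_fp, None],
             [A_pf, A_pp, A_pr],
             [None, A_rp, A_rr]], format="csc")
R = np.array([0.5, -0.2, 1.0, -1.0, 0.1, 0.0, -0.1])   # residuals [R_f, R_p, R_r]

dx = splu(A).solve(-R)       # sparse LU with partial pivoting (SuperLU)
print(dx)                    # Newton update [dx_f, dx_p, dx_r]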

Advantages of the Coupling Schemes

Each of the coupling schemes has specific advantages over the others. The loosely-
coupled scheme can take full advantage of the rigor of third-party software for surface
facility modeling. This includes more comprehensive treatment of artificial lift
optimization and automated optimization of surface flow rates to maximize an
objective function such as oil production or total revenue. The closely-bound technique
described above provides a capability of rigorous treatment of the compositional
phenomena. Because the closely-bound method is part of a single software package,
input data is likely to be more consistent between the simulation model and the surface
network. Finally, the fully-implicit scheme provides the rigor of the closely-bound
technique for treatment of the compositional properties and the stability of a fully-
implicit method with associated larger timesteps.

Multi-reservoir Coupled Surface/Subsurface Modeling

One great advantage of inclusion of both surface and subsurface models is the ability to
understand interactions that occur among multiple reservoirs which are coupled at the
surface. An example of this is taken from a study performed in Algeria. A schematic
is shown in Figure 15 indicating the overall layout of the surface and subsurface for the
study. Four separate reservoirs were involved, two of which were in close
communication through a common aquifer. Figure 15 also captures the surface
network as well as the topography obtained through LIDAR data. As shown in the
figure, the four reservoirs share in a common surface network that leads to a single gas
plant and a common water injection facility. Each of the reservoirs was modeled
compositionally, with up to 30 components used to maintain the accuracy
of the equation of state and viscosity correlations relative to the stand-alone
models, which utilized only 20 components each. The combination of the heaviest 5
components was varied from reservoir to reservoir to capture the exact phase behavior
observed in the original models. Questions answered by the study included how to
make the most efficient use of the existing gas and water injection plants to maximize
recovery, which portions of each field should be miscibly flooded, how to mix and
match injection gas among the fields, and finally should gas be purchased to expand the
WAG floods. This model included parallelization of both the reservoir simulation and
the surface network using the closely-bound approach described above. In an
interesting comparison of the closely-bound (SIM/SPN) and loosely-coupled
(SIM/GAP) techniques, this Algerian model was used. The comparison showed similar
pressure drops in the facilities and well bottomhole pressures. Compute times for the
loosely-coupled method depended strongly on the level of optimization which was
required for the surface facilities.

Beyond History Matching

Inclusion of coupled surface facilities in the predictive reservoir simulation model
presents a significant dilemma for the reservoir engineer. After a considerable effort
has been spent on the matching of production history for the subsurface model, the
problem then becomes how to perform predictions of future field performance under
the constraints imposed by the surface facilities. It is clear that unless some effort is
spent to match observed pressure drops and flow rates in the actual production facilities
and tubing strings, there is little hope to expect realistic predictions from the simulation
model. Litvak, et al.56, present a scheme for accomplishing this difficult task using an
automatic tuning procedure. The idea is relatively simple in concept, but somewhat
complex to implement. For a given period at the end of the production history,
historical pressure drops and flow rates are saved and compared with simulated values.
For each production well, the well index is adjusted to minimize the error between
observed and calculated bottomhole pressures for each historical well. The scheme
then proceeds to the minimization of the observed surface network errors at each level
of the network by adjusting flowline roughness, length and diameter. Finally the
calculation proceeds to the last stage in which the tubinghead pressures are matched by
adjusting in sequence the tubing string roughness coefficient followed by the tubing
string length and finally the tubing diameter. Since the error function is highly non-
linear in the parameters, many passes of the optimization are required to converge. An
example of this is shown in Figure 16 in which the effect of tuning is compared with no
tuning for a field example. As shown in the figure, the differences in observed
manifold pressures at one point in the network are substantially reduced using this form
of tuning scheme. Once the tuning has been accomplished, it is now possible to
perform predictive simulations using the surface network model with a great deal more
confidence. As additional historical data is acquired, the tuning procedure can be rerun
at the advanced time to further improve confidence levels for the predictions.
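
The first stage of such a tuning procedure (adjusting a well index to reproduce an observed bottomhole pressure) can be reduced to a one-dimensional minimization, as in the toy sketch below. The inflow model, numbers, and use of a generic scalar optimizer are illustrative assumptions, not the procedure of Litvak, et al.56.

from scipy.optimize import minimize_scalar

def simulated_bhp(well_index, reservoir_pressure=4000.0, rate=2000.0):
    """Toy inflow model: p_wf = p_res - q / WI (illustrative units)."""
    return reservoir_pressure - rate / well_index

def tune_well_index(observed_bhp, wi_bounds=(0.1, 100.0)):
    """Adjust the well index to minimize the squared mismatch between the
    simulated and observed bottomhole pressure for one history period."""
    error = lambda wi: (simulated_bhp(wi) - observed_bhp) ** 2
    return minimize_scalar(error, bounds=wi_bounds, method="bounded").x

print(round(tune_well_index(observed_bhp=3200.0), 3))   # expect q/(p_res - p_wf) = 2.5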

The works of Litvak, et al.56, and Wang, Litvak, and Aziz57 carry the concept of
coupled surface facilities to the next level to include production optimization. Because
of the number of producing wells in the Prudhoe Bay Field, Litvak, et al.58,
introduced the concept of the well proxy model to allow optimization to proceed
efficiently. Using this technique, field production is optimized regularly in the control
room of the field in real time.

The Role of Uncertainty in Full-Field Modeling – The Grid Computing Solution

The role of uncertainty in full-field modeling has become increasingly recognized over
the past decade. It is clear that a great deal of uncertainty exists not only in the
earth model, but also in the fluid and rock properties. In addition, for future
predictions there are many possible production and/or recovery scenarios, including
surface facility design, which must be included in the analysis. Gorell and Bassett66
and Narayanan, Cullick, and Bennett67 present different aspects of this problem.
Models must be built to assess uncertainty and to understand project risk. The
problem, of course, is how to accomplish this formidable task without sacrificing
model accuracy. The assessment of geological uncertainty alone can lead to an almost
infinite number of simulations which must be performed. If we add the problem of
production optimization through the modeling of different scenarios, then the task
appears to be impossible. A great deal of research has been undertaken to reduce the
problem to a manageable size. The first approach might be to use extremely coarse
models or even material balance models so that large numbers of simulations can be
performed to cover the parameter space. Unfortunately, this returns the uncertainty
problem to the inaccuracies discussed in the first sections of this paper: reducing
the problem via grid coarsening leads to systematic errors which may lead us down
incorrect paths. At first glance, response surface techniques appear promising as a
way to understand the parameter space. Unfortunately, the assumptions underlying
response surfaces are so inherently limiting that the results are often questionable.

Another approach is to limit the number of simulations so that only a few key
parameters and/or scenarios are covered. The problem, of course, is deciding which
parameters are key and which can be ignored. The answer appears to be that both
approaches must be undertaken: perform as many coarse-model simulations as required to
understand the parameter space, and then link these to finer-grid models which capture
the essence of the coarse-model investigations.

A new concept in computing may provide an excellent possibility of limiting the grid
coarsening required for the first step of the uncertainty problem. Grid computing
provides the capacity to perform literally thousands of simulations at a moderate scale
within a few hours of elapsed time. The concept of grid computing is not new. It
began as simply a way to harness unused CPU cycles from distributed computers in a
network in the early 1970’s. The introduction of the internet and the growth of network
bandwidth led to the inevitable idea of utilizing huge numbers of computers during idle
times to accomplish unheard-of tasks. The most successful and popular of the grid
projects is SETI@home, which harnesses more than 2 million home and office computers
in the search for extraterrestrial intelligence in radio telescope data; commercial
platforms such as United Devices ( www.ud.com ) apply the same idea to enterprise
grids.

The concept of grid computing is quite simple. Given a set of computers which can
communicate over a common network (or simply the internet), an agent is installed on
each machine. The grid agent then communicates with a master node (or meta-processor)
which tracks the workload of all computers in the grid. Job submission becomes a
matter of combining all of the relevant files: data, license, and executable. These
files are encrypted and sent to a machine which has indicated it can accept work. A
different set of data is sent to each machine so that the entire set of desired
parameters is investigated. To ensure that all required jobs are run, there is
redundancy in the system so that a single job may be submitted several times, if
required. The redundancy is necessary since it is likely that some machines will fail
or simply be disconnected from the network during the course of the submitted job.
The jobs are run at extremely low priority so that there is little or no perceived
interference should the owner of the machine wish to perform other work. As the
simulations are performed on the grid, the progress of the overall job can be
monitored using a console on the master node. Finally, as each job completes, the
required results are passed back to the submitting node to be collated and processed
for the final report. This report can simply be a plot of oil production versus time
or, more likely, the economics of the particular scenario. Since all of the
calculations, including the economics, are completely independent, grid computing
provides an ideal mechanism for assessing uncertainty and the associated risk. Figure
17 depicts a typical grid in which the power of many different types of computers is
harnessed in the solution of the overall problem. Figure 18 shows a flow chart of a
grid in which servers schedule tasks submitted by the user through the console and the
agents pick up the work units as cycles become available. A database keeps track of
the application tasks and data, the device statistics, and other required information.
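
A skeleton of the master-node bookkeeping described above might look as follows. This
is not the United Devices interface; the work-unit contents, the transport layer, and
the agent callbacks are hypothetical, and the sketch only shows how redundancy and
result collation fit together.

import queue

# Each work unit bundles the files a remote agent needs: data, license, executable.
work_units = queue.Queue()
for case_id in range(1000):
    work_units.put({"case": case_id,
                    "files": ["reservoir.dat", "license.dat", "simulator.exe"]})

in_flight = {}    # work units currently assigned to an agent
results = {}      # collated results, keyed by case number

def dispatch(agent_id):
    # Hand the next (encrypted) bundle to an agent that has reported idle cycles.
    unit = work_units.get()
    in_flight[agent_id] = unit
    # send_encrypted(agent_id, unit)  -- transport layer omitted from this sketch

def agent_lost(agent_id):
    # Redundancy: a unit on a failed or disconnected agent goes back in the queue.
    work_units.put(in_flight.pop(agent_id))

def agent_finished(agent_id, payload):
    # Returned results are collated on the master node for the final report.
    unit = in_flight.pop(agent_id)
    results[unit["case"]] = payload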

To test the validity of grid computing for assessing uncertainty with reservoir
simulation, a Monte Carlo approach was taken. In this method, normal distributions of
permeability and porosity multipliers were assumed, with specified minimum and maximum
cutoff values; the sampled distributions then supplied multipliers for the original
data. One thousand cases were submitted for the 125,000-cell model shown in Figure 19.
Simulation times for this model varied from 30 minutes to 2 hours depending on the
difficulty introduced by the property multipliers. The complete set of simulations was
run overnight on a 1,000-node grid, and results were transferred to the master node as
each case completed. In this example, more than 1,000 hours of CPU time were delivered
in the course of about 3 hours of elapsed time; on a single machine the simulations
would have required more than one month. It is interesting to look at the details of
the grid which was used for the study. As shown in Figure 20, the grid was scattered
across locations throughout the United States. Each location represents a retail PC
store with about 30 nodes available for execution, for a total of more than 8,000
nodes. The main problem encountered was how to distribute the data to the nodes.
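
The case-generation step of such a Monte Carlo run can be sketched in a few lines of
Python. The distribution parameters, cutoff values, and include-file keywords below
are assumptions (the study does not report them); the structure simply shows
truncated-normal multipliers being turned into one small input file per grid job.

import numpy as np

rng = np.random.default_rng(12345)
n_cases = 1000

def truncated_normal(mean, std, lo, hi, size):
    # Draw from a normal distribution, resampling any values outside the cutoffs.
    samples = rng.normal(mean, std, size)
    bad = (samples < lo) | (samples > hi)
    while bad.any():
        samples[bad] = rng.normal(mean, std, bad.sum())
        bad = (samples < lo) | (samples > hi)
    return samples

# Assumed distribution parameters and cutoffs for the property multipliers.
perm_mult = truncated_normal(1.0, 0.30, 0.2, 2.5, n_cases)
poro_mult = truncated_normal(1.0, 0.10, 0.7, 1.3, n_cases)

# One small include file per case; the keywords are illustrative, not a real format.
for case_id, (km, pm) in enumerate(zip(perm_mult, poro_mult)):
    with open(f"case_{case_id:04d}.inc", "w") as f:
        f.write(f"PERM_MULT {km:.4f}\nPORO_MULT {pm:.4f}\n")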

With an ultimate goal of more than 10,000 cases, each requiring 8 hours of CPU time,
the data distribution to thousands of nodes can be a significant bottleneck. That is,
the time to distribute the data may far exceed the time for the simulations if the
distribution is done in a naïve fashion. This problem has effectively been solved by
the developers of peer-to-peer file sharing, popularized by music-sharing services
such as Napster and Limewire ( www.limewire.com ). Figure 21 shows an example of
Limewire's Gnutella network. In the Gnutella network, each computer is able to
communicate with many computers simultaneously. In this fashion, if data required by
one computer is available locally, much of the bottleneck of serial master-to-node
transfer is eliminated; in Gnutella terms, the "time to live" of each request is
minimized. To accomplish this, data is simply sent to one computer at a given site.
This computer is then able to "broadcast" all required data or executables to the
additional local nodes. The grid computing standard Globus ( www.globus.org ) is
adding a protocol similar to Gnutella in the Globus-2 standard. This enhancement to
GridFTP should considerably reduce the communications overhead for the simulation of
uncertainty. To reduce overhead further, we have added automated perturbations from a
common set of persistent data. In this fashion, a small program at each node is able
to modify large amounts of data using small files which are unique to each task in the
Monte Carlo simulation. Figure 22 shows the power of even a small grid for solving
substantial problems. In this example, 240 simulations were performed on 20 nodes with
an overall speedup of more than 18. Results for these simulations were fed back to a
master node and displayed as shown in Figure 23, with both oil production and the
economic impact of the parameters. In the near future we will perform additional tests
on larger grids over both loosely-bound wide-area networks and tightly coupled local
networks.
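
The node-side half of the perturbation scheme is equally small. The sketch below
assumes the common property arrays are shipped once as a compressed archive and that
each task receives only the tiny multiplier file generated earlier; the file names and
keywords are illustrative.

import numpy as np

def build_case_properties(common_file="common_props.npz", case_file="case_0001.inc"):
    # Large shared arrays, distributed once and kept as persistent data on the node.
    props = np.load(common_file)
    # Tiny per-task file containing only the multipliers for this realization.
    mults = dict(line.split() for line in open(case_file))
    permx = props["PERMX"] * float(mults["PERM_MULT"])
    poro = props["PORO"] * float(mults["PORO_MULT"])
    return permx, poro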

The issue of security is always of prime importance when computations go outside of a
local firewall. The grid solution of United Devices provides a simple way of
addressing this concern. By routing all I/O within the simulator through a provided
encryption algorithm, no unencrypted data ever exists on any of the remote nodes, and
security remains intact.
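
As a generic illustration of encrypted-at-rest I/O (not the routines actually supplied
by United Devices), the simulator's file reads and writes can be wrapped so that only
ciphertext ever touches the remote disk; key handling would be managed by the grid
middleware and is outside this sketch.

from cryptography.fernet import Fernet

# Symmetric cipher as a stand-in for the vendor-provided encryption algorithm.
key = Fernet.generate_key()   # key management belongs to the grid middleware
cipher = Fernet(key)

def write_encrypted(path, data: bytes):
    # Every simulator write passes through the cipher before touching disk.
    with open(path, "wb") as f:
        f.write(cipher.encrypt(data))

def read_encrypted(path) -> bytes:
    # Reads decrypt in memory only; no plaintext is stored on the node.
    with open(path, "rb") as f:
        return cipher.decrypt(f.read())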

CONCLUSIONS

Due to the rapid growth of computing technology, reservoir simulation models have
been able to increase in size and complexity over the past several decades. Small
simulation models with only a few tens of cells on early computers gave way to
multimillion cell models running on large parallel computers and clusters. The
evolution included steps for “lumping” using pseudos to attempt to reduce the error
associated with upscaling. This error was minimized with the advent of parallel
computers, but some upscaling error still remained. The complexity of coupled
surface-subsurface modeling added even more to the demand for computing power, but it
also brought a crucial component, the surface model, into the equation. Investigation
of coupled formulations showed that a fully-coupled model can be as efficient as, or
more efficient than, a partially coupled model with explicit coupling when the coupling
is weak, and it can be considerably more efficient and accurate when the coupling is
strong. The generalization of the fully-coupled network model leads directly to the
modeling of advanced wells. Finally, the enormous computational requirements of
simulation that includes uncertainty and risk assessment can be met through the use of
grid computing. Grid computing will allow us to answer questions and solve problems
which were previously impossible.

THE FUTURE

There is no doubt that demand for larger and more complex models will continue to
grow. How to reconcile this with the problem of model uncertainty will continue to be
an area of research. One possibility is to utilize the ever-decreasing physical size
of compute clusters as part of a large grid network. In this fashion, model sizes can
remain large while at the same time risk can be assessed. The ability to simulate
thousands of million-cell models remains a goal. With such a capability, it will be
possible to better understand the impact of upscaling on the risk and uncertainty of
the simulation models. Ever-increasing model size has already required the move to
64-bit technologies. With the recent introduction of 64-bit commodity chips from both
Intel (Itanium-2) and AMD (Opteron), 64-bit computing has begun to move into the area
of commodity clusters. This will likely continue to grow, although the path of growth
is uncertain at this time. Finally, the move to unstructured grids for reservoir
simulation has already begun. It is likely that by the end of the decade most models
will be based on some form of unstructured grid with tightly coupled surface networks.

REFERENCES

1. Coats, K. H., Dempsey, J. R., and Henderson, J. H., “The Use of Vertical
Equilibrium in Two- Dimensional Simulation of Three-Dimensional Reservoir
Performance”, SPEJ, March 1971, 63-71.
2. Hearn, C. L., “Simulation of Stratified Waterflooding by Pseudo Relative
Permeability Curves”, J. Pet. Tech., July 1971, 805-813.
3. Jacks, H. H., Smith, O. J. E., and Mattax, C. C., “The Modeling of a Three-
Dimensional Reservoir with a Two-Dimensional Reservoir Simulator – The Use of
Dynamic Pseudo Relative Functions”, SPEJ, June 1973, 175-185.
4. Fang, Y. P., and Killough, J.E., “Viscous/Gravity Scaling of Pseudo Relative
Permeabilities for the Simulation of Moderately Heterogeneous Reservoirs”,
Proceedings of the Conference of the Mathematics of Oil Recovery, Cambridge,
England, July, 1989.
5. Shirer, J. A., Ainsworth, W. J., and White, R. W., “Selection of a Waterflood
Pattern for the Jay-Little Escambia Creek Fields”, SPE 4978 presented at the 49th
SPE Annual Fall Meeting, October, 1974.
6. Killough, J. E., and Foster, H. P., Jr., “Reservoir Simulation of the Empire Abo
Field: The Use of Pseudos in a Multilayered System”, Society of Petroleum
Engineers Journal, October, 1979, 279-288.
7. Killough, J. E., Pavlas, E. J., Martin, C., and Doughty, R. K., “The Prudhoe Bay
Field: Simulation of a Complex Reservoir”, SPE 10023 presented at the First
International SPE-CPS meeting, Beijing, China, 1982.
8. Barker, John W., and Thibeau, Sylvain, “A Critical Review of the Use of
Pseudorelative Permeabilities for Upscaling”, SPE 35491 presented at the European 3-D
Reservoir Modeling Conference, Stavanger, April 16-17, 1996.
9. Killough, J. E., and Redwine, W. V., “The Use of Vector Processors in Reservoir
Simulation”, SPE 7673 presented at the Fifth SPE Symposium on Reservoir
Simulation, Denver, Colorado, February 1-2, 1979.
10. Nolen, James S., Kuba, D. W., and Kascic, M. J., Jr., “Application of Vector
Processors to the Solution of Finite Difference Equations”, SPE 7675 presented at
the Fifth SPE Symposium on Reservoir Simulation, Denver, Colorado, February 1-
2, 1979.
11. Killough, J. E., and Levesque, J. M., “Reservoir Simulation and the In-house Vector
Processor”, SPE 10521 presented at the Sixth SPE Symposium on Reservoir
Simulation, New Orleans, January 31-February 3, 1982.
12. Young, L. C., “Equation of State Compositional Modeling on Vector Processors”,
SPE 16023 presented at the Ninth SPE Symposium on Reservoir Simulation, San
Antonio, February 1-4, 1987.
13. Killough, J. E., and Wheeler, M. F., "Parallel Iterative Linear Equation Solvers: An
Investigation of Domain Decomposition Algorithms for Reservoir Simulation",
SPE 16021 presented at the 9th SPE Symposium on Reservoir Simulation, San
Antonio, Texas, Feb. 1-4, 1987.

14. Killough, J. E., “Is Parallel Computing Ready for Reservoir Simulation?”, SPE
26634 presented at the 68th SPE Annual Fall Conference and Exhibition, Houston,
October 3–6, 1993.
15. Killough, John E., Foster, John A., Nolen, James S., Wallis, John R., and Xiao,
Jason, “A General-Purpose Parallel Reservoir Simulator”, presented at the 5th
European Conference on the Mathematics of Oil Recovery, Leoben, Austria, 3-6
Sept., 1996.
16. van Daalen, D. T., Hoogerbrugge, P. J., Meijerink, J. A., and Zeestraten, P. J. A.,
"The Parallelization of BOSIM, Shell's Black/ Volatile Oil Reservoir Simulator",
Proceedings of the First IMA/SPE European Conference on the Mathematics of Oil
Recovery, Oxford University Press, 1990.
17. Killough, J. E., and Bhogeswara Rao, "Simulation of Compositional Reservoir
Phenomena on a Distributed Memory Parallel Computer", Journal of Petroleum
Technology, November, 1991.
18. Wheeler, J. A., and Smith, R. A., "Reservoir Simulation on a Hypercube", SPE
19804 presented at the 64th SPE Annual Conference and Exhibition, San Antonio,
October, 1989.
19. Rutledge, J.M., Jones, D. R., Chen, W. H., Chung, E. Y.,"The Use of a Massively
Parallel SIMD Computer for Reservoir Simulation", SPE 21213 presented at the
eleventh SPE Symposium on Reservoir Simulation, Anaheim, 1991.
20. Gautam S. Shiralkar, R. Volz, R. Stephenson, M. Valle, and K. Hird, “Parallel
Computing Alters Approaches, Raises Integration Challenges in Reservoir
Modeling”, Oil and Gas Journal, May 20, 1996, 48-56.
21. Tchelepi, H. A., Durlofsky, L. J., Chen, W. J., Bernath, A., and Chien, M. C. H.,
“Practical Use of Scale Up and Parallel Reservoir Simulation Technologies in Field
Studies”, SPE 38886 presented at the 1997 SPE Annual Conference and Exhibition,
San Antonio, October 5-8, 1997.
22. Chien, M. C. H., Tchelepi, H. A., Yardumain, H. E., and Chen, W. H., “A Scalable
Parallel Multi-Purpose Reservoir Simulator”, SPE 37976 presented at the SPE
Symposium on Reservoir Simulation, Dallas, Jun 8-11, 1997.
23. Shiralkar, G. S., Stephenson, R. E., Joubert, W., Lubeck, O., and van Bloemen
Waanders, B., “Falcon: A Production-Quality Distributed-Memory Reservoir
Simulator”, SPE 37975 presented at the SPE Symposium on Reservoir Simulation,
Dallas, Jun 8-11, 1997.
24. Wang, P., Yotov, I., Wheeler, M., Arbogast, T., Dawson, C., Parashar, M.,
Sepehrnoori, K., “A New Generation EOS Compositional Reservoir Simulator: Part
I – Formulation and Discretization”, SPE 37979 presented at the SPE Symposium
on Reservoir Simulation, Dallas, Jun 8-11, 1997.
25. Nolen, J. S., and Stanat, P. L., "Reservoir Simulation on Vector Processing
Computers", SPE 9644 presented at the SPE Middle East Oil Technical Conference,
Manama, Bahrain, March, 1981.
26. Wallis, J. R., Kendall, R. P., and Little, T. E., “Constrained Residual Acceleration
of Conjugate Residual Methods”, SPE 13536 presented at the Eight SPE Reservoir
Simulation Symposium, Dallas, Texas, February 10-13, 1985.
27. Wallis, J. R., and Nolen, J. S., “Efficient Linear Solution of Locally Refined Grids
Using Algebraic Multilevel Approximate Factorizations”, SPE 25239 presented at
the Twelfth SPE Symposium on Reservoir Simulation, New Orleans, Louisiana,
February 28-March 3, 1993.

28. Killough, J. E., Nolen, J. S., Wallis, J. R., Xiao, J., and Forster, J., “A Parallel
Reservoir Simulator Based on Local Grid Refinement”, SPE 37978 presented at the
SPE Reservoir Simulation Symposium, June 8-11, 1997, Dallas.
29. Fanchi, J. R., “Boast-DRC: Black-Oil and Condensate Reservoir Simulation on an
IBM-PC”, SPE 15297 presented at the SPE Petroleum Industry Applications of
Microcomputers, Silver Creek , Colorado, June 18-20, 1986.
30. Brooks, R. J., and Jepson, P. J., “Application of a Full-Scale Compositional
Reservoir Simulation Package on a Microcomputer”, SPE 17014 presented at the
SPE Petroleum Industry Applications of Microcomputers, Del Lago, Texas, June
23-26, 1987.
31. Choo, Y. K., and Welch II, V. S., “A Comprehensive Black-oil Simulator for
Microcomputer Applications”, SPE 16490 presented at the SPE Petroleum
Industry Applications of Microcomputers, Del Lago, Texas, June 23-26, 1987.
32. Young, L. C., and Hemanth-Kumar, K., “Compositional Reservoir Simulation on
Microcomputers”, SPE 19123 presented at the Fourth SPE Petroleum Computer
Conference, San Antonio, June 26-28, 1989.
33. Young, L. C., Hemanth-Kumar, K., and Bratvold, R. B., “Reservoir Simulation on
Low-Cost, High-Performance Workstations”, SPE 20361 presented at the Fifth SPE
Petroleum Computer Conference, Denver, June 25-28, 1990.
34. Neil Randall, “The State of Processors,” PC Magazine, July 1998, pp. 307, 310.
35. S. L. Scott, et al., “Application of Parallel (MIMD) Computers to Reservoir
Simulation,” paper SPE 16020 presented at the 1987 SPE Reservoir Simulation
Symposium, San Antonio, Feb. 1-4.
36. M. C. H. Chien, et al., “The Use of Vectorization and Parallel Processing for
Reservoir Simulation,” paper SPE 16025 presented at the 1987 SPE Reservoir
Simulation Symposium, San Antonio, Feb. 1-4.
37. J. R. Wallis, J. A. Foster, and R. P. Kendall, “A New Parallel Iterative Linear
Solution Method for large-scale Reservoir Simulation,” SPE 21209 presented at the
Eleventh SPE Symposium on Reservoir Simulation, Anaheim, California, February
17-20, 1991.
38. Jobalia, Mihir, “A Receiver-Initiated Load Balancing Technique for Reservoir
Simulation”, Master’s Degree Thesis presented to the Department of Chemical
Engineering, University of Houston, December, 1994, John E. Killough, advisor.
39. Song, T., “A Load-Balancing Technique for Reservoir Simulation Based on
Dantzig’s Transportation Model”, Master’s Degree Thesis presented to the
Department of Chemical Engineering, University of Houston, December, 1996,
John E. Killough, advisor.
40. Bogeshwara, R. and Killough, J. E., “Parallel Linear Solvers for Reservoir
Simulation: A Generic Approach for Existing and Emerging Computer
Architectures”, SPE 25240 presented at the Twelfth SPE Reservoir Simulation
Symposium, New Orleans, February 28-March 3, 1993.
41. Burrows, Richard, Ponting, Dave, and Wood, Lindsay, “Parallel Simulation with
Nested Factorisation”, presented at the 5th European Conference on the
Mathematics of Oil Recovery, Leoben, Austria, 3-6 Sept., 1996.
42. Killough, J.E., and Commander, D., “Scalable Parallel Reservoir Simulation on a
Windows NT-Based Workstation Cluster”, SPE 51883 presented at the SPE
Reservoir Simulation Symposium, Houston, February 14-17, 1999.

43. Jutila, H.A., Logmo-Ngog, A.B., Sarkar, R., and Killough, J. E., “Use of Parallel
Compositional Simulation to Investigate Productivity Improvement Options for a
Retrograde-Gas-Condensate Field: A Case Study”, SPE 66397 presented at the
Sixteenth SPE Symposium on Reservoir Simulation, February, 2001.
44. Mrosovsky, I., and Ridings, R. L., “Two-Dimensional Radial Treatment of Wells
Within a 3D Model”, SPEJ, April 1974, 127-131.
45. Quandalle, P., and Besset, P.: “The Use of Flexible Gridding for Improved
Reservoir Modeling,” paper SPE 12239 presented at the 1983 Seventh SPE
Symposium on Reservoir Simulation, San Francisco, Nov. 15-18.
46. Aziz, K., and Settari, A.: Petroleum Reservoir Simulation, Applied Science
Publishers, London, 1979.
47. Heinemann, Z., Gerken, G., and von Hantelmann, G.: “Using Local Grid
Refinement in a Multiple Application Reservoir Simulation,” paper SPE 12255
presented at the 1983 Seventh SPE Symposium on Reservoir Simulation, San
Francisco, Nov. 15-18.
48. Forsyth, P. A., and Sammon, P. H.: “Local Mesh Refinement and Modelling of Faults
and Pinchouts,” paper SPE 13524 presented at the 1985 Eighth SPE Symposium on
Reservoir Simulation, Dallas, Feb. 10-13.
49. Coats, B. K., Fleming, G. C., Watts, J. W., Rame, M., and Shiralkar, G., “A
Generalized Wellbore and Surface Facility Model, Fully Coupled to a Reservoir
Simulator”, SPE 79704 presented at the SPE Reservoir Simulation Symposium,
February 3-5, 2003, Houston, Texas.
50. Dempsey, J.R., et al., “An Efficient Method for Evaluating Gas Field Gathering
System Design,” JPT, September, 1971, p. 1067-1073.
51. Startzman, R.A., Brummett, W.M., Ranney, J.C., Emanuel, A.S., and Toronyi,
R.M.: “Computer Combines Offshore Facilities and Reservoir Forecasts,”
Petroleum Engineer, May 1977, p. 65-76.
52. Emanuel, A.S. and Ranney, J.C.: “Studies of Offshore Reservoir with an Interfaced
Reservoir – Piping Network Simulator,” JPT, March 1981, p. 399-406.
53. Litvak, M.L. and Darlow, B.L.: “Surface Network and Well Tubinghead Pressure
Constraints in Compositional Simulation,” SPE 29125 presented at the 13th SPE
Symposium on Reservoir Simulation held in San Antonio, Texas, February 12-15,
1995.
54. Litvak, M.L., Clark, A.J., Fairchild, J.W., Fossum, M.P., Macdonald, C.J., and
Wood, A.R.O.: “Integration of Prudhoe Bay Surface Pipeline Network and Full
Field Reservoir Models,” SPE 38885 presented at the 1997 SPE Annual Technical
Conference and Exhibition held in San Antonio, Texas, October 5-8, 1997.
55. Litvak, M.L and Wang, C.H.: “Integrated Reservoir and Surface Pipeline Network
Compositional Simulators,” SPE 48859 presented at the 1998 SPE International
Conference and Exhibition in China held in Beijing, China, November 2-6, 1998.
56. Litvak, M.L., Macdonald, C.J., and Darlow, B.L.: “Validation and Automatic Tuning
of Integrated Reservoir and Surface Pipeline Network Models,” SPE 56621
presented at the 1999 SPE Annual Technical Conference and Exhibition held in
Houston, Texas, October 3-6, 1999.
57. Wang, P., Litvak, M.L., and Aziz, K.: “Optimization of Production Operations in
Petroleum Fields,” SPE 77658 presented at the SPE Annual Technical Conference
and Exhibition held in San Antonio, Texas, September 29 – October 2, 2002.

58. Litvak, M.L., Hutchins, L.A., Skinner, R.C., Darlow, B.L., Wood, R.C., and Kuest,
L.J.: “Prudhoe Bay E-Field Production Optimization System Based on Integrated
Reservoir and Facility Simulation,” SPE 77643 presented at the SPE Annual
Technical Conference and Exhibition held in San Antonio, Texas, September 29 –
October 2, 2002.
59. Schiozer, D.J. and Aziz, K.: “Use of Domain Decomposition for Simultaneous
Simulation of Reservoir and Surface Facilities,” SPE 27876 presented at the SPE
Western Regional Meeting held in Long Beach, California, March 23-25, 1994.
60. Byer, T.J., Edwards, M.G., and Aziz, K.: “Preconditioned Newton Methods for
Fully Coupled Reservoir and Surface Facilities Models,” SPE 49001 presented at
the 1998 SPE Annual Technical Conference and Exhibition held in New Orleans,
Louisiana, September 27-30, 1998.
61. Byer, T.J., Edwards, M.G., and Aziz, K.: “A Preconditioned Adaptive Implicit
Method for Reservoirs with Surface Facilities,” SPE 51895 presented at the 1999
SPE Reservoir Simulation Symposium held in Houston, Texas, February 14-17,
1999.
62. Byer, T.J., Preconditioned Newton Methods for Simulation of Reservoirs with
Surface Facilities, Ph. D. Dissertation, Stanford University, 2000.
63. Hepguler, G., Barua, S., and Bard, W.: “Integration of a Field Surface and
Production Network With a Reservoir Simulator,” SPE 38937, SPE Computer
Applications, June 1997, p. 88-93.
64. Tingas, J., Frimpong, R. and Liou, J.: “Integrated Reservoir and Surface Network
Simulation in Reservoir Management of Southern North Sea Gas Reservoirs,” SPE
50635 presented at the 1998 SPE European Petroleum Conference held in The
Hague, The Netherlands, October 20-22, 1998.
65. Trick, M.D.: “A Different Approach to Coupling a Reservoir Simulator with a
Surface Facilities Model,” SPE 40001 presented at the SPE Gas Technology
Symposium, Calgary, March 1998.
66. Gorell, Sheldon, and Bassett, Bob, “Trends in Reservoir Simulation: Big Models,
Scalable Models? Will You Please Make Up Your Mind”, SPE 71596 presented at
the 2001 SPE Annual Technical Conference and Exhibition, New Orleans,
Louisiana, 30 September-3 October 2001.
67. Narayanan, Keshav, Cullick, A. S., and Bennett, Matthew, “Better Field Development Decisions from
Multi-Scenario, Interdependent Reservoir, Well, and Facility Simulations”, SPE
79703 presented at the 2003 SPE Reservoir Simulation Symposium, Houston,
Texas, February 3-5.

Figure 1: Pseudoization Process – Reducing a 3D Model to 2D

[Figure content: coning correlation plot of GOR versus height of oil column above perforations (Ho), referenced to the original GOC]

Figure 2: Typical Coning Correlations


Figure 3: The Prudhoe Bay Field – Areal View of Surface Facilities

[Figure content: geologic-scale grid (no upscaling) compared with the simulation-scale grid]

Figure 4: Comparison of Oil Production for Effect of Upscaling

[Figure content: speedup versus parallel component fraction]


Figure 5: Amdahl’s Law for Parallel and Vector Processors

Figure 6: Grid and Structure for Heterogeneous Anticline Model

[Figure content: elapsed time versus number of CPUs (2 to 64), with reported speedups of 2.0x, 3.5x, 6.4x, 12x, 19x, and 35x]

Figure 7: Results for Pentium 4 Cluster with Myrinet for 1-Million-Cell Compositional Model

Figure 8: Cross-Sections of North Sea Condensate Study

Figure 9: Grid for North Sea Condensate Study

Figure 10: Locations of Well Refinements for North Sea Condensate Study

Figure 11: Areal View of Well Refinement for Condensate Study

Figure 12: Cross-section Showing Well Refinements for Condensate Study

Figure 13: Example of Loosely-Coupled Surface Network with Simulator and
GAP

Figure 14: Example of Closely-Bound Surface Network and Reservoir Simulator

Figure 15: Example for Simulation of Algerian Fields with Closely-Bound Surface
Network

[Figure content: manifold pressure (psi) versus date, 4/2 through 4/16, for the untuned, tuned, and observed cases]

Figure 16: Example of the Effect of Tuning on the Match of Surface Network Manifold Pressures

Figure 17: Typical Grid Computing Environment

Figure 18: Flow Chart of Typical Grid with Servers, Database and Agent

Figure 19: 125,000 Cell Example Simulation for Grid Computing Test

Figure 20: Locations of Grid Nodes for Grid Computing Test

Figure 21: The Limewire Gnutella Network

Figure 22: Performance of a Small Grid for a Prototype Simulation

Figure 23: Results from Prototype Grid Simulation Showing Oil Production and
Economic Sensitivities
