Tempest ENABLE Technical Data Sheet 2014

ENABLE Technical Description
ENABLE provides mathematical support to reservoir engineers in their use of reservoir simulation software. This support allows engineers to complete tasks such as history matching much more quickly than using the simulator on its own, and it also provides a more rigorous approach to predicting future reservoir performance or optimising field development.
Input
The engineer defines a set of parameters (called modifiers)
that are used to modify the simulation model. The range and
initial probability distribution can also be set for each modifier.
All the workflows are based on approximating the simulator
with a simple mathematical model called an Estimator.
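As a purely illustrative sketch (the names, ranges and distributions below are assumptions, not ENABLE input syntax), a modifier set can be pictured as a list of parameters, each carrying a range and an initial probability distribution:

```python
# Illustrative only: a modifier is a named parameter with a range and a prior
# distribution from which scoping and refinement values can later be drawn.
from dataclasses import dataclass

import numpy as np


@dataclass
class Modifier:
    name: str                # e.g. a fault transmissibility multiplier (hypothetical)
    low: float               # lower bound of the allowed range
    high: float              # upper bound of the allowed range
    prior: str = "uniform"   # initial probability distribution over [low, high]

    def sample(self, rng: np.random.Generator, n: int) -> np.ndarray:
        """Draw n values from this modifier's prior distribution."""
        if self.prior == "uniform":
            return rng.uniform(self.low, self.high, size=n)
        if self.prior == "triangular":
            return rng.triangular(self.low, 0.5 * (self.low + self.high), self.high, size=n)
        raise ValueError(f"unknown prior: {self.prior}")


modifiers = [
    Modifier("FAULT_TRANS_MULT", 0.0, 1.0),
    Modifier("AQUIFER_STRENGTH", 0.5, 2.0, prior="triangular"),
    Modifier("KV_KH_RATIO", 0.01, 0.5),
]
rng = np.random.default_rng(0)
prior_samples = {m.name: m.sample(rng, 1000) for m in modifiers}
```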
Figure 1. Graph showing Estimator uncertainty reducing as a result of refinement
runs and re-initialisations, until the user is confident that the proxy models are
good approximations to the simulator response.
The Estimator
The Estimator is a set of proxy models. Each proxy model fits the behaviour of a simulator output defined by the user; this is called an estimator point. Estimator points are selected on simulator results (e.g. BHP, oil production, water cut) for wells and well groups at selected times. There are three types of estimator points (see the sketch after the list):
1. History match points are attached to selected history data
2. Prediction points are chosen on results where an uncertainty calculation is required
3. Optimisation points are chosen on results to define an objective function for optimisation.
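Illustratively (the field and well names below are assumptions, not ENABLE syntax), an estimator point simply pairs a simulator result for a well or group at a chosen time with one of these three roles:

```python
# Illustrative only: an estimator point ties a simulator result for a well or
# group, at a chosen report time, to one of the three roles listed above.
from dataclasses import dataclass
from enum import Enum


class PointType(Enum):
    HISTORY_MATCH = "history match"   # attached to selected history data
    PREDICTION = "prediction"         # an uncertainty calculation is required here
    OPTIMISATION = "optimisation"     # contributes to an optimisation objective


@dataclass
class EstimatorPoint:
    well_or_group: str    # e.g. "PROD-1" (hypothetical well name)
    result: str           # e.g. "BHP", "oil production", "water cut"
    time: str             # report date at which the result is taken
    kind: PointType


points = [
    EstimatorPoint("PROD-1", "BHP", "2013-06-01", PointType.HISTORY_MATCH),
    EstimatorPoint("FIELD", "oil production", "2020-01-01", PointType.PREDICTION),
]
```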
The Estimator has two parts: (i) a trend surface model which is
a polynomial function of the modifiers (up to 3rd order terms
are included) with first order interaction between modifiers,
and (ii) an additional term based on kriging that ensures the
proxy model agrees exactly with results obtained by
simulation runs.
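The following minimal sketch illustrates the general idea of such a two-part proxy: a polynomial trend surface corrected by a kernel-based (kriging-style) term that honours the simulated results exactly. It is a generic illustration, not the ENABLE implementation; the kernel, its length scale and the scaling of the modifiers (assumed roughly [0, 1]) are assumptions.

```python
import numpy as np


def fit_proxy(X, y, degree=3, lengthscale=0.3):
    """X: (n_runs, n_modifiers) modifier values; y: (n_runs,) simulator results."""

    def features(Z):
        cols = [np.ones(len(Z))]
        for d in range(1, degree + 1):           # per-modifier powers up to 3rd order
            cols.extend(Z[:, j] ** d for j in range(Z.shape[1]))
        for i in range(Z.shape[1]):              # first-order interactions between modifiers
            for j in range(i + 1, Z.shape[1]):
                cols.append(Z[:, i] * Z[:, j])
        return np.column_stack(cols)

    # (i) trend surface: least-squares polynomial fit
    F = features(X)
    beta, *_ = np.linalg.lstsq(F, y, rcond=None)
    residuals = y - F @ beta

    # (ii) kriging-style correction: Gaussian-kernel interpolation of the residuals,
    # so the proxy reproduces every simulated result it has already seen
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    weights = np.linalg.solve(kernel(X, X), residuals)

    def predict(Xnew):
        return features(Xnew) @ beta + kernel(Xnew, X) @ weights

    return predict
```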
Creating the Estimator
The Estimator is calculated by fitting the results of simulation runs launched by ENABLE. The initial Estimator is calculated from a set of scoping runs (typically 25). The modifier values for these scoping runs are calculated by an experimental design based on Latin hypercube sampling.
ENABLE fits the proxy models to the simulator results using a genetic algorithm approach that attempts to find a good fit with the minimum number of terms. The Estimator is updated as the results from further runs become available. Proxy model coefficients are updated using a linear Bayes update formula. At regular intervals the genetic algorithm calculation is used to re-optimise the fit.
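The Latin hypercube idea itself can be sketched in a few lines; the following is a generic illustration of that sampling scheme, not the ENABLE design engine, and the example ranges are assumptions:

```python
# Illustrative Latin hypercube design: one stratified sample per scoping run in
# every modifier dimension, independently shuffled, then scaled onto each range.
import numpy as np


def latin_hypercube(n_runs, bounds, seed=None):
    """bounds: list of (low, high) per modifier; returns an (n_runs, n_modifiers) design."""
    rng = np.random.default_rng(seed)
    n_dim = len(bounds)
    # one point jittered inside each of n_runs equal-probability bins, per dimension
    u = (rng.random((n_runs, n_dim)) + np.arange(n_runs)[:, None]) / n_runs
    for j in range(n_dim):
        rng.shuffle(u[:, j])                      # decouple the strata across dimensions
    lows = np.array([b[0] for b in bounds])
    highs = np.array([b[1] for b in bounds])
    return lows + u * (highs - lows)


# e.g. 25 scoping runs over three modifiers with hypothetical ranges
design = latin_hypercube(25, [(0.0, 1.0), (0.5, 2.0), (0.01, 0.5)], seed=1)
```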
Selecting the Next Run
Once the initial Estimator has been created, the modifier values for new runs (refinement runs) are calculated by using the Estimator. This calculation is an optimisation which uses a combination of genetic algorithms for global optimisation and gradient-based methods for local optimisation. The objective function for this optimisation depends on the workflow (see the sketch after the list):
• for history matching and prediction with history data, the objective function is a modified likelihood function. The likelihood is a measure of the probability that the run is a match to the history data. The likelihood is modified to include a measure of the uncertainty of the proxy – this is designed to improve the estimator at the same time as searching for good history matches throughout the available modifier space
• for the prediction workflow without history data, the objective function is simply a measure of the uncertainty of the proxy model – as above, designed to improve the estimator
• for the optimisation workflow the objective function is defined by the user.
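The sketch below illustrates the flavour of such a refinement objective: a history-match misfit computed from the proxy prediction, made more attractive where the proxy is still uncertain, so that new runs both chase good matches and improve the Estimator. The weighting and the form of the uncertainty term are assumptions for illustration, not the ENABLE formulation.

```python
# Illustrative refinement objective (to be minimised over the modifier space by a
# global search such as a genetic algorithm, refined by a local gradient method).
import numpy as np


def refinement_objective(theta, proxy_mean, proxy_std, history, sigma_obs, weight=1.0):
    """theta: candidate modifier values; proxy_mean / proxy_std: callables giving the
    proxy prediction and its uncertainty at the history-match points."""
    misfit = np.sum(((proxy_mean(theta) - history) / sigma_obs) ** 2)   # match quality term
    exploration = np.sum(proxy_std(theta) ** 2)    # large where the proxy is poorly constrained
    return misfit - weight * exploration           # low value: good match and informative run
```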
Calculating Prediction Uncertainty
The prediction uncertainty is the probability distribution of the
simulator result at a prediction point selected by the user. This
distribution is based on sampling the probability distributions
of the modifiers. If there is no history data these are the (a
priori) distributions defined by the user. If there is history data
then these are the a posteriori modifier distributions which
include the effect of matching the history data. The a priori
modifier distributions are known distributions that can be
easily sampled using the inverse method. The a posteriori
distributions are sampled using the Markov Chain Monte Carlo
method. Prediction uncertainty can then be calculated using
two different methods. The first is to take a moderately small
sample (say 100) of parameter combinations to create a
posterior ensemble of simulation runs. The runs are
equi-probable, so the prediction percentiles can be calculated
for any result variable at any time by applying order statistics
to the run results. The second method for quantifying the
uncertainty in prediction is to sample the modifier space and
use the proxy model to calculate the simulator result at a
particular time. A combination of the two techniques can also
be used.
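For the ensemble route, the percentile step itself is simple order statistics; the sketch below uses purely illustrative variable names:

```python
# Illustrative only: percentiles of any result variable, at every report time,
# computed across an ensemble of (roughly) equi-probable posterior runs.
import numpy as np


def prediction_percentiles(ensemble, percentiles=(10, 50, 90)):
    """ensemble: (n_runs, n_times) array, one row per posterior simulation run."""
    return {p: np.percentile(ensemble, p, axis=0) for p in percentiles}


# e.g. profiles = prediction_percentiles(field_oil_rate_ensemble)
# profiles[50] is the median profile; profiles[10] and profiles[90] bound it.
```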
Figure 2. A standard (non-ENABLE) sampling approach starts at a random point. The next point is chosen and a calculation made as to whether this is a better or worse match. Subsequent points are chosen to improve the match, until a ‘most likely’ point is sampled.
Figure 3. ENABLE samples the whole space and is able to capture the full range of likely matches. Sampling bias is avoided.
The ENABLE Technical Advantage
The proxy technique used by ENABLE has several advantages
compared with other assisted simulation techniques:
• The proxy model is built automatically as part of the workflow and is technically superior to other commercially available algorithms
• A history match is achieved with fewer runs than with other automation methods
• The workflow is continuous and allows all input to be taken into the prediction phase without re-definition
• Prediction confidence intervals take into account the whole uncertainty (see Figures 2 to 5) and can be quantified easily for all variables at all times using the ensemble approach. It is then possible to add a prediction point and use the proxy for quick ‘what-if’ analysis
• The flexible approach calculates reservoir performance predictions for fields both with and without history in exactly the same manner, so there is no need to learn different workflows
Figure 4. Two ‘good’ match areas are highlighted from the data in Figure 3.
Figure 5. The blue and red areas from Figure 4 are used to predict well behaviour.
The ENABLE technique is able to give the full range of uncertainty, whereas the
standard sampling approach can only give a narrow vision of future behaviour.
*** Figures 2 to 5 courtesy of Dr Ian Vernon, University of Durham
www.roxarsoftware.com