
Module 3

Model updating refers to the process of improving a mathematical (geo-cellular) representation of the geology and the prediction of fluid flow and pressure evolution during reservoir production and recovery. Matching begins with direct updating, an ETLP procedure in which certain obvious information, or 'low-hanging fruit' (static or dynamic, such as fault structure, transmissibility, barriers, or sand body distribution), is extracted directly from the 4D seismic interpretations so that the model honours all discipline domains (this may also be termed 'model maturation'). We recommend that convergence between the 4D seismic data, the production data, and a model suitable for forward prediction should take place gradually, according to defined calibration stages or decision gates. This allows one to gauge how much updating can be achieved for any given non-repeatability noise level in the data. Interestingly, there are numerous examples of improvements at this early stage of quantitative analysis (Staples et al. 2006; Joosten 2014), and it is seen as an essential procedure before further, more sophisticated analyses. In almost all studies the match between the model predictions and the combined 4D seismic and production data is significantly enhanced after this step. It may still be necessary to reduce the misfit further, however, and this requires the process of assisted (seismic) history matching (ASHM).

Part of Phase VI was spent developing workflows for implementing SHM using approaches such as ensemble methods, particle swarm optimisation, and evolutionary strategies. Direct updating and ASHM are known to be time-consuming. Here, therefore, we research new ways to provide an update, either through faster modelling and simulation or by building a hierarchical understanding of the data input to the history match. Fast tools for closing the (small) loop make it easier for the geophysicist and the reservoir engineer to have a conversation about their shared problem, although the nature of the platform for that conversation still needs to be defined. To achieve speed while retaining robustness, we have reached back into the past to recover a number of helpful proxy approaches. In this module we also begin to assess how far ASHM can be taken: what are the limits of the available signal, and how much data do we need to input? By its very nature, this work interfaces strongly with other module topics such as 4D impedance inversion, pressure and saturation change estimation, and well2seis. The main focus for innovation in Phase VII will be on how to use the additional information available from 4D seismic data in an efficient updating scheme.
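As a concrete illustration of the ensemble methods mentioned above, the sketch below shows a single Kalman-type ensemble smoother update step in Python. It is a minimal sketch under our own assumptions (the variable names, array shapes, and perturbed-observation scheme are illustrative), not the programme's implementation.

```python
import numpy as np

def ensemble_smoother_update(M, D, d_obs, Cd, rng=None):
    """One Kalman-type ensemble update (illustrative sketch).
    M:     (Nm, Ne) ensemble of model parameters
    D:     (Nd, Ne) simulated data for each ensemble member
    d_obs: (Nd,)    observed data (e.g. 4D attribute + production)
    Cd:    (Nd, Nd) observation-noise covariance
    """
    if rng is None:
        rng = np.random.default_rng(0)
    Ne = M.shape[1]
    # Ensemble anomalies (deviations from the ensemble mean)
    A_m = M - M.mean(axis=1, keepdims=True)
    A_d = D - D.mean(axis=1, keepdims=True)
    C_md = A_m @ A_d.T / (Ne - 1)           # model-data cross-covariance
    C_dd = A_d @ A_d.T / (Ne - 1)           # data covariance from the ensemble
    K = C_md @ np.linalg.inv(C_dd + Cd)     # Kalman-type gain
    # Perturb the observations so the updated ensemble retains its spread
    D_obs = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), Cd, Ne).T
    return M + K @ (D_obs - D)
```

The update pulls every ensemble member towards the observations in proportion to the correlations estimated from the ensemble itself, which is what makes these schemes attractive when gradients of the simulator are unavailable.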

There are three main themes in this module: fast-track SHM, input data selection, and "grounded SHM", or practical case studies.

Fast-track SHM

Before 3D numerical simulation emerged, engineers were very inventive in the way they simulated the reservoir, using mechanistic, empirical, data-driven or generalised physics-based models. This trend has continued to some extent because fast solutions are needed to guide field management (for example, Dobbyn and Marsh 2001). Thus the literature offers solutions for flat areal models, communicating tank models, water and gas front prediction, streamlines, and many more ideas. Numerical simulation rendered these largely redundant, but of course increased the computational cost. A seismic history match requires fast simulations, and this is normally achieved by taking a sector of the full model, coarsening the model, or relaxing the stability criteria of the numerical computation. These are clearly proxies of a kind, but they are often not adequate and still take time to create. Proxies built with artificial neural networks also exist, but these are highly specialised and limited in application. Here, instead, we aim to go back to the pre-numerical solutions and use them for a first-order match to the seismic data. This may be adequate in many cases where the data quality is not high, although it is clearly case-dependent. We hope that these methods will extend the applicability of the SHM approach and speed up convergence for a full, but slower, history match. We also intend to work with proxies for other stages in the workflow, such as seismic modelling or the petro-elastic model; existing sponsors will recognise that some of these aspects were already researched in Phase VI, but this phase will extend and apply the ideas much further.
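To make the communicating-tank idea concrete, the following is a minimal sketch of a two-tank, single-phase material-balance proxy. All symbols and parameter values are illustrative assumptions, not calibrated field quantities.

```python
import numpy as np

def two_tank_pressures(q1, q2, V1=1e7, V2=2e7, ct=1e-9, T=5e-8,
                       p0=30e6, dt=86400.0):
    """Explicit material-balance proxy: two tanks of pore volume V1, V2 [m^3]
    and total compressibility ct [1/Pa], connected by a transmissibility
    T [m^3/s/Pa]. q1, q2 are daily offtake rate series [m^3/s]; returns the
    pressure histories [Pa] of both tanks."""
    p1, p2 = p0, p0
    hist = []
    for qa, qb in zip(q1, q2):
        flow = T * (p1 - p2)                  # inter-tank flow, tank 1 -> tank 2
        p1 -= dt * (qa + flow) / (ct * V1)    # material balance, tank 1
        p2 -= dt * (qb - flow) / (ct * V2)    # material balance, tank 2
        hist.append((p1, p2))
    return np.array(hist)

# Example: deplete tank 1 only; tank 2 supports it through the connection
rates = np.full(365, 0.01)                    # ~0.01 m^3/s for one year
p = two_tank_pressures(rates, np.zeros_like(rates))
```

A proxy of this kind evaluates a year of history in microseconds, which is precisely the speed advantage the fast-track approach seeks; the explicit update remains stable provided dt·T/(ct·V) stays well below unity.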

The following will be considered:

• Simulation speed:
– Proxy models – fast-track and/or accurate simulation of reservoir model
– Data-driven physics-based modelling
– Machine learning and data analytics?
– Matches to good and bad 4D seismic data
• Water and gas front prediction (see the sketch after this list)
• Extension to loop back into the geological model
• Fast turnaround/best procedure and workflows
• All round use of proxies to close the loop
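As an example of the water-front prediction item above, here is a minimal sketch of a classical Buckley-Leverett/Welge frontal-advance estimate. The Corey exponents, viscosities, and endpoint saturations are illustrative assumptions.

```python
import numpy as np

def bl_front_saturation(mu_w=0.5e-3, mu_o=2e-3, nw=2.0, no=2.0,
                        swc=0.2, sor=0.2):
    """Welge tangent construction on a Corey-type fractional-flow curve.
    Returns the shock-front water saturation Sw_f and the dimensionless
    front speed (pore volumes of distance per pore volume injected)."""
    sw = np.linspace(swc + 1e-4, 1.0 - sor, 2000)
    se = (sw - swc) / (1.0 - swc - sor)            # normalised saturation
    krw, kro = se**nw, (1.0 - se)**no              # Corey relative permeabilities
    fw = (krw / mu_w) / (krw / mu_w + kro / mu_o)  # water fractional flow
    # Welge tangent from (swc, 0): the front sits where fw/(sw - swc) peaks
    slope = fw / (sw - swc)
    i = np.argmax(slope)
    return sw[i], slope[i]

sw_front, speed = bl_front_saturation()
# Dimensionless front position after injecting t_D pore volumes: x_D = speed * t_D
```

A one-line front-position estimate like this can be compared directly against a waterfront mapped from the 4D seismic difference, without ever running the full simulator.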

Figure (Obidegwu 2015).

Input data selection

In a conventional SHM, 4D seismic data are input into an objective function alongside production data, and their contributions are weighted according to the data and/or model covariance matrices. The objective function is optimised by large and powerful search engines that explore an extremely complex solution space (for example, Oliver et al. 2008). This has been observed to work fairly well in practice, but it may not be the best use of the available data: direct updating has shown that extracting only a few key features from the 4D seismic data can provide a strong constraint on the simulation model and an adequate ('80% solution') fit. We therefore raise the question: how much data do we need, and what kind of data should we use, given our previous understanding of the additional information coming from seismic data such as post-stack and pre-stack time-shifts? Such questions were partially addressed in the binary history matching research of Phase VI (Obidegwu et al. 2016), where excellent results were achieved at a fraction of the usual cost. We are now interested in extending this to ternary matching, and in preparing a comparative study between an analogue SHM and various discretised inputs, which may also include the production data. We would like to consider casting the production data into the same format as the seismic data for easier comparison in the objective function. We will also continue to investigate matches to water and gas fronts, and how to define these from the 4D seismic data. Other innovative ideas will be pursued to reduce the input load for both the 4D seismic and the well data.
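To illustrate the binary/ternary idea, the sketch below discretises a 4D amplitude map into softening / no-change / hardening classes and scores the disagreement with a simulated map. The thresholds, class coding, and cell-count misfit are our own illustrative choices, not the specific formulation of Obidegwu et al. (2016).

```python
import numpy as np

def ternary_map(amplitude, t_soft, t_hard):
    """Discretise a 4D amplitude map into softening (-1), no change (0),
    and hardening (+1) classes; in practice the thresholds would be picked
    from the non-repeatability noise level of the data."""
    out = np.zeros_like(amplitude, dtype=int)
    out[amplitude <= t_soft] = -1
    out[amplitude >= t_hard] = +1
    return out

def ternary_misfit(obs_map, sim_map):
    """Fraction of map cells where observed and simulated classes disagree."""
    return np.mean(obs_map != sim_map)

# Example with synthetic maps standing in for observed and simulated data
rng = np.random.default_rng(1)
obs = ternary_map(rng.normal(size=(100, 100)), -1.0, 1.0)
sim = ternary_map(rng.normal(size=(100, 100)), -1.0, 1.0)
print(ternary_misfit(obs, sim))
```

Discretisation throws away amplitude detail below the noise floor, so the optimiser is steered by the spatial pattern of change rather than by values it cannot trust.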

The following will be considered:

The misfit definition:
• Updating using saturation fronts
• Matching to binary and ternary representations, and a comparative information study
• Direct updates, spatial localisation of the misfit evaluation
• Production data representations
• Application to field data, metrics for data quality
• Limits due to noise constraints
• Data redundancy – how much is actually needed

Grounding SHM

Here we consolidate a coherent workflow that aims to break down barriers between disciplines by applying ETLP methods to case studies. We assess how far it is possible to go with an SHM across a variety of datasets. This module will focus purely on applying existing techniques to different datasets and settings, to build an understanding of where the need for SHM arises. Practical problems relating to field management will be catalogued for these fields, and solutions sought both via SHM and via direct 4D seismic interpretation. Here the workflow shown below becomes important, as it relates each dataset to what can be calibrated using the field data. This is key, because early examples of SHM over-stretched the applicability of the method and were demonstrations of the mathematical technology rather than useful workflows for practical reservoir management. SHM has since improved, but it is still considered an elite tool, or one reserved for the final polishing step of the 4D analysis. Is it really needed, can it be avoided, or is it a necessary step in the workflow? These are some of the questions we intend to address in this sub-module. Another important element of this work is the updating of the static model, in which we examine the scale of geology required and appropriate geomodelling solutions. Throughout, we seek many case studies to exercise our approach and to mature our workflows.

Integrated seismic workflow for a calibrated match of the 4D seismic data with project data (Amini and MacBeth 2017).