Reservoir engineering has been around for at least 70 years, and it has always drawn on the applied mathematics of its time. A non-exhaustive list of early methods includes:
- Buckley-Leverett displacement theory (1942), a simple mathematical model.
- Introduction of the material balance equation (1940s), also based on relatively simple models.
- Arie van Everdingen's innovations of 1949. In that year, van Everdingen, together with his colleague Willem Hurst, published a seminal paper titled "The Application of the Laplace Transformation to Flow Problems in Reservoirs", which introduced the van Everdingen and Hurst method.
- Development of reservoir simulation models (1950s-1960s), including, among other methods, finite differences.
Nowadays, the most basic and standard approach would be to take Machine Learning and other AI-related innovations and simply apply them to reservoir engineering; by all means, suit yourself.
But here I want to explore a side view, an uncommon idea: it seems to me that, without applying AI/ML itself even to the least extent, we could borrow a methodology that has proved useful in ML and apply it to "conventional" pre-ML/AI reservoir engineering, especially history matching.
Indeed, if one looks at the following figure, one can see that in supervised ML the model is trained on a set of historical data; so far, nothing exceptional. But this historical data is first split into a training set and a test set, which has proved an efficient way of improving and checking the quality of the model.
(Source: Understanding Machine Learning, DataCamp)
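To make the split concrete, here is a minimal sketch in Python. The dataset, the 80/20 ratio, and the choice of a chronological (rather than shuffled) split are illustrative assumptions, not part of any specific workflow described above.

```python
import numpy as np

# Hypothetical historical record: 100 monthly observations (illustrative only,
# e.g. pressures in bar around 200 with some measurement noise).
rng = np.random.default_rng(0)
history = rng.normal(loc=200.0, scale=5.0, size=100)

# Chronological 80/20 split: train on the early data, test on the later data.
# For time-ordered data, a random shuffle would leak future information
# into the training set, so the split is made along the time axis.
split = int(0.8 * len(history))
train, test = history[:split], history[split:]

print(len(train), len(test))  # 80 20
```

A model calibrated only on `train` can then be judged by how well it predicts `test`, data it has never seen.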
What it suggests to me:
• Reservoir engineers are, in a sense, also data scientists, with a head start of 20 to 40 years or more: they were history matching and building predictive models long before modern ML/AI.
• Yet it must be said that the habit of splitting the available historical data into a training set and a test set never really took hold there. That is unfortunate, and transferring the habit should become customary, especially at the field development stage, where production data is almost absent but recent pressure gauge data may exist from the future producing well, from nearby exploration wells, or from regional wells farther away that have been on production for longer.
• An example: a previously "known" aquifer network model, or a regional pressure assessment, could be available in place of production data from the well being developed.
• A split in time would allow the quality of the model to be tested in advance.
• But one might ask: "Why is this an advantage?" In reservoir history matching, the workforce has been so used to conditioning the model on all of the available historical data that it has implicitly inferred, perhaps out of a kind of laziness or because of tight deadlines (hence the need to promote research), that the "best model" is the one fully constrained by all the available data. This is not a bad process per se, but the better model may sometimes be less constrained and more flexible, and allowing this split between training data and test data would occasionally help find a better predictor.
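The workflow suggested above can be sketched end to end. Everything here is a hypothetical stand-in: the synthetic regional pressure record, the 12-month hold-out window, and the use of a simple linear decline fit in place of a full reservoir simulation model, which a real study would calibrate instead.

```python
import numpy as np

# Hypothetical regional pressure record (time in months, pressure in bar).
# Both the synthetic decline trend and the noise level are assumptions.
rng = np.random.default_rng(1)
t = np.arange(60, dtype=float)
pressure = 250.0 - 0.8 * t + rng.normal(scale=1.5, size=t.size)

# Hold back the last 12 months as a "test" window, as an ML workflow would,
# instead of conditioning the model on the entire record.
split = t.size - 12
t_train, p_train = t[:split], pressure[:split]
t_test, p_test = t[split:], pressure[split:]

# Stand-in for the reservoir model: a linear pressure decline fitted on the
# training window only.
slope, intercept = np.polyfit(t_train, p_train, 1)
p_pred = slope * t_test + intercept

# Root-mean-square prediction error on the unseen window gives a direct,
# quantitative check of the model's forecasting quality before any history
# data from the test window is used for conditioning.
rmse = float(np.sqrt(np.mean((p_pred - p_test) ** 2)))
print(f"test-window RMSE: {rmse:.2f} bar")
```

Two candidate history-matched models can then be compared on the held-out window, and the one with the lower prediction error kept, even if it honors the training data slightly less tightly.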
That concludes my recommendation: in some instances, take this methodology originating in ML/AI workflows and apply it to history matching of reservoir engineering models.
Author: Ali Cherif Azi
Senior Reservoir Engineer
Founder of Welltest Nordisk
https://www.welltestnordisk.com/