(431d) Accelerated Deep Learning-Based Data Assimilation and CO2 Injection Optimization at the Illinois Basin-Decatur Carbon Sequestration Project | AIChE


Authors 

Sakai, T. - Presenter, Texas A&M University
Chan, C. H., Texas A&M University
Datta-Gupta, A., Texas A&M University
Carbon capture and storage (CCS) is widely considered one of the most promising technologies for reducing greenhouse gas emissions. Extensive research and field tests have been conducted to optimize CO2 storage operations in different subsurface environments, such as depleted oil and gas fields or deep saline aquifers. It is essential to monitor the CO2 plume effectively throughout the life cycle of a geologic CO2 sequestration project to ensure safety and storage efficiency. However, there are several underlying challenges. First, field-scale high resolution numerical simulations of CO2 storage are computationally expensive, involving complex physics associated with multi-component and non-isothermal fluid flow. Existing data assimilation methods entail the solution of a non-linear inverse problem requiring numerous flow simulations, making them unscalable for field applications in many cases. Second, CO2 injection into the subsurface involves various risks, such as CO2 leakage and the potential for induced seismicity. It is imperative to conduct a comprehensive evaluation considering multiple factors, including pressure maintenance, seal integrity, and the potential for CO2 leakage from legacy wells, natural fractures, and faults. The optimization of a CO2 injection project needs to involve multiple objectives, and these objectives often exhibit trade-offs, rendering the optimization of the CO2 injection schedule challenging. We address all these challenges by leveraging recent advancements in deep learning techniques, which significantly accelerate field-scale data assimilation and multi-objective optimization.

Our proposed approach is composed of two parts: data assimilation to calibrate the dynamic reservoir model by integrating field monitoring data, followed by optimization of the CO2 injection schedule. In the data assimilation framework, a deep neural network model is developed that takes available monitoring data, such as well pressure and temperature measurements, as inputs and predicts representative images of the subsurface flow field. The conventional approach to building a deep learning model involves a neural network architecture where the input consists of the field monitoring data and the output is the pressure and CO2 saturation distribution at different timesteps. However, the high dimensionality of the spatiotemporal pressure and saturation images makes the neural network training inefficient and impractical for field-scale applications. We improve the efficiency and scalability of the framework in three ways. First, instead of using multiple pressure and CO2 saturation maps for different timesteps, a single diffusive time of flight (DTOF) map is used as the output, serving as a representative subsurface image. The DTOF represents the arrival time of pressure front propagation, which can be computed rapidly by the Fast Marching Method (FMM), an efficient front-tracking approach that does not require expensive fluid flow simulations. Since the reservoir dynamics are compressed into a single DTOF image, the memory requirements and computational costs are reduced significantly, and the neural network architecture is simplified considerably. Second, we apply ‘optimal’ grid coarsening of the geologic models to reduce the simulation time while preserving the simulation accuracy. The coarsening scheme is based on a ‘bias-variance trade-off’ that maximizes the computational time savings while minimizing the error in simulated monitoring data, such as well pressure and temperature data.
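The front-tracking idea behind the DTOF computation can be illustrated with a minimal sketch. The DTOF satisfies an Eikonal-type equation whose slowness is set by the local diffusivity; below is a simplified Dijkstra-style sweep on a 2D grid (a stand-in for a full FMM implementation — the field name `alpha` for diffusivity and the grid setup are illustrative assumptions, not from the abstract):

```python
import heapq
import numpy as np

def dtof_sweep(alpha, src, h=1.0):
    """Approximate the diffusive time of flight tau solving |grad tau| = 1/sqrt(alpha)
    with a Dijkstra-style front-tracking sweep (a simplified stand-in for FMM).
    alpha : 2D array of local diffusivity; src : (i, j) index of the well cell."""
    ny, nx = alpha.shape
    tau = np.full((ny, nx), np.inf)
    tau[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > tau[i, j]:
            continue  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                # slowness 1/sqrt(alpha) averaged across the cell edge
                s = 0.5 * (1 / np.sqrt(alpha[i, j]) + 1 / np.sqrt(alpha[ni, nj]))
                nt = t + h * s
                if nt < tau[ni, nj]:
                    tau[ni, nj] = nt
                    heapq.heappush(heap, (nt, (ni, nj)))
    return tau

# homogeneous field: tau grows linearly with grid distance from the well
alpha = np.ones((5, 5))
tau = dtof_sweep(alpha, (2, 2))
print(tau[2, 4])  # two cells from the well -> 2.0
```

The sweep visits each cell once in order of increasing arrival time, which is why no time-stepped flow simulation is needed to obtain the pressure-front arrival map.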
Third, we apply a variational autoencoder-decoder (VAE) network to compress the high dimensional DTOF images into lower dimensional latent variables. The use of image compression with the VAE considerably simplifies the neural network architecture, enabling efficient training for large-scale reservoir applications. A regression model, consisting of feed-forward and convolutional neural networks, is trained to predict the latent variables of the VAE based on the available monitoring data. Subsequently, the predicted latent variables are fed into the trained decoder network to estimate the 3D DTOF image. The VAE can provide multiple DTOF image predictions to account for underlying uncertainties. Finally, the trained neural network can be used for reservoir model calibration and prediction of the CO2 plume migration using observed monitoring data. Calibrated reservoir models are identified by selecting multiple plausible training realizations whose DTOF maps are close to the map predicted from the field monitoring data.
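The final model-selection step can be sketched as a nearest-neighbor search in the VAE latent space. The arrays, dimensions, and the simple Euclidean metric below are illustrative assumptions:

```python
import numpy as np

def select_calibrated_models(z_pred, z_train, n_select=3):
    """Pick the training realizations whose latent codes lie closest to the
    latent vector predicted from field monitoring data (Euclidean distance)."""
    d = np.linalg.norm(z_train - z_pred, axis=1)
    return np.argsort(d)[:n_select]

rng = np.random.default_rng(0)
z_train = rng.normal(size=(200, 8))   # latent codes of 200 prior realizations
z_pred = z_train[17] + 0.01           # hypothetical prediction near realization 17
idx = select_calibrated_models(z_pred, z_train)
print(idx[0])  # -> 17
```

Because the search happens in the low-dimensional latent space rather than on full 3D DTOF grids, selecting calibrated models is essentially instantaneous once training is done.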

The calibrated reservoir models are used for optimizing the CO2 injection schedule. The proposed optimization workflow utilizes a recently developed architecture, the Fourier Neural Operator (FNO), as a data-driven proxy model. The inputs of the FNO model include the permeability distribution, porosity distribution, and CO2 injection schedule, and it estimates the subsurface pressure and CO2 saturation images as outputs. The most time-consuming part of the data-driven framework is the training data generation, which requires numerous flow simulations. This challenge can be addressed by exploiting the super-resolution feature of the FNO. The FNO model can be trained using low resolution image data, and the trained model can predict high resolution image data by interpolating the solution within the Fourier domain. Therefore, training data generation can be conducted using coarsened reservoir models, reducing the computational cost significantly. Areal model coarsening is applied to the permeability and porosity fields, and a training dataset is generated by running numerical simulations with different CO2 injection schedules. The simulations provide coarse scale pressure and saturation distributions as training data. Although the FNO model is trained using the coarse scale data, the trained model can estimate fine scale pressure and CO2 saturation distributions by leveraging the super-resolution feature without significantly compromising the accuracy. For CO2 injection schedule optimization, a multi-objective genetic algorithm (MOGA) is utilized to properly account for the multiple and potentially conflicting objectives. The primary advantage of MOGA is its ability to optimize each objective function individually, unlike other optimization algorithms that tend to aggregate all objective functions into a single function.
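The Fourier-domain interpolation that underlies the FNO's super-resolution can be illustrated with plain FFT zero-padding: a band-limited periodic field sampled on a coarse grid is resampled exactly on a finer grid by padding its spectrum. This is a sketch of the mechanism only, not the authors' FNO implementation; the toy field and grid sizes are assumptions:

```python
import numpy as np

def fourier_upsample(field, factor=2):
    """Upsample a periodic 2D field by zero-padding its Fourier spectrum --
    the kind of spectral interpolation that lets an operator trained on
    coarse grids be evaluated on fine ones."""
    ny, nx = field.shape
    F = np.fft.fftshift(np.fft.fft2(field))
    pad_y = (ny * (factor - 1)) // 2
    pad_x = (nx * (factor - 1)) // 2
    Fp = np.pad(F, ((pad_y, pad_y), (pad_x, pad_x)))
    # rescale to compensate for the larger inverse-FFT normalization
    return np.fft.ifft2(np.fft.ifftshift(Fp)).real * factor**2

# a smooth periodic "pressure" field is recovered exactly on the finer grid
x = np.linspace(0, 2 * np.pi, 16, endpoint=False)
coarse = np.sin(x)[None, :] + np.cos(x)[:, None]
fine = fourier_upsample(coarse, factor=2)
xf = np.linspace(0, 2 * np.pi, 32, endpoint=False)
exact = np.sin(xf)[None, :] + np.cos(xf)[:, None]
print(np.max(np.abs(fine - exact)) < 1e-10)  # -> True
```

For smooth subsurface pressure fields, the low Fourier modes carry most of the information, which is why coarse-grid training data can still support fine-grid predictions.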
In the optimization framework, the FNO-based proxy model is used as the forward model and requires only a few seconds, as opposed to several hours, for pressure and CO2 saturation predictions. The optimization comprises three objectives: minimizing the subsurface pressure increase, maximizing the total CO2 injection amount, and maximizing the storage efficiency. Since MOGA provides multiple optimal realizations on the Pareto front, users can select the most desirable realizations based on their specific requirements or constraints, such as the maximum allowable pressure or the target amount of CO2 to be stored.
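The Pareto-front selection at the heart of a MOGA can be sketched as non-dominated filtering: a candidate schedule survives only if no other schedule is at least as good in every objective and strictly better in one. The toy objective values below are assumptions for illustration (pressure increase is negated so that all three objectives are maximized):

```python
import numpy as np

def pareto_front(objs):
    """Return indices of non-dominated points. All objectives are to be
    maximized; minimization objectives should be negated beforehand."""
    n = len(objs)
    front = []
    for i in range(n):
        dominated = any(
            np.all(objs[j] >= objs[i]) and np.any(objs[j] > objs[i])
            for j in range(n) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# toy schedules scored on (-pressure increase, CO2 stored, storage efficiency)
objs = np.array([
    [-3.0, 10.0, 0.6],
    [-1.0,  6.0, 0.5],
    [-1.0,  6.0, 0.4],   # dominated by the schedule above it
    [-2.0,  9.0, 0.7],
])
print(pareto_front(objs))  # -> [0, 1, 3]
```

Each surviving index represents a different trade-off (e.g., lower pressure build-up versus more CO2 stored), which is exactly the menu of options the abstract describes handing to the operator.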

The power and efficacy of our approach is demonstrated by applying it to the Illinois Basin-Decatur Project (IBDP), a large-scale CO2 storage test in a saline aquifer. The data assimilation is implemented using field measurements, including distributed pressure and distributed temperature sensing (DTS) data at an injection well and a monitoring well, and can be conducted in a few seconds after the neural network training is completed. The CO2 plume evolution is predicted by simulating the calibrated reservoir models identified through the VAE latent space model selection. Subsequently, the calibrated reservoir models are utilized in the optimization framework. Training data for the FNO proxy model are generated at the coarse scale, whereby an order of magnitude computational speed-up is achieved compared to the original fine scale simulations. The accuracy of the trained proxy model is verified by comparing with a commercial reservoir simulator, and the prediction of the CO2 saturation and pressure distribution takes only about 10 seconds as opposed to several hours with the original fine scale simulation. Next, the MOGA is applied to optimize the CO2 injection schedule using the fast FNO proxy model. The multiple objectives considered in the optimization framework are improved considerably, providing optimized CO2 injection schedules. An orders-of-magnitude speed-up is achieved compared to the traditional workflow that uses numerical simulation as the forward model.

Our proposed deep learning-based framework enables rapid data assimilation and injection schedule optimization for large-scale CO2 storage applications. The data assimilation framework is significantly accelerated by exploiting the efficacy of deep learning, the concept of the DTOF, and optimal coarsening of the geologic models. The evolution of the CO2 plume images is predicted by running the calibrated reservoir models. In the optimization framework, the super-resolution feature of the FNO significantly reduces the computational cost associated with the training data generation. Furthermore, the proxy model accelerates forward simulation by orders of magnitude, enabling examination of multiple optimization scenarios for large-scale field cases while accounting for the complex physics involved in CO2 storage applications, such as non-isothermal and compositional fluid flow.