(769c) Online Process Optimization Using a Surrogate Optimizer | AIChE


Authors 

Krishnamoorthy, D. - Presenter, Harvard John A. Paulson School of Engineering and Applied Sciences
dos Santos, A., Norwegian University of Science and Technology
Skogestad, S., Norwegian University of Science and Technology
Real-time optimization (RTO) is traditionally based on rigorous steady-state process models, which are used by a numerical optimization solver to compute the optimal inputs and setpoints. The optimization problem must be re-solved every time a disturbance occurs. Since steady-state process models are used, it is necessary to wait until the plant has settled to a new steady state before updating the model parameters and estimating the disturbances; this model-update step is also known as “data reconciliation”. Darby et al. noted that this steady-state wait time is one of the fundamental limitations of the traditional RTO approach.

One of the main challenges with online process optimization is computational, including numerical robustness. For example, in many large-scale problems, solvers may take a long time to converge to the optimal solution or, in some cases, may fail to converge at all. Addressing this computational bottleneck is therefore an important aspect of solving optimization problems online.

In order to avoid solving a numerical optimization problem online, we aim to approximate computationally intensive optimization problems using machine-learning algorithms. Instead of developing surrogate models to be used inside the optimizer, we propose to build “surrogate optimizers” or “AI optimizers” that approximate the numerical optimization solver itself. By doing so, we move the computationally intensive optimization tasks offline and use the surrogate optimizer online to compute the optimal solution.

Open-loop surrogate network from measured disturbances - One way to achieve this is a feedforward approach, where we train a neural network that maps the measured disturbances to the optimal setpoints. This is analogous to multi-parametric programming, where the models are used offline to generate an optimal solution space as a function of the parameters (Pistikopoulos, 2009). Here, instead of using a solution space pre-computed from first-principles models, we use a neural network that approximates the optimization block.
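The offline/online split described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the scalar cost J(u, d) = (u - d)^2 + 0.1 u^2, the disturbance range, and the network size are all assumptions chosen so that the true optimizer u*(d) = d / 1.1 is known for checking.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

def solve_rto(d):
    """Rigorous numerical RTO for a toy plant: minimize J(u, d) over u.
    (Assumed cost; the true optimizer is u*(d) = d / 1.1.)"""
    res = minimize(lambda u: (u[0] - d) ** 2 + 0.1 * u[0] ** 2, x0=[0.0])
    return res.x[0]

# Offline phase: sample disturbances and solve the optimization for each one.
rng = np.random.default_rng(0)
d_train = rng.uniform(-2.0, 2.0, size=(500, 1))
u_opt = np.array([solve_rto(d) for d in d_train.ravel()])

# Fit the "surrogate optimizer": measured disturbance -> optimal setpoint.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(d_train, u_opt)

# Online phase: a cheap forward pass replaces the numerical solver.
d_new = 1.0
u_pred = net.predict([[d_new]])[0]
```

Online, only the forward pass through `net` is evaluated, so the cost of solving the optimization problem is paid once, offline, over the sampled disturbance space.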

Closed-loop surrogate network from available measurements - In many cases, the disturbances are not measured. Alternatively, models can be trained to directly map the available measurements y to the optimal setpoints. In this structure, the neural network approximates both the model-update and the optimization blocks, hence eliminating the need for the process disturbances to be measured or estimated.

Gradient-based approach - Another approach is to train a network that directly maps the measurements y to the gradient of the cost with respect to the inputs. The cost gradient can then be estimated online using the trained model and real-time measurement data. Optimal operation can then be achieved by simply controlling the estimated gradient to a constant setpoint of zero, thereby satisfying the necessary condition of optimality (Srinivasan et al., 2011; Krishnamoorthy et al., 2019).
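The feedback loop in the gradient-based structure can be sketched as below. This is an assumed toy example: the steady-state cost J(u) = (u - 2)^2, the measurement model, and the integral gain are illustrative, and a simple function stands in for the trained network that would supply the gradient estimate in practice.

```python
def grad_estimate(y):
    # Stand-in for the trained network y -> dJ/du. For the assumed cost
    # J(u) = (u - 2)^2, the true steady-state gradient is 2 * (u - 2).
    return 2.0 * (y - 2.0)

u = 0.0          # initial input
Ki = 0.2         # integral gain of the gradient controller
for _ in range(100):
    y = u                   # toy measurement: y = u at steady state
    Ju = grad_estimate(y)   # estimated steady-state cost gradient
    u = u - Ki * Ju         # integral action drives Ju toward zero
# u converges to the optimum u* = 2, where dJ/du = 0
```

Because the controller simply integrates the estimated gradient toward zero, no optimization problem is solved online, and the loop reacts to transient measurements rather than waiting for steady state.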

The two main advantages of such a surrogate optimization approach are that (1) we need not explicitly solve numerical optimization problems online, and (2) we alleviate the steady-state wait-time issue. The effectiveness of the different surrogate optimization approaches will be demonstrated using CSTR and oil production optimization case examples.

References

  1. Pistikopoulos, E. N., 2009. Perspectives in multiparametric programming and explicit model predictive control. AIChE Journal, 55(8), pp.1918-1925.
  2. Krishnamoorthy, D., Jahanshahi, E. and Skogestad, S., 2019. Feedback Real-Time Optimization Strategy Using a Novel Steady-state Gradient Estimate and Transient Measurements. Ind. Eng. Chem. Res., 58(1), pp.207-216.
  3. Srinivasan, B., François, G. and Bonvin, D., 2011. Comparison of gradient estimation methods for real-time optimization. In Computer Aided Chemical Engineering (Vol. 29, pp. 607-611). Elsevier.