(194g) Approximating the Solution of Dynamic Optimization Problems Via Deep Neural Operators | AIChE

Authors 

Mitrai, I. - Presenter, University of Minnesota
Daoutidis, P., University of Minnesota-Twin Cities
Dynamic optimization problems arise frequently in the operation of process systems, where an optimization problem must be solved repeatedly online, within a limited computational time budget, while accounting for the dynamic behavior of the system. Typical examples include dynamic real-time optimization (DRTO) and model predictive control (MPC). In both cases, given an input function p(t) (such as a set point or production target), one must determine the operating policy over time, denoted by u*(t), that optimizes a desired objective. Despite significant algorithmic advances in the solution of dynamic optimization problems [1-3], the online solution of such problems remains challenging.

Recently, machine learning (ML) has been extensively used to reduce the computational time related to the solution of optimization problems [4]. In the context of DRTO and MPC, ML can be used to approximate the dynamic behavior of the system with simpler surrogate models [5,6] or to accelerate the optimization solver [7,8]. Despite the reduction in CPU time, these approaches still require the online solution of an optimization problem. The alternative is to approximate the solution of the dynamic optimization problem itself. Existing approaches approximate the solution of discretized dynamic optimization problems using neural networks [9,10], which are universal function approximators. However, the solution of a dynamic optimization problem is a function of time (for the case of systems described by ODEs), i.e., u*(t) = G(p)(t), with G an operator that maps the input function p(t) to the optimal solution u*(t). Therefore, predicting the optimal solution requires learning the operator G.

Deep Neural Operators (DeepONets) are neural network architectures that approximate an operator by combining two networks [11]. The first, called the branch network, takes the input function p as input (a = NN_b(p)); the second, called the trunk network, takes a point in the domain of the output function (φ = NN_t(t)). The prediction is the dot product of the outputs of the branch and trunk networks. DeepONets have been used to approximate the solution of partial differential equations [12] and control laws for distributed parameter systems [13].
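The branch/trunk structure and the dot-product prediction described above can be sketched as follows. This is a minimal illustration with random (untrained) weights, not the architecture used in the paper: the layer widths, sensor count, and the sinusoidal input profile are all hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(widths):
    """Random-weight tanh MLP, standing in for a trained network."""
    params = [(rng.normal(size=(m, n)) / np.sqrt(m), np.zeros(n))
              for m, n in zip(widths[:-1], widths[1:])]
    def forward(x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.tanh(x)
        return x
    return forward

m, latent = 50, 40             # sensor points for p(t); latent dimension
branch = mlp([m, 64, latent])  # a = NN_b(p): encodes the input function
trunk = mlp([1, 64, latent])   # phi = NN_t(t): encodes the query time

def deeponet(p_sensors, t_query):
    """u*(t) ~ dot product of branch and trunk outputs at each query time t."""
    a = branch(p_sensors[None, :])   # shape (1, latent)
    phi = trunk(t_query[:, None])    # shape (n_t, latent)
    return (phi * a).sum(axis=1)     # shape (n_t,)

# A hypothetical input profile p(t) sampled at m sensor points,
# with the predicted trajectory queried at 100 time instants.
t_sensor = np.linspace(0.0, 1.0, m)
p = np.sin(2 * np.pi * t_sensor)
t_query = np.linspace(0.0, 1.0, 100)
u_hat = deeponet(p, t_query)         # approximate optimal input trajectory
```

Note that, unlike a standard feedforward network trained on a discretized solution, the trunk network lets the trajectory be evaluated at arbitrary query times t.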

In this paper, we propose the application of DeepONets for approximating the solution of dynamic optimization problems arising in dynamic real-time optimization of chemical processes. We consider the case of a non-isothermal continuously stirred tank reactor where, given a production target, a DRTO problem is solved to determine the inlet flow rate and cooling/heating duties. The numerical results show that DeepONets lead to a significant reduction in computational time (up to three orders of magnitude) compared to solving the DRTO problem with mathematical optimization solvers, while incurring a small relative approximation error (on the order of 10^-1). Finally, we show that DeepONets have lower approximation error than standard feedforward neural network architectures. Overall, these results highlight the ability of DeepONets to approximate the solution of DRTO problems, alleviating the need to solve nonlinear optimization problems online.

References:

[1]. Vassiliadis, V.S., Sargent, R.W. and Pantelides, C.C., 1994. Solution of a class of multistage dynamic optimization problems. 1. Problems without path constraints. Industrial & Engineering Chemistry Research, 33(9), pp.2111-2122.

[2]. Vassiliadis, V.S., Sargent, R.W. and Pantelides, C.C., 1994. Solution of a class of multistage dynamic optimization problems. 2. Problems with path constraints. Industrial & Engineering Chemistry Research, 33(9), pp.2123-2133.

[3]. Biegler, L.T., 2024. Multi-level optimization strategies for large-scale nonlinear process systems. Computers & Chemical Engineering, p.108657.

[4]. Bengio, Y., Lodi, A. and Prouvost, A., 2021. Machine learning for combinatorial optimization: a methodological tour d’horizon. European Journal of Operational Research, 290(2), pp.405-421.

[5]. Ren, Y.M., Alhajeri, M.S., Luo, J., Chen, S., Abdullah, F., Wu, Z. and Christofides, P.D., 2022. A tutorial review of neural network modeling approaches for model predictive control. Computers & Chemical Engineering, 165, p.107956.

[6]. Kumar, P. and Rawlings, J.B., 2023. Structured nonlinear process modeling using neural networks and application to economic optimization. Computers & Chemical Engineering, 177, p.108314.

[7]. Mitrai, I. and Daoutidis, P., 2023. Taking the human out of decomposition-based optimization via artificial intelligence: Part II. Learning to initialize. arXiv preprint arXiv:2310.07082.

[8]. Mitrai, I. and Daoutidis, P., 2023. Computationally efficient solution of mixed integer model predictive control problems via machine learning aided Benders Decomposition. arXiv preprint arXiv:2309.16508.

[9]. Vaupel, Y., Hamacher, N.C., Caspari, A., Mhamdi, A., Kevrekidis, I.G. and Mitsos, A., 2020. Accelerating nonlinear model predictive control through machine learning. Journal of Process Control, 92, pp.261-270.

[10]. Karg, B. and Lucia, S., 2018, June. Deep learning-based embedded mixed-integer model predictive control. In 2018 European Control Conference (ECC) (pp. 2075-2080). IEEE.

[11]. Lu, L., Jin, P., Pang, G., Zhang, Z. and Karniadakis, G.E., 2021. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3), pp.218-229.

[12]. Wang, S., Wang, H. and Perdikaris, P., 2021. Learning the solution operator of parametric partial differential equations with physics-informed DeepONets. Science Advances, 7(40), p.eabi8605.

[13]. Krstic, M., Bhan, L. and Shi, Y., 2024. Neural operators of backstepping controller and observer gain functions for reaction–diffusion PDEs. Automatica, 164, p.111649.