(10g) Global Dynamic Optimization Using Hardware-Accelerated Programming | AIChE


Authors 

Gottlieb, R. - Presenter, University of Connecticut
Stuber, M., University of Connecticut
Dynamic optimization problems arise in cases where the transient behavior of a system is of prime importance, such as in mechanistic model validation [1] and robust model predictive control [2]. The parametric ordinary differential equation (pODE) systems that comprise these problems are often nonconvex in the parameters, necessitating a global solution. To locate guaranteed global solutions, deterministic search algorithms, such as spatial branch-and-bound (B&B) frameworks, are needed. In this work, we utilize a relax-then-discretize [1, 3, 4] approach and B&B to solve dynamic optimization problems to global optimality. In relax-then-discretize, the right-hand-side functions of the pODE system are relaxed to form an auxiliary pODE system that can be solved to furnish relaxations—convex underestimators and concave overestimators—of the original system. Since this auxiliary system depends on the parameter bounds of any given B&B node, the auxiliary system, a corresponding set of relaxations, and a lower bound on the optimal solution value are typically computed separately for each node.
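To make the relax-then-discretize idea concrete, the following minimal sketch (not the implementation described in this work) bounds the trajectories of a simple pODE, x'(t) = -p·x(t) with x(0) = 1 and p ∈ [pL, pU], using the differential-inequality construction: an auxiliary system xL' = -pU·xL, xU' = -pL·xU whose solution brackets every trajectory of the original system. A rigorous implementation would use validated integration rather than the plain forward-Euler stepping used here.

```python
# Sketch only: interval-bounding auxiliary pODE for x'(t) = -p*x(t), x(0) = 1,
# with p in [pL, pU] and x(t) > 0. Differential inequalities give the
# auxiliary system
#     xL' = -pU * xL,    xU' = -pL * xU,
# whose solution encloses every trajectory of the original pODE on the node.

def bound_trajectories(pL, pU, x0=1.0, T=1.0, n=1000):
    """Forward-Euler integration of the bounding (auxiliary) system."""
    dt = T / n
    xL, xU = x0, x0
    for _ in range(n):
        xL += dt * (-pU * xL)  # lower bound decays at the fastest rate
        xU += dt * (-pL * xU)  # upper bound decays at the slowest rate
    return xL, xU

def exact(p, x0=1.0, T=1.0):
    """Analytic solution x(T) = x0 * exp(-p*T) for comparison."""
    import math
    return x0 * math.exp(-p * T)
```

Solving one auxiliary system per B&B node in this way is what makes the node's lower-bounding problem tractable, at the cost of the repeated pODE solves discussed below.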

In practice, creating an auxiliary pODE system follows known recursive rules (such as those for generating αBB convex relaxations [5] or McCormick-based relaxations [6–8]) and is computationally fast. However, solving a pODE system to evaluate relaxations and their corresponding subgradients [9, 10], and then using those evaluations to calculate a lower bound for the node in the B&B tree, can be computationally expensive. Even when applying methods that do not require subgradients, such as the black-box sampling method proposed by Song et al. [11], there is still a significant real-time cost arising from the added relaxation evaluations needed to furnish a lower bound. In particular, because the structure of the relaxations depends on the parameter domain, pointwise evaluations are typically computed serially, so the time cost of computing a lower bound for each node scales linearly with the number of evaluations.
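As an illustration of the recursive rules mentioned above, the sketch below evaluates the textbook McCormick envelope of a single bilinear term w = x·y over a box [xL, xU] × [yL, yU] [8]. This is one composition rule applied pointwise, not the source-code-transformation tool described in this work; a full relaxation of a pODE right-hand side composes such rules recursively over the whole expression.

```python
# Sketch: the classical McCormick envelope for the bilinear term w = x * y
# over the box [xL, xU] x [yL, yU]. Real McCormick arithmetic composes rules
# like this one recursively over an entire factorable expression.

def mccormick_bilinear(x, y, xL, xU, yL, yU):
    """Return (convex underestimate, concave overestimate) of x*y at (x, y)."""
    cv = max(yL * x + xL * y - xL * yL,   # the two convex underestimators
             yU * x + xU * y - xU * yU)
    cc = min(yU * x + xL * y - xL * yU,   # the two concave overestimators
             yL * x + xU * y - xU * yL)
    return cv, cc
```

Note that the coefficients of these affine pieces depend on the box [xL, xU] × [yL, yU]; this is precisely the node-bound dependence that forces relaxation structure to be recomputed for each B&B node.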

In this presentation, we discuss our work to improve the real-time solution speed of global dynamic optimization problems by focusing on hardware-accelerated programming. We start by using a newly developed tool to automatically generate an auxiliary pODE system using McCormick-based relaxations through source-code transformation. The auxiliary system is constructed such that the node-specific structure of the relaxations is embedded within the right-hand-side functions. This embedding enables multiple simultaneous evaluations of the relaxed pODE system that can be distributed across multiple CPU or GPU cores independently of the node bounds in parameter space. We then employ the black-box sampling method of Song et al. [11] to calculate lower bounds, which places the bulk of the computational effort on the parallelized function evaluations. The utility of this parallelization is demonstrated via several literature examples and a case study on identifying catalytic reaction pathways, and numerical results are presented from the implementation of this work as a software extension to the EAGO software package in Julia [12–15].
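The parallelization pattern this enables can be sketched as follows. All names here are hypothetical stand-ins, not the API of the EAGO extension: `relaxed_objective` represents one solve of the auxiliary (relaxed) pODE system at a parameter sample, and because the node-specific relaxation structure is embedded in the right-hand side, these evaluations are independent and can be dispatched to separate workers. The `min` of sampled relaxation values shown here only illustrates the dispatch pattern; the actual sampling method of Song et al. [11] constructs a valid affine underestimator from structured samples to obtain a guaranteed lower bound.

```python
# Sketch (hypothetical names): once the node's relaxation structure is baked
# into the right-hand-side functions, evaluating the relaxed system at many
# sample points is embarrassingly parallel.

from concurrent.futures import ThreadPoolExecutor

def relaxed_objective(p):
    """Stand-in for one solve of the auxiliary (relaxed) pODE system at
    parameter sample p, returning a convex-underestimator value."""
    return (p - 1.0) ** 2 - 0.25  # hypothetical convex relaxation

def sampled_lower_bound(samples, workers=4):
    """Dispatch the relaxation evaluations to parallel workers.
    (Illustration only: a guaranteed lower bound requires building an
    affine underestimator from the samples as in Song et al. [11],
    not taking their minimum.)"""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        vals = list(ex.map(relaxed_objective, samples))
    return min(vals)
```

In the work presented here, the same independence is what allows the evaluations to be distributed across CPU or GPU cores, so that the per-node cost no longer grows linearly with the number of serial evaluations.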

[1] Adam B. Singer, James W. Taylor, Paul I. Barton, and William H. Green. Global dynamic optimization for parameter estimation in chemical kinetics. The Journal of Physical Chemistry A, 110(3):971–976, dec 2006. doi: 10.1021/jp0548873.

[2] D. Limon, T. Alamo, E.F. Camacho, and J.M. Bravo. Robust MPC of constrained nonlinear systems based on interval arithmetic. IEE Proceedings - Control Theory and Applications, 152(3):325–332, may 2005. doi: 10.1049/ip-cta:20040480.

[3] Adam B. Singer and Paul I. Barton. Global optimization with nonlinear ordinary differential equations. Journal of Global Optimization, 34(2):159–190, feb 2006. doi: 10.1007/s10898-005-7074-4.

[4] Joseph K. Scott and Paul I. Barton. Improved relaxations for the parametric solutions of ODEs using differential inequalities. Journal of Global Optimization, 57(1):143–176, may 2013. doi: 10.1007/s10898-012-9909-0.

[5] C.S. Adjiman, S. Dallwig, C.A. Floudas, and A. Neumaier. A global optimization method, αBB, for general twice-differentiable constrained NLPs — I. Theoretical advances. Computers & Chemical Engineering, 22(9):1137–1158, aug 1998. doi: 10.1016/s0098-1354(98)00027-1.

[6] Adam B. Singer. Global Dynamic Optimization. PhD thesis, Massachusetts Institute of Technology, 2004.

[7] Adam B. Singer and Paul I. Barton. Global solution of optimization problems with parameter embedded linear dynamic systems. Journal of Optimization Theory and Applications, 121(3):613–646, jun 2004. doi: 10.1023/b:jota.0000037606.79050.a7.

[8] Garth P. McCormick. Computability of global solutions to factorable nonconvex programs: Part I — convex underestimating problems. Mathematical Programming, 10(1):147–175, dec 1976. doi: 10.1007/bf01580665.

[9] Alexander Mitsos, Benoît Chachuat, and Paul I. Barton. McCormick-based relaxations of algorithms. SIAM Journal on Optimization, 20(2):573–601, jan 2009. doi: 10.1137/080717341.

[10] Ali M. Sahlodin and Benoît Chachuat. Convex/concave relaxations of parametric ODEs using Taylor models. Computers & Chemical Engineering, 35(5):844–857, may 2011. doi: 10.1016/j.compchemeng.2011.01.031.

[11] Yingkai Song, Huiyi Cao, Chiral Mehta, and Kamil A. Khan. Bounding convex relaxations of process models from below by tractable black-box sampling. Computers & Chemical Engineering, 153:107413, oct 2021. doi: 10.1016/j.compchemeng.2021.107413.

[12] Jeff Bezanson, Alan Edelman, Stefan Karpinski, and Viral B. Shah. Julia: A fresh approach to numerical computing. arXiv preprint arXiv:1411.1607, 2014.

[13] Matthew E. Wilhelm, Robert X. Gottlieb, and Matthew D. Stuber. PSORLab/McCormick.jl, 2020. URL: https://github.com/PSORLab/McCormick.jl.

[14] Matthew E. Wilhelm and Matthew D. Stuber. Easy advanced global optimization (EAGO): An open-source platform for robust and global optimization in Julia. In AIChE Annual Meeting. AIChE, 2017.

[15] Matthew E. Wilhelm and Matthew D. Stuber. EAGO.jl: easy advanced global optimization in Julia. Optimization Methods and Software, pages 1–26, aug 2020. doi: 10.1080/10556788.2020.1786566.