(483c) Constrained Robust Bayesian Optimization of Expensive Black-Box Functions Under Uncertainty
AIChE Annual Meeting
2022
2022 Annual Meeting
Computing and Systems Technology Division
Design and Operations under Uncertainty - II
Wednesday, November 16, 2022 - 1:06pm to 1:24pm
Bayesian optimization (BO) has become a popular surrogate-based derivative-free optimization (DFO) method in recent years due to its data efficiency. By constructing a Gaussian process (GP) model of the objective and constraint functions, BO sequentially decides where to sample the expensive functions next in a way that trades off exploration (where the uncertainty in the prediction is large) against exploitation (where the mean function is predicted to perform well). Although BO has been successfully applied to many nominal problems, it has proven difficult to extend to the robust optimization setting because the minimax formulation involves two competing optimization stages. The MiMaReK algorithm [6] is one of the earliest robust BO strategies; it uses a two-level expected improvement acquisition function that requires a growing number of samples at each iteration, which limits its applicability to very expensive functions. Adversarially robust BO (ARBO) [7][8] is a more recent approach that overcomes this limitation by simultaneously modeling the effect of the design variables and the uncertainties on the objective function. ARBO requires only a single expensive function evaluation at every iteration, which can drastically reduce the number of function evaluations needed to achieve convergence. Furthermore, ARBO provides a bound on the number of iterations needed to achieve convergence by exploiting recent theory on the lower and upper confidence bounds of the GP. Although powerful, ARBO does not directly handle worst-case constraints, which are important in a wide variety of safety-critical applications in engineering and beyond.
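To make the ARBO-style selection rule concrete, the sketch below (ours, not the authors' implementation) shows one iteration of a confidence-bound min-max rule on discrete candidate grids: a single GP is fit over the joint design-uncertainty space, the design is chosen optimistically via the lower confidence bound, the uncertainty is chosen adversarially via the upper confidence bound, and only one expensive evaluation is made. The function and grid names and the beta parameter are illustrative placeholders.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def arbo_step(f, X_cand, W_cand, X_data, y_data, beta=2.0):
    # Fit a single GP over the joint (design x, uncertainty w) space.
    gp = GaussianProcessRegressor(normalize_y=True).fit(X_data, y_data)

    # Confidence bounds on the full candidate grid of (x, w) pairs.
    XW = np.array([np.hstack([x, w]) for x in X_cand for w in W_cand])
    mu, sd = gp.predict(XW, return_std=True)
    lcb = (mu - beta * sd).reshape(len(X_cand), len(W_cand))
    ucb = (mu + beta * sd).reshape(len(X_cand), len(W_cand))

    # Optimistic choice of the design: minimize over x the worst-case LCB over w.
    i_x = np.argmin(lcb.max(axis=1))
    # Adversarial choice of the uncertainty at that design: maximize the UCB over w.
    i_w = np.argmax(ucb[i_x])

    # One expensive evaluation per iteration, then augment the data set.
    x_next, w_next = X_cand[i_x], W_cand[i_w]
    y_next = f(x_next, w_next)
    X_data = np.vstack([X_data, np.hstack([x_next, w_next])])
    y_data = np.append(y_data, y_next)
    return x_next, w_next, X_data, y_data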
In this work, we propose a constrained extension of ARBO, which we refer to as constrained ARBO (CARBO), that is applicable to constrained robust optimization problems whose objective and/or constraint functions are defined in terms of expensive black-box functions. CARBO builds on recent work from our group that exploits exact penalty functions to develop convergent constrained BO algorithms [9]. To prove convergence in the robust constrained setting, we introduce the notion of robust penalty-based regret, which measures the difference in quality between our recommended point and the (unknown) globally optimal robust solution. We prove that the cumulative robust penalty-based regret grows sublinearly in the total number of iterations, which implies that there exists a (finite) iteration at which our recommended point is arbitrarily close to the constrained robust global solution. We demonstrate CARBO's performance on multiple test problems, including a high-fidelity simulation of a complex bioreactor [10]. We also discuss the importance of the final recommendation procedure, including a new heuristic strategy for improving the quality of the estimated worst-case performance and constraint satisfaction.
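As a rough illustration of how an exact penalty function couples the objective and constraints in a CARBO-style acquisition, the sketch below builds an optimistic (lower-bound) value of the penalized function F(x, w) = f(x, w) + rho * sum_j max(0, g_j(x, w)) from the confidence bounds of separate GP models for the objective and each constraint. The penalty weight rho and confidence parameter beta are illustrative placeholders, not values from the paper.

def penalized_lcb(mu_f, sd_f, mu_g, sd_g, rho=10.0, beta=2.0):
    # Optimistic value of the objective (lower confidence bound).
    value = mu_f - beta * sd_f
    # Penalize the optimistic violation of each constraint g_j(x, w) <= 0.
    for m, s in zip(mu_g, sd_g):
        value += rho * max(0.0, m - beta * s)
    return value

Nesting this penalized value inside a min-max selection rule like the one sketched above gives one plausible constrained analogue of the ARBO acquisition; it is a simplification for illustration only.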
References
[1] D. Bertsimas, O. Nohadani, and K. M. Teo, "Robust optimization for unconstrained simulation-based problems," Operations Research, vol. 58, no. 1, pp. 161-178, 2010.
[2] A. Ben-Tal and A. Nemirovski, "Robust optimization - methodology and applications," Mathematical Programming, vol. 92, no. 3, pp. 453-480, 2002.
[3] S. Žaković and C. Pantelides, "An interior point algorithm for computing saddle points of constrained continuous minimax," Annals of Operations Research, vol. 99, pp. 59-77, 2000.
[4] M. Diehl, H. G. Bock, and E. Kostina, "An approximation technique for robust nonlinear optimization," Mathematical Programming, Ser. B, vol. 107, pp. 213-230, 2006.
[5] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas, "Taking the human out of the loop: A review of Bayesian optimization," Proceedings of the IEEE, vol. 104, no. 1, pp. 148-175, 2015.
[6] J. Marzat, E. Walter, and H. Piet-Lahanier, "A new expected improvement algorithm for continuous minimax optimization," Journal of Global Optimization, vol. 64, no. 4, pp. 785-802, 2016.
[7] I. Bogunovic, J. Scarlett, S. Jegelka, and V. Cevher, "Adversarially robust optimization with Gaussian processes," 2018.
[8] J. A. Paulson, G. Makrygiorgos, and A. Mesbah, "Adversarially robust Bayesian optimization for efficient auto-tuning of generic control structures under uncertainty," AIChE Journal, e17591, 2021.
[9] C. Lu and J. A. Paulson, "No-regret Bayesian optimization with unknown equality and inequality constraints using exact penalty functions," IFAC Symposium on Dynamics and Control of Process Systems, including Biosystems, 2022 (accepted).
[10] J. Chen, J. Daniell, D. Griffin, X. Li, and M. A. Henson, "Experimental testing of a spatiotemporal metabolic model for carbon monoxide fermentation with Clostridium autoethanogenum," Biochemical Engineering Journal, vol. 129, pp. 64-73, 2018.