(578d) A Unified Framework for Adjustable Robust Optimization with Endogenous Uncertainty and Active Learning

Authors 

Zhang, Q. - Presenter, University of Minnesota
Feng, W., Zhejiang University
Most commonly, data-driven optimization follows a predict-then-optimize framework in which data are used to estimate unknown parameters of an optimization model, and the problem is then solved using the predicted parameter values. There is inherent uncertainty in this approach since the accuracy of the point estimates depends on the amount and quality of the data. In sequential decision-making settings, such as process control and scheduling, the uncertainty is reduced by acquiring new data and updating the model parameters as we go. Importantly, the data we obtain depend on the operational decisions we make. In a predict-then-optimize framework, prediction (or learning) occurs only in a passive fashion since prediction and optimization are treated separately. As a result, we may miss opportunities to obtain other relevant information or data from less explored regions of the parameter space.
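As a minimal illustration of this two-step structure (our notation, not part of the original abstract), the learning step produces a point estimate and the optimization step then treats it as the truth,

$$\hat{\theta} \in \arg\min_{\theta} \sum_{i=1}^{N} \ell\big(y_i, f(x_i;\theta)\big), \qquad z^{\star} \in \arg\min_{z \in \mathcal{Z}} \; c(z;\hat{\theta}),$$

so any estimation error in $\hat{\theta}$ propagates directly into the decision $z^{\star}$, and nothing in the second problem rewards choosing a decision that would generate more informative data.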

In contrast, in active learning, one actively decides which new data to acquire. When integrated into an optimization problem, the goal is to obtain new information that provides a good trade-off between exploration and exploitation. This problem has been considered under various frameworks, such as optimal design of experiments, reinforcement learning, Bayesian optimization [1], and dual control [2], to name just a few. However, most existing methods either consider settings in which the decisions are subject to no or only very simple constraints, or they assume that decisions made at one time point do not affect decisions at future time points. These limitations severely restrict the application of those methods to highly constrained, complex problems in which exploration actions are often expensive.

In this work, we highlight the connection between active learning and optimization under endogenous uncertainty [3, 4], which becomes apparent once we interpret learning as a means of uncertainty reduction. We observe that this constitutes only one specific type of endogenous uncertainty and that a much more powerful modeling tool could be developed if all common types of endogenous uncertainty (referred to in the literature as type 1 and type 2) were incorporated within one unified framework. To that end, we introduce a new, refined classification of endogenous uncertainty and provide a set-based adjustable robust optimization perspective that allows us to conceptualize all defined types of endogenous uncertainty. We consider decision-dependent polyhedral uncertainty sets and propose a multistage decision rule approach that incorporates both continuous and binary recourse, including recourse decisions that affect the uncertainty set; an illustrative formulation is sketched below. The proposed method further enables the appropriate modeling of decision-dependent nonanticipativity and a tractable reformulation of the multistage robust optimization problem. We demonstrate the effectiveness of this approach in multiple computational case studies, with applications in plant redesign and production scheduling. The results show the significant benefit of properly modeling endogenous uncertainty and active learning.
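As an illustrative sketch of this set-based perspective (our notation and a simplified affine rule, not necessarily the exact formulation proposed in the work), a decision-dependent polyhedral uncertainty set and a stage-wise decision rule can be written as

$$\mathcal{U}(x) = \left\{ \xi \in \mathbb{R}^{m} : A\,\xi \le b + D\,x \right\}, \qquad y_t(\xi) = y_t^{0} + \sum_{s \le t} Y_{ts}\,\xi_s,$$

where the here-and-now (possibly binary) decisions $x$ reshape the uncertainty set itself, capturing endogenous uncertainty, and the recourse $y_t$ may depend only on uncertain parameters $\xi_s$ revealed by stage $t$; when the time of revelation itself depends on earlier decisions, the nonanticipativity structure becomes decision-dependent.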

References

[1] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. De Freitas, “Taking the human out of the loop: A review of Bayesian optimization,” Proc. IEEE, vol. 104, no. 1, pp. 148–175, 2016.

[2] T. A. N. Heirung, B. E. Ydstie, and B. Foss, “Dual adaptive model predictive control,” Automatica, vol. 80, pp. 340–348, 2017.

[3] R. M. Apap and I. E. Grossmann, “Models and computational strategies for multistage stochastic programming under endogenous and exogenous uncertainties,” Comput. Chem. Eng., vol. 103, pp. 233–274, 2017.

[4] N. H. Lappas and C. E. Gounaris, “Robust optimization for decision-making under endogenous uncertainty,” Comput. Chem. Eng., vol. 111, pp. 252–266, 2018.