(101g) Convergent Simulation-Optimization Using Machine-Learning, Branching, Bounding and Data-Driven Convex Underestimators

Authors 

Zhai, J. - Presenter, Georgia Institute of Technology
Shirpurkar, S., Georgia Institute of Technology
The ability to solve optimization formulations with embedded simulations is of high interest in the science and engineering communities, since it bridges the gap between high-fidelity, multiscale knowledge and quantitative decision-making [1]. Simulation-based optimization methods often treat the simulation as an input-output, data-generating "black box", either because an equation-based optimization approach is intractable or because the simulation is proprietary [2,3]. Black-box optimization methods, albeit practical, are often viewed with skepticism due to uncertainty in their performance and their lack of convergence guarantees, which deterministic derivative-based optimization solvers would provide. Existing global black-box optimization solvers stop when a heuristic sampling or CPU limit has been reached; therefore, they neither guarantee optimality nor provide information about the quality of the incumbent solution [2,3].

In previous work, we introduced the concept of "data-driven convex underestimators" and adapted the structure of spatial branch-and-bound to black-box optimization (Figure 1). These underestimators are built solely from data; by subdividing the search space and sampling adaptively, the approach yields upper and lower bounds on the black-box problem. By comparing lower bounds with the incumbent solution, the search space is pruned, and samples are collected only in promising subregions until convergence criteria are met.
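To make the idea concrete, below is a minimal Python sketch of one way to fit a data-driven convex quadratic underestimator and minimize it over a box to obtain a subregion lower bound. The function names are hypothetical and this is not the authors' implementation; it assumes cvxpy and scipy, and it enforces underestimation only at the sampled points, which is the data-driven relaxation described above.

```python
import numpy as np
import cvxpy as cp
from scipy.optimize import minimize


def fit_convex_quadratic_underestimator(X, y):
    """Fit q(x) = x'Qx + b'x + c with Q PSD (so q is convex), subject to
    q(x_i) <= y_i at every sample; minimize the total gap for tightness."""
    n, d = X.shape
    Q = cp.Variable((d, d), PSD=True)   # PSD Hessian keeps the surrogate convex
    b = cp.Variable(d)
    c = cp.Variable()
    preds = cp.hstack([cp.quad_form(X[i], Q) + X[i] @ b + c for i in range(n)])
    problem = cp.Problem(cp.Minimize(cp.sum(y - preds)), [preds <= y])
    problem.solve()
    return Q.value, b.value, c.value


def box_lower_bound(Q, b, c, lb, ub):
    """Minimize the fitted convex quadratic over the box [lb, ub]. Since the
    surrogate is convex, a local solver recovers its global minimum there."""
    q = lambda x: x @ Q @ x + b @ x + c
    res = minimize(q, 0.5 * (lb + ub), bounds=list(zip(lb, ub)))
    return res.fun, res.x
```

Because the Hessian variable Q is constrained to be positive semidefinite, the fitted surrogate is convex, so the box minimization is a well-posed global problem and its value can be compared directly against the incumbent.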

In this work, we present several advances in the data-driven spatial branch-and-bound framework that make the algorithm more robust, accurate, scalable, and efficient. First, we present several techniques for handling both black-box and explicitly known constraints. When the constraint equations are known, the algorithm applies bound-tightening techniques at the root node and in each subregion, which helps prune infeasible regions and limits sampling requirements. When the constraints are unknown (i.e., black-box), the algorithm employs several alternative techniques, including feasibility-classification models and customized feasibility-based branching rules, that speed up the search and pruning procedure. Second, we have enhanced our library of underestimators (previously limited to convex quadratics) to include bilinear and/or polynomial terms while maintaining convexity. When building underestimators, the algorithm can use data from simulations of different accuracy (multi-fidelity modeling) and can learn from the correlation between high- and low-fidelity data. Third, we have enhanced our library of branching rules using techniques from the machine-learning literature [4] and will present the effects of different branching heuristics on the performance of the algorithm. Finally, to improve scalability with increasing dimensionality, we have incorporated techniques for decoupling the input space as well as dimensionality-projection methods for the underestimating functions. All of the novel features of the algorithm, along with a detailed quantification of its performance, will be presented using known benchmark libraries of nonlinear programming problems [5].
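As an illustration of how these ingredients interact, the following sketch (hypothetical names; unconstrained, single-fidelity, box-bounded minimization only, so none of the constraint-handling or multi-fidelity machinery above) wraps the two helpers sketched earlier in a best-first spatial branch-and-bound loop: each node is sampled, the incumbent upper bound is updated from the best sample, a node is fathomed when its data-driven lower bound cannot improve the incumbent, and surviving nodes are bisected along their widest variable.

```python
import heapq
import numpy as np


def data_driven_sbb(f, lb0, ub0, n_samples=20, tol=1e-3, max_nodes=100, seed=0):
    """Best-first branch-and-bound over boxes for a black-box objective f;
    relies on the fitting and bounding helpers sketched above."""
    rng = np.random.default_rng(seed)
    d = lb0.size
    incumbent, x_best = np.inf, None
    heap = [(-np.inf, 0, lb0, ub0)]        # (node lower bound, unique id, box)
    node_id = 1
    while heap and node_id < max_nodes:
        parent_bound, _, lb, ub = heapq.heappop(heap)
        if parent_bound > incumbent - tol:
            continue                        # prune: node cannot improve incumbent
        X = rng.uniform(lb, ub, size=(n_samples, d))  # stand-in for adaptive sampling
        y = np.array([f(x) for x in X])
        i = int(np.argmin(y))
        if y[i] < incumbent:                # best sample updates the upper bound
            incumbent, x_best = y[i], X[i]
        Q, b, c = fit_convex_quadratic_underestimator(X, y)
        bound, _ = box_lower_bound(Q, b, c, lb, ub)
        if bound > incumbent - tol:
            continue                        # fathom by bound comparison
        j = int(np.argmax(ub - lb))         # branch on the widest coordinate
        mid = 0.5 * (lb[j] + ub[j])
        for child in range(2):
            nlb, nub = lb.copy(), ub.copy()
            (nub if child == 0 else nlb)[j] = mid
            heapq.heappush(heap, (bound, node_id, nlb, nub))
            node_id += 1
    return incumbent, x_best
```

In this simplified form the pruning is heuristic, since the surrogate underestimates the black box only at the sampled points; the constraint handling, multi-fidelity learning, and learned branching rules described above are what the full framework adds on top of this skeleton.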

References:

  1. Amaran S, Sahinidis NV, Sharda B, Bury SJ. Simulation optimization: a review of algorithms and applications. 4OR. 2014;12(4):301-333.
  2. Rios LM, Sahinidis NV. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization. 2013;56(3):1247-1293.
  3. Boukouvala F, Misener R, Floudas CA. Global optimization advances in Mixed-Integer Nonlinear Programming, MINLP, and Constrained Derivative-Free Optimization, CDFO. European Journal of Operational Research. 2016;252(3):701-727.
  4. Kotsiantis SB. Decision trees: a recent overview. Artificial Intelligence Review. 2013;39(4):261-283.
  5. NLP and MINLP test problem library. http://www.minlp.com/nlp-and-minlp-test-problems