(101g) Convergent Simulation-Optimization Using Machine-Learning, Branching, Bounding and Data-Driven Convex Underestimators
AIChE Annual Meeting
2020
2020 Virtual AIChE Annual Meeting
Computing and Systems Technology Division
Advances in Optimization I
Monday, November 16, 2020 - 9:30am to 9:45am
In previous work, we introduced the concept of "data-driven convex underestimators" and adapted the structure of spatial branch-and-bound to black-box optimization (Figure 1). These underestimators are built solely from data; by subdividing the search space and sampling adaptively, the approach yields upper and lower bounds on the black-box problem. By comparing lower bounds against the incumbent solution, the search space is pruned, and samples are collected only in promising regions until convergence criteria are met.
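To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of fitting a data-driven convex quadratic underestimator with cvxpy: a positive-semidefinite constraint on Q guarantees convexity, the constraints enforce underestimation at every sample, and the objective pulls the surrogate up toward the data. The function and example data are hypothetical.

```python
import cvxpy as cp
import numpy as np

def fit_convex_quadratic_underestimator(X, y):
    """Fit q(x) = x'Qx + b'x + c with Q positive semidefinite (so q is
    convex), subject to q(x_i) <= y_i at every sample (underestimation).
    The objective minimizes the total gap, tightening q against the data."""
    n, d = X.shape
    Q = cp.Variable((d, d), PSD=True)  # PSD constraint keeps q convex
    b = cp.Variable(d)
    c = cp.Variable()
    q_vals = cp.hstack([cp.quad_form(X[i], Q) + X[i] @ b + c for i in range(n)])
    problem = cp.Problem(cp.Minimize(cp.sum(y - q_vals)), [q_vals <= y])
    problem.solve()
    return Q.value, b.value, c.value

# Stand-in "black-box" function sampled in a box (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(30, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2
Q, b, c = fit_convex_quadratic_underestimator(X, y)
# q underestimates f at the sampled points; validity elsewhere is what the
# adaptive sampling and branch-and-bound machinery of the framework address.
```

Minimizing the fitted q over a subregion is a convex problem, and its optimum serves as the subregion's lower bound in the branch-and-bound tree.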
In this work, we present several advances in the data-driven spatial branch-and-bound framework that make the algorithm more robust, accurate, scalable, and efficient.

First, we present several techniques for handling both black-box and explicitly known constraints. When the constraint equations are known, the algorithm applies bound-tightening techniques at the root node and in each subregion, which helps prune infeasible regions and limits sampling requirements. When constraints are unknown (i.e., black-box), the algorithm instead employs alternative techniques, including feasibility classification models and customized feasibility-based branching rules, that speed up the search and pruning procedure (see the classifier sketch below).

Next, we have enhanced our library of underestimators, previously limited to convex quadratics, to include bilinear and/or polynomial terms while maintaining convexity. When building underestimators, the algorithm can use data from simulations of differing accuracy (multi-fidelity modeling) and can learn from the correlation between high- and low-fidelity data. Moreover, we have expanded our library of branching rules using techniques from the machine-learning literature [4] and will present the effects of different branching heuristics on the algorithm's performance. Lastly, to improve scalability with increasing dimensionality, we have incorporated techniques for decoupling the input space as well as dimensionality-projection methods for the underestimating functions. All of the algorithm's novel features, together with a detailed quantification of its performance, will be presented on known benchmark libraries of nonlinear programming problems [5].
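As an illustration of the feasibility-classification idea for black-box constraints (a hypothetical sketch under assumed interfaces, not the implementation presented in the talk), a classifier trained on sampled points can score candidate subregions by predicted feasibility, so sampling and branching focus on promising boxes:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(200, 2))
feasible = X[:, 0] ** 2 + X[:, 1] ** 2 <= 2.0  # stand-in black-box constraint

# Probabilistic classifier of feasible vs. infeasible samples
clf = SVC(kernel="rbf", probability=True).fit(X, feasible)

def region_feasibility(lb, ub, n_probe=64):
    """Estimate the fraction of the box [lb, ub] the classifier deems
    feasible; subregions scoring near zero become candidates for pruning
    or deprioritization in the branch-and-bound search."""
    probes = rng.uniform(lb, ub, size=(n_probe, len(lb)))
    return clf.predict_proba(probes)[:, 1].mean()

print(region_feasibility(np.array([-2.0, -2.0]), np.array([0.0, 0.0])))
```

The same score could also drive a feasibility-based branching rule, splitting boxes along the dimension that best separates predicted feasible from infeasible probes; the specific rules used in this work are described in the talk.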
References:
1. Amaran S, Sahinidis NV, Sharda B, Bury SJ. Simulation optimization: a review of algorithms and applications. 4OR. 2014;12(4):301-333.
2. Rios LM, Sahinidis NV. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization. 2013;56(3):1247-1293.
3. Boukouvala F, Misener R, Floudas CA. Global optimization advances in Mixed-Integer Nonlinear Programming, MINLP, and Constrained Derivative-Free Optimization, CDFO. European Journal of Operational Research. 2016;252(3):701-727.
4. Kotsiantis SB. Decision trees: a recent overview. Artificial Intelligence Review. 2013;39(4):261-283.
5. http://www.minlp.com/nlp-and-minlp-test-problems