
(10e) Learning to Accelerate the Global Solution of Quadratically-Constrained Quadratic Programs

Authors 

Deka, D., Los Alamos National Laboratory
Nagarajan, H., Los Alamos National Laboratory
Many real-world applications involve the repeated solution of the same nonconvex quadratically-constrained quadratic program (QCQP) with slightly varying model parameters. Examples include the pooling problem with varying input qualities and the cost-efficient operation of the power grid with varying loads and renewables. These hard optimization problems are typically solved using off-the-shelf global optimization solvers [1-4] that do not exploit the shared problem structure; the heuristics within these implementations are engineered to work well on average over a diverse set of instances and may be suboptimal for instances from a specific application. For example, recent work has shown that tailoring branching decisions can have a huge impact on the run times of branch-and-bound algorithms for mixed-integer linear programs (MILPs) [5-8]. In contrast with the MILP literature, there is hardly any work on using machine learning to accelerate the guaranteed global solution of nonconvex nonlinear programs [9-12].

We investigate machine learning approaches for accelerating the global minimization of nonconvex QCQPs using Alpine [4]. Alpine is a Julia-based global solver that computes lower bounds on the optimal value of QCQPs using piecewise convex relaxations. At each iteration, Alpine adaptively refines the partitions of the domains of (a subset of) the variables participating in nonconvex terms to produce a convergent sequence of lower bounds. It uses heuristics to place the partitioning points for each partitioned variable and continues to refine the variable partitions until the lower and upper bounds converge. The lower bounding problem within Alpine can be formulated as an MILP whose number of binary variables equals the total number of partitions across all partitioned variables. Since the complexity of this lower bounding problem can grow significantly with the number of iterations, the choice of partitioning points, especially in the initial iterations, can have a huge impact on Alpine's overall performance.
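To make the lower bounding step concrete, the following is a minimal, self-contained sketch of a piecewise McCormick relaxation for a single bilinear term w = x·y, the kind of piecewise convex relaxation the MILP above is built from. The big-M activation scheme, variable names, and the PuLP/CBC stack are our own illustrative choices, not Alpine's implementation:

```python
# Illustrative sketch (not Alpine's code): MILP relaxation of one bilinear
# term w = x*y, with the domain of x partitioned into K intervals.
import pulp

def piecewise_mccormick(points, yL, yU):
    """Build an MILP relaxing w = x*y, with x partitioned at `points`."""
    xL, xU = points[0], points[-1]
    K = len(points) - 1                      # number of partitions
    M = (xU - xL) * (yU - yL)                # big-M bound on envelope violation

    prob = pulp.LpProblem("piecewise_mccormick_lb", pulp.LpMinimize)
    x = pulp.LpVariable("x", xL, xU)
    y = pulp.LpVariable("y", yL, yU)
    w = pulp.LpVariable("w")                 # relaxes the product x*y
    z = [pulp.LpVariable(f"z{k}", cat="Binary") for k in range(K)]

    prob += pulp.lpSum(z) == 1               # x lies in exactly one interval
    prob += x >= pulp.lpSum(points[k] * z[k] for k in range(K))
    prob += x <= pulp.lpSum(points[k + 1] * z[k] for k in range(K))

    for k in range(K):                       # McCormick envelopes per interval,
        a, b = points[k], points[k + 1]      # switched off by big-M when z[k]=0
        prob += w >= a * y + yL * x - a * yL - M * (1 - z[k])
        prob += w >= b * y + yU * x - b * yU - M * (1 - z[k])
        prob += w <= b * y + yL * x - b * yL + M * (1 - z[k])
        prob += w <= a * y + yU * x - a * yU + M * (1 - z[k])
    return prob, x, y, w

# Lower-bound a toy objective x*y - 2*x over x in [0, 4], y in [0, 2],
# relaxed as w - 2*x; finer partitions of x tighten the bound.
prob, x, y, w = piecewise_mccormick(points=[0.0, 1.0, 2.5, 4.0], yL=0.0, yU=2.0)
prob += w - 2 * x                            # objective of the relaxation
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("lower bound:", pulp.value(prob.objective))
```

Note that each additional interior partitioning point adds one binary variable, matching the count described above: refining the partition tightens the bound at the cost of a harder MILP.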

We propose to learn how to optimally partition the domains of the variables participating in nonconvex terms within Alpine. Our goal is to choose the partitioning points so that the resulting piecewise convex relaxation-based lower bound after Alpine's first iteration is maximized. We formulate this "optimal partitioning" problem as a max-min problem, where the outer maximization chooses the partitioning points and the inner minimization solves the piecewise relaxation-based MILP lower bounding problem for a given partition. This reference partitioning problem is solved to local optimality by exploiting generalized gradient information of the value function of the inner minimization problem within a bundle solver for nonsmooth nonconvex optimization. We use this optimal partitioning strategy to generate data for training a machine learning model that predicts the optimal partitions for a given instance. Numerical experiments on randomly generated QCQPs and instances of the pooling problem demonstrate that using the reference optimal partitioning strategy at the first iteration can reduce Alpine's solution time by an order of magnitude on average. They also illustrate the efficacy of our machine learning models in learning this reference partitioning strategy on homogeneous instances of QCQPs. Finally, we explore formulations that let the optimizer allocate a different number of partitioning points per variable depending on their relative impact on the lower bound.
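In notation we introduce here (the abstract itself states no formulas), the optimal partitioning problem can be written schematically as

```latex
\max_{p \in \mathcal{P}} \; \phi(p),
\qquad
\phi(p) \;=\; \min_{(x,\,w,\,z) \,\in\, \mathcal{R}(p)} \hat{f}(x, w),
```

where p collects the partitioning points, \mathcal{P} is the set of admissible point placements within the variable domains, \mathcal{R}(p) is the feasible set of the piecewise relaxation MILP induced by p, and \hat{f} is the convexified objective. The value function \phi is in general nonsmooth and nonconvex in p, which is what motivates the generalized-gradient and bundle-solver machinery.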

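For intuition only, here is a heavily simplified stand-in for the outer maximization: a finite-difference ascent on the interior partition points of the single-term example above. The authors use generalized gradients within a bundle solver; the step rule, iteration counts, and the reuse of the hypothetical `piecewise_mccormick` builder from the earlier sketch are our own simplifications:

```python
# Hedged sketch of the outer maximization over partition points; a plain
# finite-difference ascent substitutes for the bundle solver used by the
# authors. Reuses `piecewise_mccormick` from the earlier sketch.
import pulp

def phi(points):
    """Value function: optimal value of the MILP lower bound for `points`."""
    prob, x, y, w = piecewise_mccormick(points, yL=0.0, yU=2.0)
    prob += w - 2 * x
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective)

def ascend(points, step=0.1, iters=20, h=1e-3):
    """Locally maximize phi over the interior partitioning points."""
    pts = list(points)
    for _ in range(iters):
        for i in range(1, len(pts) - 1):     # endpoints are the fixed bounds
            plus, minus = list(pts), list(pts)
            plus[i] += h
            minus[i] -= h
            grad = (phi(plus) - phi(minus)) / (2 * h)
            # Move the point uphill while keeping the partition ordered.
            pts[i] = min(max(pts[i] + step * grad, pts[i - 1] + h),
                         pts[i + 1] - h)
    return pts

best = ascend([0.0, 1.0, 2.5, 4.0])
print("locally optimal interior points:", best[1:-1], "bound:", phi(best))
```

In the actual method, one such locally optimal partition per instance would serve as a training label for the machine learning model.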
References
1. Misener, R., and Floudas, C. A. (2014). ANTIGONE: algorithms for continuous/integer global optimization of nonlinear equations. Journal of Global Optimization, 59(2), 503-526.
2. Sahinidis, N. V. (1996). BARON: A general purpose global optimization software package. Journal of Global Optimization, 8(2), 201-205.
3. Bestuzheva, K. et al. (2021). The SCIP Optimization Suite 8.0. arXiv preprint arXiv:2112.08872.
4. Nagarajan, H., Lu, M., Wang, S., Bent, R., and Sundar, K. (2019). An adaptive, multivariate partitioning algorithm for global optimization of nonconvex programs. Journal of Global Optimization, 74(4), 639-675.
5. Lodi, A., and Zarpellon, G. (2017). On learning and branching: a survey. TOP, 25(2), 207-236.
6. Gasse, M., Chételat, D., Ferroni, N., Charlin, L., and Lodi, A. (2019). Exact combinatorial optimization with graph convolutional neural networks. Advances in Neural Information Processing Systems, 32.
7. Nair, V., Bartunov, S., Gimeno, F., et al. (2020). Solving mixed integer programs using neural networks. arXiv preprint arXiv:2012.13349.
8. Bengio, Y., Lodi, A., and Prouvost, A. (2021). Machine learning for combinatorial optimization: a methodological tour d’horizon. European Journal of Operational Research, 290(2), 405-421.
9. Baltean-Lugojan, R., Bonami, P., Misener, R., and Tramontani, A. (2019). Scoring positive semidefinite cutting planes for quadratic optimization via trained neural networks. Optimization Online.
10. Lee, M., Yu, G., and Li, G. Y. (2019). Learning to branch: Accelerating resource allocation in wireless networks. IEEE Transactions on Vehicular Technology, 69(1), 958-970.
11. Shen, Y., Shi, Y., Zhang, J., and Letaief, K. B. (2019). LORM: Learning to optimize for resource management in wireless networks with few training samples. IEEE Transactions on Wireless Communications, 19(1), 665-679.
12. Ghaddar, B. et al. (2022). Learning for Spatial Branching: An Algorithm Selection Approach. Optimization Online.