(56e) Convergence Rate Analysis for Schemes of Relaxations in Decomposition Methods for Global Nonconvex Stochastic Optimization | AIChE

Authors 

Scott, J. - Presenter, Clemson University
Robertson, D. L., Georgia Institute of Technology
Cheng, P., Georgia Institute of Technology
This presentation describes a framework for analyzing the convergence rate of convex relaxations used in decomposition-based global optimization techniques for nonconvex stochastic programs and provides a detailed convergence analysis of two existing techniques. Stochastic programming is a powerful framework for modeling decision-making under uncertainty and is highly advantageous, e.g., in planning and process control, where much information is not known with certainty when decisions are made. However, stochastic programming models often involve a very large number of variables and constraints. For nonconvex models, this makes it prohibitively expensive to find global solutions, at least by direct application of standard global optimization algorithms. As a result, there is significant interest in decomposition-based techniques that can transform a stochastic programming problem into a sequence of smaller subproblems that can be solved more efficiently and largely in parallel, while still providing a rigorous guarantee of global optimality.

There are two existing methods that can achieve such a decomposition for general mixed-integer nonconvex stochastic programs. Both methods operate by applying spatial branch-and-bound (B&B) in the space of the first-stage variables only, and use decomposition-based techniques within each B&B node to obtain lower bounds on the problem projected into this space. Reference 1 computes a lower bound in each B&B node by first copying the first-stage variables for each scenario and then relaxing the non-anticipativity constraints. The resulting lower-bounding problem decomposes over the scenarios, leading to subproblems that can be solved globally and independently; the resulting optimal values are summed to obtain the final lower bound. Reference 2 presents a significantly different lower-bounding technique: linear cuts derived from Lagrangian subproblems (which are solved globally and independently for each scenario) are combined with Benders cuts to form a relaxed master problem, whose solution gives the required lower bound.
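The scenario-decomposition bound of Ref. 1 can be summarized compactly. In our notation (a simplified sketch in which only the scenario objectives depend on the first-stage decision), with first-stage variables $x \in X$ and scenario value functions $f_s$,

```latex
\min_{x \in X} \sum_{s=1}^{S} f_s(x) \;\geq\; \sum_{s=1}^{S} \min_{x_s \in X} f_s(x_s),
```

where each right-hand minimization is an independent scenario subproblem. The inequality holds because the copies $x_s$ are free to differ across scenarios, i.e., the non-anticipativity constraints $x_1 = x_2 = \cdots = x_S$ have been relaxed.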

In order to better understand and potentially improve these methods, it is of interest to investigate the accuracy of these two lower-bounding schemes. In spatial-B&B algorithms more generally, several recent developments have begun to provide a rigorous link between B&B performance and the strength of the convex relaxations used in the lower-bounding problem. Reference 3 first introduced the key concept of convergence order for schemes of relaxations of a function and established a second-order convergence property for several common relaxation techniques. In Ref. 4, this concept was extended to characterize the convergence order of B&B lower-bounding problems. Finally, in Ref. 5 and Ref. 4, it was shown that these notions of convergence order effectively determine whether or not a spatial-B&B algorithm will suffer from the cluster problem, which is the cause of exponential run-time scaling in practice.
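One common variant of the central definition from Ref. 3 (the pointwise form, restated informally in our notation) reads: a scheme of convex relaxations $\{f^{\mathrm{cv}}_X\}$ of a function $f$ has convergence order $\beta \geq 1$ if there exists $\tau > 0$ such that

```latex
\sup_{x \in X} \left( f(x) - f^{\mathrm{cv}}_X(x) \right) \;\leq\; \tau \, w(X)^{\beta}
\quad \text{for all intervals } X \subset Z,
```

where $w(X)$ denotes the width of $X$. The extension in Ref. 4 measures the analogous gap between the optimal value of the lower-bounding problem on $X$ and the true minimum of $f$ on $X$.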

This talk will present a new framework for assessing the performance of decomposition-based global stochastic optimization methods by viewing them in terms of schemes of relaxations and/or lower-bounding problems in the space of the first-stage variables and analyzing their convergence order. We will then use this framework to analyze the convergence rates of the decomposition methods of Ref. 1 and Ref. 2 in detail. Our key findings indicate that Ref. 1’s algorithm is first-order convergent, while second-order convergence is generally necessary to avoid the cluster problem, particularly at unconstrained minima. In contrast, Ref. 2’s method can achieve second-order convergence when optimal Lagrange multipliers are obtained. However, this requires smoothness of the first-stage optimal value function, which is only guaranteed to be lower semi-continuous under standard assumptions, and this points to a larger issue in the projection-based decomposition approach. Finally, we will present empirical results verifying our theoretical convergence rate results and demonstrating their practical importance.
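As a toy illustration of the first- vs. second-order distinction (our own example, not the algorithms of Refs. 1–2), the sketch below compares two lower-bounding schemes on shrinking intervals centered at the unconstrained minimizer of a simple nonconvex function: a natural interval extension (typically first-order) and an αBB convex underestimator with a fixed curvature parameter (second-order). The function f(x) = x⁴ − 3x² and the choice α = 3 are assumptions made for this demonstration.

```python
import numpy as np

def f(x):
    # simple nonconvex test function; unconstrained minimizer at x* = sqrt(1.5)
    return x**4 - 3 * x**2

xstar = np.sqrt(1.5)   # minimizer of f
fmin = f(xstar)        # true minimum, -2.25

def interval_lb(a, b):
    # natural interval extension of x^4 - 3x^2 on [a, b] with 0 < a:
    # x^4 lies in [a^4, b^4] and x^2 in [a^2, b^2]; ignoring the dependency
    # between the two terms gives the (first-order) lower bound a^4 - 3 b^2
    return a**4 - 3 * b**2

def alphabb_lb(a, b, alpha=3.0, n=200001):
    # alphaBB underestimator g(x) = f(x) + alpha (x - a)(x - b);
    # since (x - a)(x - b) <= 0 on [a, b], g <= f for any alpha >= 0, and
    # alpha = 3 >= -f''(x)/2 here makes g convex; a fine grid suffices to
    # minimize g for this 1-D illustration
    x = np.linspace(a, b, n)
    return np.min(f(x) + alpha * (x - a) * (x - b))

def gaps(w):
    # lower-bounding gaps on the width-w interval centered at the minimizer
    a, b = xstar - w / 2, xstar + w / 2
    return fmin - interval_lb(a, b), fmin - alphabb_lb(a, b)

g1_int, g1_abb = gaps(0.1)
g2_int, g2_abb = gaps(0.05)

# empirical convergence orders: log2 of the gap ratio when w is halved
order_interval = np.log2(g1_int / g2_int)   # close to 1: first-order scheme
order_alphabb = np.log2(g1_abb / g2_abb)    # close to 2: second-order scheme
print(order_interval, order_alphabb)
```

Halving the interval width roughly halves the interval-arithmetic gap but quarters the αBB gap, which is the behavior the convergence-order definition captures.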

Listed References:

1. Cao, Y. and Zavala, V. M., “A scalable global optimization algorithm for stochastic nonlinear programs,” Journal of Global Optimization, 75, pp. 393–416 (2019).

2. Li, C. and Grossmann, I. E., “A generalized Benders decomposition-based branch and cut algorithm for two-stage stochastic programs with nonconvex constraints and mixed-binary first and second stage variables,” Journal of Global Optimization, 75, pp. 247–272 (2019).

3. Bompadre, A. and Mitsos, A., “Convergence rate of McCormick relaxations,” Journal of Global Optimization, 52, pp. 1–28 (2012).

4. Kannan, R. and Barton, P. I., “The cluster problem in constrained global optimization,” Journal of Global Optimization, 69, pp. 629–676 (2017).

5. Wechsung, A., Schaber, S. D., and Barton, P. I., “The cluster problem revisited,” Journal of Global Optimization, 58, pp. 429–438 (2014).