(441b) A Global Optimization Algorithm for Nonconvex Chance-Constrained Programs with Continuous Random Variables | AIChE


Authors 

Shao, Y. - Presenter, Georgia Institute of Technology
Scott, J., Georgia Institute of Technology
A new algorithm is presented for globally solving nonconvex stochastic optimization problems subject to general chance constraints. Stochastic optimization is a powerful framework for determining optimal design and operational decisions under uncertainty, and is used extensively in manufacturing, chemical processing, and power distribution, among many other fields. In this framework, chance constraints are constraints that are required to hold with a specified (typically high) probability, but not with certainty. Chance constraints are often essential for obtaining sensible optimization results for problems with large uncertainties, specifically in cases where it would be impossible or inordinately costly to ensure that a constraint is satisfied in every possible realization of the uncertainty, no matter how unlikely (e.g., committing enough power generation units to satisfy demand under every possible realization of random demands, renewable power outputs, contingency scenarios, etc.).

However, effective methods for rigorously solving chance-constrained programs are currently limited to special cases where equivalent deterministic constraints can be derived, such as problems with independent linear chance constraints affected by random variables with simple (e.g., Gaussian) distributions. A number of other approaches derive and solve deterministic restrictions of the original problem by replacing the chance constraints with more conservative deterministic constraints, often obtained from probabilistic inequalities such as the Cantelli-Chebyshev inequality. However, such constraints are often significantly more restrictive than the original chance constraints, and no general-purpose methods are available for refining these approximations. Finally, chance constraints are commonly approximated using sample-based methods, but achieving high accuracy often requires a large number of samples. Moreover, without further approximations, sample-based approximations of chance constraints are discontinuous with respect to the optimization variables, which can greatly hinder efficient optimization.
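As a minimal illustration of these last two points (a toy example, not the method proposed in this talk), consider a single linear chance constraint P(x + ξ ≤ 2) ≥ 0.95 with ξ a standard normal random variable. The exact satisfaction probability is smooth in x and yields a closed-form deterministic equivalent, whereas a sample-average estimate is an average of indicator functions and is therefore a step function of x:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
xi = rng.standard_normal(200)  # 200 samples of the standard normal variable xi

def exact_prob(x):
    # P(x + xi <= 2) = Phi(2 - x): smooth in x, so the chance constraint
    # P(x + xi <= 2) >= 0.95 has the deterministic equivalent
    # x <= 2 - Phi^{-1}(0.95).
    return 0.5 * (1.0 + math.erf((2.0 - x) / math.sqrt(2.0)))

def sampled_prob(x):
    # Sample-average approximation: an average of indicator functions,
    # hence piecewise constant in x, jumping in increments of 1/200.
    return float(np.mean(x + xi <= 2.0))
```

Because the sampled estimate only takes values that are multiples of 1/N, small changes in the decision variable either leave it unchanged or make it jump discontinuously, which is what hinders derivative-based optimization unless the approximation is further smoothed.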

In this talk, we propose a new branch-and-bound (B&B) algorithm for solving nonconvex chance-constrained stochastic optimization problems to epsilon-global optimality. Specifically, we consider chance constraints that may be nonconvex with respect to both the decision variables and the random variables, and that may depend on random variables described by a very general class of probability density functions. We assume that the objective function and any other constraints are written as expected values of other potentially nonconvex functions over continuous random variables. However, we do not consider robust constraints or recourse decisions in our formulation. The core of our B&B algorithm is a novel method for generating tractable convex and concave relaxations of nonconvex chance constraints by an extension of the Jensen-McCormick relaxation technique for expected-value functions presented in Shao and Scott, AIChE J., 2018 (10.1002/aic.16064). In contrast to other relaxations and restrictions of chance constraints in common use, our relaxations can be made to converge to the original constraints by partitioning the uncertainty space, which enables finite convergence of the overall B&B algorithm. Moreover, our relaxations provide rigorous, deterministic bounds on the original chance constraints (as well as on the objective and expected-value constraints), which enables regions of the decision space to be fathomed with certainty, even when using a coarse partition of the uncertainty space. Thus, the uncertainty partition can be refined dynamically as the B&B search proceeds, leading to significant computational savings relative to sample-based approaches, which must use a fixed number of samples throughout the search. The advantages of our new B&B approach will be demonstrated through case studies in chemical reactor design and renewable energy systems.
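To give a flavor of how partition-based bounds on expected values converge under refinement (a simplified sketch using plain Jensen bounds, not the Jensen-McCormick relaxations of the talk), Jensen's inequality applied cell-by-cell to a partition of the uncertainty space gives a deterministic lower bound on E[f(ξ)] for convex f, and the bound tightens monotonically as the partition is refined; for nonconvex f, a convex relaxation of f on each cell plays the same role:

```python
import numpy as np

def jensen_lower_bound(f, n_cells):
    # Partition the support [0, 1] of a uniform random variable into n_cells
    # equal cells. Jensen's inequality on each cell gives, for convex f,
    #   E[f(xi)] >= sum_k P(A_k) * f(E[xi | A_k]),
    # and the bound tightens as the partition is refined.
    edges = np.linspace(0.0, 1.0, n_cells + 1)
    cell_means = 0.5 * (edges[:-1] + edges[1:])  # conditional means E[xi | A_k]
    return float(np.mean(f(cell_means)))         # each cell has probability 1/n_cells

f = lambda x: x**2  # convex test function; E[xi^2] = 1/3 exactly for xi ~ U[0, 1]
bounds = [jensen_lower_bound(f, n) for n in (1, 2, 4, 8)]
# bounds increase monotonically toward 1/3 as the partition is refined
```

Because such bounds are valid deterministically at every partition level, even a coarse partition can certify that a region of the decision space contains no feasible or optimal points, which is what permits fathoming before the partition is refined.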