(508e) Experiments with Distributed Optimization for Non-Convex Optimization Problems

Authors 

Kameswaran, S. - Presenter, ExxonMobil Research and Engineering Company
Harwood, S. - Presenter, ExxonMobil Research and Engineering Company

We focus on distributed optimization for a class of non-convex optimization problems that are relevant to the process systems engineering community. Distributed optimization refers to solving a centralized optimization problem in a distributed fashion, in which a collection of “agents” cooperatively and rigorously optimize the centralized problem. It is a branch of optimization with links to parallel computing, problem decomposition, decentralized computing, and agent-based computing. We focus on the following “almost” separable optimization problem, in which the functions f_i in the objective can be non-convex but are assumed to be sufficiently smooth.

min_{x_1, ..., x_N} sum_{i=1}^{N} f_i(x_i)

s.t. sum_{i=1}^{N} A_i x_i = b
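
For reference, the augmented Lagrangian of this problem, whose penalty parameter rho is the “Augmented Lagrangian parameter” referred to below, takes the standard form

L_rho(x_1, ..., x_N, y) = sum_{i=1}^{N} f_i(x_i) + y^T ( sum_{i=1}^{N} A_i x_i - b ) + (rho/2) || sum_{i=1}^{N} A_i x_i - b ||^2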

We will make the case for why this structure is highly relevant to process optimization and why it is important to solve such problems with distributed optimization techniques. While most activity in the broader optimization community has focused on convex problems, we will discuss our experience with solving distributed non-convex optimization problems using the alternating direction method of multipliers (ADMM) [1, 2]. We will present examples and results from pooling/blending optimization, stochastic programming, and a series of two-variable test problems that highlight both the difficulties associated with ADMM (chiefly the choice of the augmented Lagrangian penalty parameter) and the potential advantages of the distributed approach. For convex problems, ADMM typically exhibits sublinear to linear global convergence rates. Motivated by [3], we present an algorithmic framework for distributed optimization with good local convergence rates. We also discuss a specific instance of this framework that employs an L1 exact penalty function as a measure of progress and highlight how it differs from classical augmented Lagrangian-based methods.
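To make the discussion concrete, the following is a minimal sketch (not the authors' implementation) of an exchange-form ADMM iteration for the problem structure above, in the spirit of [2]. The subproblem solver (scipy.optimize.minimize), the two toy objectives, and the value of the penalty parameter rho are illustrative assumptions; for non-convex f_i, convergence is not guaranteed and is sensitive to rho, which is precisely the difficulty referred to above.

```python
import numpy as np
from scipy.optimize import minimize

def admm_exchange(f_list, A_list, b, rho=1.0, iters=200, tol=1e-6):
    """Illustrative exchange-form ADMM for  min sum_i f_i(x_i)  s.t.  sum_i A_i x_i = b.

    Each agent i keeps its own variable x_i and solves a local subproblem;
    the agents are coupled only through the average residual of the linear
    coupling constraint. Convergence is not guaranteed for non-convex f_i
    and depends strongly on the penalty parameter rho.
    """
    N = len(f_list)
    x = [np.zeros(A.shape[1]) for A in A_list]   # local primal variables
    u = np.zeros(b.shape[0])                     # scaled dual variable

    for _ in range(iters):
        Ax = [A_list[i] @ x[i] for i in range(N)]
        r_bar = (sum(Ax) - b) / N                # average coupling residual

        # Local subproblems: independent of one another, so they could
        # be solved in parallel by the N agents.
        for i in range(N):
            def local_obj(xi, i=i, Axi=Ax[i]):
                r = A_list[i] @ xi - Axi + r_bar + u
                return f_list[i](xi) + 0.5 * rho * (r @ r)
            x[i] = minimize(local_obj, x[i]).x   # local (possibly non-convex) NLP solve

        r_new = (sum(A_list[i] @ x[i] for i in range(N)) - b) / N
        u = u + r_new                            # dual update

        if np.linalg.norm(N * r_new) < tol:      # primal feasibility check
            break
    return x

if __name__ == "__main__":
    # Hypothetical two-agent instance with non-convex local objectives,
    # coupled by the single constraint x1 + x2 = 2.
    f1 = lambda x: (x[0] ** 2 - 1.0) ** 2        # double-well, non-convex
    f2 = lambda x: np.cos(x[0]) + 0.1 * x[0] ** 2
    A1 = np.array([[1.0]])
    A2 = np.array([[1.0]])
    b = np.array([2.0])
    sol = admm_exchange([f1, f2], [A1, A2], b, rho=5.0)
    print("x1 =", sol[0], " x2 =", sol[1], " x1 + x2 =", sol[0] + sol[1])
```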

References:

[1] J. Eckstein and D. P. Bertsekas, “On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators”, Mathematical Programming, Vol. 55, pp. 293–318, 1992

[2] S. Boyd, N. Parikh, E. Chu, B. Peleato and J. Eckstein, “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers”, Foundations and Trends in Machine Learning, Vol. 3, No. 1, pp. 1–122, 2010

[3] B. Houska, J. Frasch, and M. Diehl, “An Augmented Lagrangian Based Algorithm for Distributed Non-convex Optimization”, http://www.optimization-online.org/DB_HTML/2014/07/4427.html