(434c) Fast and Convergent Nonconvex Constrained Distributed Optimization and Its Application to Distributed MPC

Authors 

Tang, W. - Presenter, University of Minnesota
Daoutidis, P., University of Minnesota-Twin Cities
Large-scale optimization problems can often benefit from a distributed solution approach. In such an approach, the entire system is first decomposed into its constituent subsystems [1], each handled by a corresponding agent, and information is exchanged among the distributed agents, or between the agents and a central coordinator, to seek a solution to the monolithic problem [2]. Over the past decade there has been extensive research on distributed optimization; however, a distributed optimization algorithm that is computationally efficient, globally convergent, and able to handle nonconvex constraints and general inter-subsystem interactions remains an open problem. In a recent paper [3], a two-level augmented Lagrangian-based algorithm, which can be viewed as a modification of the alternating direction method of multipliers (ADMM), was proposed to guarantee the convergence of nonconvex constrained distributed optimization. In this work, we further develop the results of [3] into a new algorithm, called ELLADA (extra-layer augmented Lagrangian-based accelerated distributed approximate optimization), which achieves both global convergence to the set of Karush-Kuhn-Tucker (KKT) points of the monolithic problem and significantly improved computational efficiency. Specifically, the ELLADA algorithm has the following three important features:

  1. An augmented Lagrangian with a two-level architecture is used, which allows the distributed optimization to be performed in nested loops. In the inner iterations, a multi-block ADMM algorithm is carried out, which provably converges to the set of KKT points of an approximate problem in which the slack variables corresponding to the inter-subsystem couplings are relaxed (a minimal sketch of this nested structure is given after this list).
  2. The distributed optimization steps throughout the inner iterations are solved only inexactly rather than exhaustively to high precision. The tolerances of this approximate distributed optimization are adaptively reset as ultimately decreasing sequences based on the coordinator information. Such tolerances can be manipulated through the termination options of nonlinear programming (NLP) solvers such as IPOPT [4] (see the first sketch below).
  3. The convergence of nonconvex ADMM is accelerated by Anderson acceleration [5], which adopts a multi-secant, pseudo-Newton scheme to reduce the number of inner iterations. Safeguarding techniques are incorporated so that the acceleration does not compromise the convergence guarantee (see the second sketch below).
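
A minimal Python sketch of the nested structure of features 1 and 2 is given below, on a toy two-agent consensus problem min f1(x1) + f2(x2) subject to A1 x1 + A2 x2 = b. All problem data, penalty updates, and tolerance schedules here are illustrative assumptions, not the algorithm's exact implementation; the generic solver's tol argument stands in for loosening an NLP solver such as IPOPT (actual option names differ).

    import numpy as np
    from scipy.optimize import minimize

    # Toy two-agent problem: min f1(x1) + f2(x2)  s.t.  A1 x1 + A2 x2 = b.
    # The objectives are nonconvex in general; simple quadratics are used
    # here only so the sketch runs as-is.
    f1 = lambda x: np.sum((x - 1.0) ** 2)
    f2 = lambda x: np.sum((x + 1.0) ** 2)
    A1, A2, b = np.eye(2), -np.eye(2), np.zeros(2)

    x1, x2 = np.zeros(2), np.zeros(2)
    z = np.zeros(2)        # slack relaxing the inter-subsystem coupling
    lam = np.zeros(2)      # inner (ADMM) multiplier
    mu = np.zeros(2)       # outer multiplier driving z -> 0
    rho, beta = 1.0, 1.0   # penalty parameters
    eps = 1e-2             # inner solve tolerance, tightened adaptively

    for outer in range(20):
        # Inner loop: multi-block ADMM on the relaxed (approximate) problem.
        for inner in range(50):
            resid = lambda u, v: A1 @ u + A2 @ v + z - b
            # Each block subproblem is solved only inexactly (tol=eps).
            aug1 = lambda u: (f1(u) + lam @ resid(u, x2)
                              + 0.5 * rho * np.sum(resid(u, x2) ** 2))
            x1 = minimize(aug1, x1, tol=eps).x
            aug2 = lambda v: (f2(v) + lam @ resid(x1, v)
                              + 0.5 * rho * np.sum(resid(x1, v) ** 2))
            x2 = minimize(aug2, x2, tol=eps).x
            # The slack update has a closed form (quadratic in z).
            c = A1 @ x1 + A2 @ x2 - b
            z = -(lam + rho * c + mu) / (rho + beta)
            lam = lam + rho * (c + z)
            if np.linalg.norm(c + z) < eps:
                break
        # Outer loop: update the outer multiplier, increase the penalty on
        # the slack, and tighten the inner tolerance (a decreasing sequence).
        mu = mu + beta * z
        beta *= 2.0
        eps = max(0.5 * eps, 1e-8)
        if np.linalg.norm(z) < 1e-6:
            break

    print("x1 =", x1, " x2 =", x2, " slack =", z)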

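The safeguarded acceleration of feature 3 can be sketched generically as follows, where g stands for one pass of the inner fixed-point (ADMM) updates. The window size, the decrease ratio tau, and the simple residual-decrease safeguard are illustrative assumptions; the sketch uses a basic (type-II-style) multi-secant update for brevity, whereas [5] develops the type-I variant.

    import numpy as np

    def anderson_safeguarded(g, x0, m=5, tau=0.9, tol=1e-8, max_iter=200):
        # Anderson acceleration of the fixed-point iteration x <- g(x),
        # with a residual-decrease safeguard: the accelerated candidate is
        # accepted only if it sufficiently shrinks the residual; otherwise
        # the plain (convergence-guaranteed) step is taken instead.
        x = x0.astype(float)
        r = g(x) - x                       # fixed-point residual
        X, R = [x.copy()], [r.copy()]      # histories of iterates/residuals
        for _ in range(max_iter):
            if np.linalg.norm(r) <= tol:
                break
            if len(X) >= 2:
                # Multi-secant (pseudo-Newton) step: mix past iterates with
                # coefficients from a small least-squares problem.
                dX = np.column_stack([X[i+1] - X[i] for i in range(len(X)-1)])
                dR = np.column_stack([R[i+1] - R[i] for i in range(len(R)-1)])
                gamma = np.linalg.lstsq(dR, r, rcond=None)[0]
                x_cand = x + r - (dX + dR) @ gamma
            else:
                x_cand = g(x)              # not enough history yet
            r_cand = g(x_cand) - x_cand
            if np.linalg.norm(r_cand) <= tau * np.linalg.norm(r):
                x, r = x_cand, r_cand      # safeguard passed: accept
            else:
                x = g(x)                   # safeguard failed: plain step
                r = g(x) - x
            X.append(x.copy()); R.append(r.copy())
            if len(X) > m + 1:             # keep a window of m secant pairs
                X.pop(0); R.pop(0)
        return x

    # Example: accelerate a simple contraction mapping (illustrative only).
    x_star = anderson_safeguarded(lambda x: np.cos(x), np.ones(3))
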
An important application of nonconvex constrained distributed optimization is distributed nonlinear model predictive control (MPC) of large-scale interconnected process systems [6]. Using directed and bipartite graph descriptions of the system, we show how the distributed MPC problem can be converted into a standard form suitable for the implementation of the ELLADA algorithm (a sketch of such a conversion follows this paragraph). A case study on the benchmark quadruple-tank process shows that distributed nonlinear MPC using ELLADA matches the control performance of centralized nonlinear MPC, outperforms decentralized MPC and other distributed MPC algorithms, and significantly improves the computational performance over the basic convergent algorithm of [3].
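
For concreteness, the sketch below writes down the well-known quadruple-tank dynamics and one possible two-agent decomposition; the parameter values and the choice of decomposition are illustrative assumptions rather than those of the case study.

    import numpy as np

    # Quadruple-tank benchmark dynamics (Johansson's model); parameter
    # values here are illustrative, not those used in the case study.
    g = 981.0                                    # gravity, cm/s^2
    a = np.array([0.071, 0.057, 0.071, 0.057])   # outlet hole areas, cm^2
    A = np.array([28.0, 32.0, 28.0, 32.0])       # tank cross-sections, cm^2
    k1, k2 = 3.33, 3.35                          # pump gains
    g1, g2 = 0.70, 0.60                          # valve flow splits

    def f(h, v):
        # h: tank levels (h1..h4); v: pump voltages (v1, v2).
        v1, v2 = v
        s = np.sqrt(2.0 * g * h)
        return np.array([
            -a[0]/A[0]*s[0] + a[2]/A[0]*s[2] + g1*k1*v1/A[0],
            -a[1]/A[1]*s[1] + a[3]/A[1]*s[3] + g2*k2*v2/A[1],
            -a[2]/A[2]*s[2] + (1.0 - g2)*k2*v2/A[2],
            -a[3]/A[3]*s[3] + (1.0 - g1)*k1*v1/A[3],
        ])

    # One possible decomposition (hypothetical): agent 1 owns tanks 1 and 4
    # together with pump v1 (which feeds only tanks 1 and 4); agent 2 owns
    # tanks 2 and 3 together with pump v2. The only cross-agent terms are
    # then h3 in tank 1's equation and h4 in tank 2's equation, so each
    # agent keeps a local copy of the foreign level trajectory, and the
    # consistency conditions  h3_copy = h3  and  h4_copy = h4  over the
    # prediction horizon form the linear coupling A1 x1 + A2 x2 = b that
    # ELLADA handles through its coordinator.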

References

[1] Daoutidis, P., et al. (2019). Optimal decomposition of control and optimization problems by network structure: Concepts, methods and inspirations from biology. AIChE J., 65(10), e16708.

[2] Boyd, S., et al. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn., 3(1), 1–122.

[3] Sun, K., & Sun, X. A. (2019). A two-level distributed algorithm for general constrained non-convex optimization with global convergence. arXiv preprint, arXiv:1902.07654.

[4] Wächter, A., & Biegler, L. T. (2006). On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program., 106(1), 25–57.

[5] Zhang, J., et al. (2018). Globally convergent type-I Anderson acceleration for non-smooth fixed-point iterations. arXiv preprint, arXiv:1808.03971.

[6] Christofides, P. D., et al. (2013). Distributed model predictive control: A tutorial review and future research directions. Comput. Chem. Eng., 51, 21–41.