(434c) Fast and Convergent Nonconvex Constrained Distributed Optimization and Its Application to Distributed MPC
AIChE Annual Meeting
2020
2020 Virtual AIChE Annual Meeting
Computing and Systems Technology Division
Predictive Control and Optimization
Tuesday, November 17, 2020 - 8:30am to 8:45am
- A two-level augmented Lagrangian architecture is used, which allows the distributed optimization to be performed in nested loops. In the inner iterations, a multi-block ADMM algorithm is carried out, which provably converges to the set of KKT points of an approximate problem in which the slack variables corresponding to the inter-subsystem couplings are relaxed.
- The distributed optimization steps in the inner iterations are solved only inexactly rather than exhaustively to high precision. The tolerances of this approximate distributed optimization are adaptively reset as ultimately decreasing sequences based on the coordinator information. Such tolerances can be set directly in nonlinear programming (NLP) solvers such as IPOPT [4].
- The convergence of nonconvex ADMM is improved by Anderson acceleration [5], which adopts a multi-secant, quasi-Newton-type scheme to reduce the number of inner iterations. Safeguarding techniques are incorporated so that the acceleration does not compromise the convergence guarantee.
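As a rough illustration of how these ingredients fit together, the sketch below treats one multi-block ADMM sweep as a fixed-point map and wraps it in safeguarded Anderson acceleration. The toy consensus problem, the closed-form block updates (standing in for inexact NLP solves), and the residual-decrease safeguard rule are all simplifying assumptions for illustration; this is not the ELLADA algorithm itself.

```python
import numpy as np

def admm_sweep(v, a, b, rho):
    """One multi-block ADMM sweep for the toy consensus problem
       min 1/2||x - a||^2 + 1/2||z - b||^2   s.t.  x - z = 0,
    written as a fixed-point map on v = (z, y).  The closed-form block
    updates stand in for inexact subproblem solves (e.g., via IPOPT)."""
    n = a.size
    z, y = v[:n], v[n:]
    x = (a + rho * z - y) / (1.0 + rho)        # x-block minimization
    z_new = (b + rho * x + y) / (1.0 + rho)    # z-block minimization
    y_new = y + rho * (x - z_new)              # dual (multiplier) update
    return np.concatenate([z_new, y_new])

def safeguarded_anderson(G, v0, m=3, tol=1e-8, max_iter=200):
    """Anderson acceleration of the fixed-point iteration v -> G(v), with
    a simple residual-decrease safeguard (an assumed rule, simpler than
    the safeguards of [5]): the accelerated candidate is accepted only if
    it shrinks the fixed-point residual; otherwise a plain step is taken."""
    v, g = v0.copy(), G(v0)
    Vs, Gs = [], []
    for _ in range(max_iter):
        f = g - v                               # fixed-point residual
        if np.linalg.norm(f) < tol:
            break
        Vs.append(v.copy()); Gs.append(g.copy())
        if len(Vs) > m + 1:                     # limited memory
            Vs.pop(0); Gs.pop(0)
        v_new = g                               # default: plain ADMM step
        if len(Vs) >= 2:
            # Multi-secant candidate from the recent residual history
            F = np.column_stack([Gs[i] - Vs[i] for i in range(len(Vs))])
            dF = F[:, 1:] - F[:, :-1]
            dG = np.column_stack(Gs)[:, 1:] - np.column_stack(Gs)[:, :-1]
            gamma, *_ = np.linalg.lstsq(dF, F[:, -1], rcond=None)
            v_aa = Gs[-1] - dG @ gamma
            if np.linalg.norm(G(v_aa) - v_aa) < np.linalg.norm(f):
                v_new = v_aa                    # safeguard passed
        v = v_new
        g = G(v)
    return v

# Toy run: consensus between a = 0 and b = 2; the solution is x = z = 1.
a, b, rho = np.array([0.0]), np.array([2.0]), 0.5
v = safeguarded_anderson(lambda v: admm_sweep(v, a, b, rho), np.zeros(2))
print(v)  # (z, y), with z close to 1.0
```

In this sketch the safeguard falls back to the unaccelerated sweep whenever the multi-secant step fails to reduce the residual, which is the mechanism that keeps the convergence argument of the underlying ADMM intact.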
An important application of nonconvex constrained distributed optimization is distributed nonlinear model predictive control (MPC) of large-scale interconnected process systems [6]. Using the directed and bipartite graph descriptions, we show how the distributed MPC problem can be converted into a standard form suitable for the implementation of the ELLADA algorithm. A case study on the benchmark quadruple tank process shows that distributed nonlinear MPC using ELLADA matches the control performance of centralized nonlinear MPC and outperforms decentralized and other distributed MPC algorithms, while also significantly improving the computational performance over the basic convergent algorithm.
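To make the coupling structure concrete, the sketch below writes the quadruple-tank dynamics as two subsystems that interact only through the shared pump inputs. The parameter values are illustrative placeholders, and the two-subsystem split is one plausible reading of the graph-based decomposition, not the exact formulation of the case study.

```python
import numpy as np

G_CM = 981.0  # gravitational acceleration, cm/s^2

# Illustrative parameters (assumed values, not those of the case study)
P = dict(A=28.0,                                  # tank cross-section, cm^2
         a1=0.071, a2=0.057, a3=0.071, a4=0.057,  # outlet areas, cm^2
         k1=3.33, k2=3.35,                        # pump gains, cm^3/(V*s)
         gamma1=0.7, gamma2=0.6)                  # valve split fractions

def outflow(a, h):
    # Torricelli outflow through an orifice of area a at liquid level h
    return a * np.sqrt(2.0 * G_CM * max(h, 0.0))

def subsystem_1(h1, h3, v1, v2, p=P):
    # Tanks 1 and 3: v1 is the local input; v2 enters through tank 3,
    # so it is the coupling (shared) variable with subsystem 2
    dh1 = (-outflow(p['a1'], h1) + outflow(p['a3'], h3)
           + p['gamma1'] * p['k1'] * v1) / p['A']
    dh3 = (-outflow(p['a3'], h3) + (1 - p['gamma2']) * p['k2'] * v2) / p['A']
    return dh1, dh3

def subsystem_2(h2, h4, v1, v2, p=P):
    # Tanks 2 and 4: v2 is the local input; v1 is the coupling variable
    dh2 = (-outflow(p['a2'], h2) + outflow(p['a4'], h4)
           + p['gamma2'] * p['k2'] * v2) / p['A']
    dh4 = (-outflow(p['a4'], h4) + (1 - p['gamma1']) * p['k1'] * v1) / p['A']
    return dh2, dh4

# Forward-Euler simulation of the interconnected system at constant inputs
h = np.array([12.0, 13.0, 5.0, 5.0])   # initial levels h1..h4, cm
v1, v2, dt = 3.0, 3.0, 0.1             # pump voltages (V) and step (s)
for _ in range(200):
    d1, d3 = subsystem_1(h[0], h[2], v1, v2)
    d2, d4 = subsystem_2(h[1], h[3], v1, v2)
    h = h + dt * np.array([d1, d2, d3, d4])
print(h)  # tank levels after 20 s of simulation
```

In a distributed MPC formulation along these lines, each subsystem would optimize over its own states and local input, with copies of the shared inputs reconciled through the coupling constraints that the distributed algorithm relaxes and coordinates.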
References
[1] Daoutidis, P., et al. (2019). Optimal decomposition of control and optimization problems by network structure: Concepts, methods and inspirations from biology. AIChE J., 65(10), e16708.
[2] Boyd, S., et al. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trend. Mach. Learn., 3(1), 1–122.
[3] Sun, K., & Sun, X. A. (2019). A two-level distributed algorithm for general constrained non-convex optimization with global convergence. arXiv preprint, arXiv:1902.07654.
[4] Wächter, A., & Biegler, L. T. (2006). On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Prog., 106(1), 25–57.
[5] Zhang, J., et al. (2018). Globally convergent type-I Anderson acceleration for non-smooth fixed-point iterations. arXiv preprint, arXiv:1808.03971.
[6] Christofides, P. D., et al. (2013). Distributed model predictive control: A tutorial review and future research directions. Comput. Chem. Eng., 51, 21–41.