(251b) GPU Accelerated Approximation Algorithm for Multi-Parametric Linear Programming

Authors 

Chen, H. - Presenter, Purdue University
Constante-Flores, G., Purdue University
Constrained optimization problems are widely found in the fields of economics, engineering and manufacturing, transportation, and health care. In practice, some of those problems need to be solved efficiently, motivated by their applications in the real world. For example, decision-making in process systems engineering frequently requires scheduling and real-time optimization decisions to be determined on a daily, or even hourly basis.

In many cases, optimization problems built for the same physical system are solved repeatedly on an instance-by-instance basis, driven by factors such as market fluctuations. This variability and uncertainty are typically represented by changing parameters. Parametric programming expresses the optimal solution as an explicit function of the parameters, allowing efficient on-line optimization by direct function evaluation. Thus, multi-parametric programming has been extensively explored in model predictive control (MPC) [1-5], where decisions are made every few minutes or seconds. However, a significant limitation of multi-parametric linear programming is that its computational complexity increases exponentially with the problem size.
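
To make the explicit-function idea concrete, a multi-parametric LP can be stated as below; the specific form, with the right-hand side affine in the parameter vector, is a standard formulation used here only for illustration.

    \begin{align*}
    z^{*}(\theta) \;=\; \min_{x}\ & c^{\top} x \\
    \text{s.t.}\ & A x = b + F\theta, \qquad x \ge 0, \qquad \theta \in \Theta .
    \end{align*}

The optimizer x*(θ) is then a piecewise-affine function of the parameters, x*(θ) = K_i θ + k_i for θ in the critical region CR_i, so on-line optimization reduces to locating the critical region containing θ and evaluating the corresponding affine law; the number of critical regions, however, can grow exponentially with the problem size.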

Over recent years, machine learning has achieved substantial success in uncovering patterns within data. This has spurred significant interest in leveraging machine learning to efficiently predict near-optimal yet feasible (or close-to-feasible) solutions [6-9]. However, existing methods mostly rely on supervised learning over large sets of previously solved instances, which places a heavy burden on computational resources. Some pioneering works explore self-supervised methods, either through constraint completion and correction [10] or the augmented Lagrangian method [11].

This work proposes a novel self-supervised, hard-constrained learning method that incorporates the proximal gradient descent algorithm into a deep learning framework. LP feasibility is guaranteed by alternating two types of projection layers in the neural network: the first type enforces the linear equality constraints, and the second enforces the nonnegativity constraints through ReLU layers. Convergence to a feasible point follows from alternating projections onto two convex sets. Unlike multi-parametric programming algorithms that rely on the simplex method, our algorithm is a first-order method whose main operations are matrix-vector products, so it can run entirely on a GPU. Compared with other end-to-end learning algorithms, it does not require a training dataset and therefore does not need a CPU-based LP solver.
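
A minimal PyTorch sketch of this alternating-projection architecture is given below. The layer sizes, the number of projection rounds, and the training loop are illustrative assumptions rather than the exact configuration of this work; the equality projection uses the pseudo-inverse A⁺ = Aᵀ(AAᵀ)⁻¹, and the nonnegativity projection is a ReLU layer.

    # Illustrative sketch only: A, b(theta), c, layer sizes, and the number of
    # projection rounds are assumptions, not the exact architecture of this work.
    import torch
    import torch.nn as nn

    class ProjectedLPNet(nn.Module):
        """Predict x(theta) for  min c^T x  s.t.  A x = b(theta),  x >= 0."""

        def __init__(self, A: torch.Tensor, theta_dim: int, hidden: int = 128, rounds: int = 50):
            super().__init__()
            _, n = A.shape
            self.register_buffer("A", A)
            # Pseudo-inverse A^+ used for the orthogonal projection onto {x : A x = b}.
            self.register_buffer("A_pinv", torch.linalg.pinv(A))
            self.rounds = rounds
            self.net = nn.Sequential(              # maps parameters theta to an initial guess
                nn.Linear(theta_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, n),
            )

        def forward(self, theta: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
            x = self.net(theta)                    # (batch, n) initial prediction
            for _ in range(self.rounds):
                # Projection layer 1: equality constraints, x <- x - A^+ (A x - b)
                x = x - (x @ self.A.T - b) @ self.A_pinv.T
                # Projection layer 2: nonnegativity, via ReLU
                x = torch.relu(x)
            return x

    # Self-supervised training: the loss is the LP objective itself, so no
    # pre-solved instances (and no CPU-based LP solver) are needed.
    def train_step(model, optimizer, c, theta, b):
        x = model(theta, b)
        loss = (x @ c).mean()                      # average c^T x over the batch
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Every operation here is a (batched) matrix-vector product, an addition, or a ReLU, so the full forward and backward passes can run on a GPU; the final ReLU makes the output exactly nonnegative, while the equality residual shrinks as the number of projection rounds increases.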

The proposed model is demonstrated on a number of constrained-optimization benchmarks, such as DC optimal power flow (DCOPF) in power systems. In these numerical experiments, the model exhibits excellent performance in terms of optimality gap and solution feasibility. The predictions are guaranteed to satisfy the linear constraints through the projection layers, with any residual violation controllable to a prescribed tolerance, and experimental results indicate that the approximations are remarkably close to the optimal solutions.
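
For reference, the two reported quantities can be computed as sketched below; x_pred denotes a network prediction, x_opt a reference solution from any LP solver used only for evaluation, and all names are illustrative.

    # Illustrative evaluation metrics (all names are placeholders).
    import torch

    def optimality_gap(c, x_pred, x_opt):
        # Relative gap between predicted and reference objective values, averaged over a batch.
        return ((x_pred @ c - x_opt @ c) / (x_opt @ c).abs()).mean()

    def max_equality_violation(A, x_pred, b):
        # Worst-case residual of the equality constraints A x = b over a batch.
        return (x_pred @ A.T - b).abs().max()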

[1] A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos, "The explicit linear quadratic regulator for constrained systems," Automatica, vol. 38, no. 1, pp. 3-20, 2002, doi: 10.1016/S0005-1098(01)00174-1.

[2] E. N. Pistikopoulos, V. Dua, N. A. Bozinis, A. Bemporad, and M. Morari, "On-line optimization via off-line parametric optimization tools," Computers & Chemical Engineering, vol. 24, no. 2, pp. 183-188, 2000, doi: 10.1016/S0098-1354(00)00510-X.

[3] V. Sakizlis, N. M. P. Kakalis, V. Dua, J. D. Perkins, and E. N. Pistikopoulos, "Design of robust model-based controllers via parametric programming," Automatica, vol. 40, no. 2, pp. 189-201, 2004, doi: 10.1016/j.automatica.2003.08.011.

[4] D. Narciso, N. Faísca, K. Kouramas, and E. Pistikopoulos, "Multi-Parametric Model-Based Control: Theory and Applications, Volume 2," 2011, pp. 77-103.

[5] N. A. Diangelakis, B. Burnak, J. Katz, and E. N. Pistikopoulos, "Process design and control optimization: A simultaneous approach by multi-parametric programming," AIChE Journal, vol. 63, no. 11, pp. 4827-4846, 2017, doi: 10.1002/aic.15825.

[6] W. Chen, S. Park, M. Tanneau, and P. Van Hentenryck, "Learning optimization proxies for large-scale Security-Constrained Economic Dispatch," Electric Power Systems Research, vol. 213, p. 108566, 2022, doi: 10.1016/j.epsr.2022.108566.

[7] A. S. Zamzam and K. Baker, "Learning Optimal Solutions for Extremely Fast AC Optimal Power Flow," in 2020 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), 2020, pp. 1-6, doi: 10.1109/SmartGridComm47815.2020.9303008.

[8] F. Fioretto, T. W. K. Mak, and P. Van Hentenryck, "Predicting AC Optimal Power Flows: Combining Deep Learning and Lagrangian Dual Methods," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 01, pp. 630-637, 2020, doi: 10.1609/aaai.v34i01.5403.

[9] J. Kotary, F. Fioretto, and P. V. Hentenryck, "Fast Approximations for Job Shop Scheduling: A Lagrangian Dual Deep Learning Method," in AAAI Conference on Artificial Intelligence, 2021.

[10] P. L. Donti, D. Rolnick, and J. Z. Kolter, "DC3: A learning method for optimization with hard constraints," arXiv preprint arXiv:2104.12225, 2021.

[11] S. Park and P. Van Hentenryck, "Self-Supervised Primal-Dual Learning for Constrained Optimization," Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 4, pp. 4052-4060, 2023, doi: 10.1609/aaai.v37i4.25520.