(674e) Sample-Efficient High-Dimensional MPC Tuning Using Bayesian Optimization over Sparse Axis-Aligned Subspaces
AIChE Annual Meeting
2024
Computing and Systems Technology Division
10B: AI/ML Modeling, Optimization and Control Applications I
Thursday, October 31, 2024 - 1:34pm to 1:50pm
Gaussian process (GP) surrogate models defined on sparse axis-aligned subspaces (SAAS) are a model class capable of striking a balance between flexibility and parsimony in low-data regimes [4]. The key assumption behind the SAAS-GP model is that only a small number of features have a strong impact on the objective function, which we conjecture holds in many task-specific MPC tuning problems. Unlike standard GPs, which are fit using maximum likelihood estimation (MLE), the SAAS approach takes a fully Bayesian perspective on the hyperparameters, positing that each hyperparameter is drawn from a prior distribution. For practical purposes, the prior must have a strong sparsifying effect on the input features, so that a reasonable model can be inferred from limited data. The SAAS prior induces a sparse structure on the inverse lengthscales by placing a half-Cauchy distribution on them; because the density of this distribution concentrates around zero, most dimensions are "turned off" initially. As observations are gathered, dimensions are unlocked when there is sufficient evidence of their relevance. In this work, we apply a BO framework based on SAAS-GP models to MPC tuning applications. We demonstrate the practical effectiveness of the proposed approach on a challenging hierarchical MPC design problem [5] with more than 20 tuning parameters. Our results show that, by exploiting SAAS-GPs, BO achieves an order-of-magnitude improvement in the best-identified tuning parameters compared to traditional GP models. Furthermore, we investigate the ability of the SAAS-GP to distinguish critical from unimportant parameters directly from performance data. Such insights provide useful information to designers that can be exploited in future and/or related tuning tasks.
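The sparsifying mechanism described above can be illustrated with a minimal NumPy sketch. The hierarchical prior places a half-Cauchy distribution on a global shrinkage parameter and on each dimension's inverse squared lengthscale, so most sampled lengthscale parameters sit near zero (the dimension is "turned off") while the heavy tails allow a few to take large values. The specific scale values and dimension count below are illustrative assumptions, not settings from this work:

```python
import numpy as np

rng = np.random.default_rng(0)

def half_cauchy(scale, size, rng):
    # Sample |X| where X ~ Cauchy(0, scale): nonnegative, density
    # concentrated near zero, but with heavy tails.
    return np.abs(scale * rng.standard_cauchy(size))

# SAAS-style hierarchical prior (scales chosen for illustration only):
# a global shrinkage tau, then per-dimension inverse squared
# lengthscales rho_i conditioned on tau. Small tau pulls most rho_i
# toward zero; heavy tails still let a few dimensions "unlock".
d = 20
tau = half_cauchy(scale=0.1, size=1, rng=rng)   # global shrinkage
rho = half_cauchy(scale=tau, size=d, rng=rng)   # per-dimension inverse sq. lengthscales

def saas_rbf_kernel(x1, x2, rho):
    # RBF kernel with per-dimension inverse squared lengthscales rho:
    #   k(x, x') = exp(-0.5 * sum_i rho_i * (x_i - x'_i)^2)
    # Dimensions with rho_i ~ 0 contribute almost nothing to the
    # squared distance, i.e. they are effectively pruned from the model.
    return np.exp(-0.5 * np.sum(rho * (x1 - x2) ** 2))
```

In a full SAAS-GP, the posterior over `tau` and `rho` is sampled with MCMC (e.g. NUTS) rather than fixed by MLE, which is what allows dimensions to be activated only once the data provide sufficient evidence.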
References:
[1] J. Rawlings, D. Mayne, and M. Diehl, Model Predictive Control: Theory, Computation, and Design.
[2] J. A. Paulson, F. Sorourifar, and A. Mesbah, "A Tutorial on Derivative-Free Policy Learning Methods for Interpretable Controller Representations."
[3] F. Sorourifar, G. Makrygirgos, A. Mesbah, and J. A. Paulson, "A Data-Driven Automatic Tuning Method for MPC under Uncertainty using Constrained Bayesian Optimization," Nov. 2020, [Online]. Available: http://arxiv.org/abs/2011.11841
[4] D. Eriksson and M. Jankowiak, "High-Dimensional Bayesian Optimization with Sparse Axis-Aligned Subspaces."
[5] D. Piga, M. Forgione, S. Formentin, and A. Bemporad, "Performance-oriented model learning for data-driven MPC design," Apr. 2019, doi: 10.1109/LCSYS.2019.2913347.