(674e) Sample-Efficient High-Dimensional MPC Tuning Using Bayesian Optimization over Sparse Axis-Aligned Subspaces | AIChE

Authors 

Mesbah, A., University of California, Berkeley
Paulson, J., The Ohio State University
Huynh, M., University of California, Berkeley

Model predictive control (MPC) is a powerful technology that has been successfully applied to a wide variety of complex engineering applications, including biomedical, sustainability, and chemical process systems [1]. The closed-loop performance of MPC strongly depends on several factors, such as the prediction model, the choice of real-time optimization routine, and the selection of MPC hyperparameters. Practical deployment of MPC therefore requires careful tuning of these parameters; manual tuning is time-consuming, labor-intensive, and often subjective, relying on trial-and-error or heuristic approaches. Moreover, the inherent complexity of MPC problems, coupled with the nonlinear and uncertain nature of real-world systems, poses significant challenges for traditional optimization techniques. The need for sample-efficient MPC tuning has recently motivated the use of Bayesian optimization (BO), a powerful framework designed for noisy, expensive-to-evaluate functions [2], [3]. Although BO consistently achieves strong performance on tuning problems with a small number of parameters, its application in high-dimensional settings remains a significant challenge. Previous works have circumvented this challenge by assuming a priori knowledge that allows practitioners to identify a subset of key parameters before running the optimization procedure. Such knowledge cannot be relied upon in the early phase of tuning, when there can easily be tens to hundreds of parameters whose individual impact on controller performance is unknown.
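To make the problem setup concrete, MPC tuning can be cast as a black-box optimization: each evaluation runs a closed-loop simulation with candidate parameters and returns a noisy performance score. The sketch below is a minimal illustration (not the authors' setup) with a hypothetical toy objective in which only a few of many parameters actually matter, mirroring the high-dimensional setting described above; a naive random-search baseline stands in for the more sample-efficient BO methods discussed next.

```python
import random

def closed_loop_cost(theta):
    # Hypothetical stand-in for an expensive closed-loop MPC simulation:
    # of the 20 tuning parameters, only the first two affect performance,
    # mimicking the sparsity often found in task-specific tuning problems.
    important = (theta[0] - 0.3) ** 2 + (theta[1] - 0.7) ** 2
    noise = 0.01 * random.gauss(0.0, 1.0)  # noisy performance measurement
    return important + noise

random.seed(0)
D = 50  # total number of tuning parameters (mostly irrelevant)
# Naive random-search baseline: sample candidates, keep the best score.
# A sample-efficient tuner would replace this loop with a surrogate model
# and an acquisition function.
best = float("inf")
for _ in range(50):
    theta = [random.random() for _ in range(D)]
    best = min(best, closed_loop_cost(theta))
print(f"best cost after 50 random evaluations: {best:.3f}")
```

Because only two of the fifty dimensions matter, methods that can discover this structure from data need far fewer expensive closed-loop evaluations than ones that treat all dimensions equally.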

Gaussian process (GP) surrogate models defined on sparse axis-aligned subspaces (SAAS) are a model class that strikes a balance between flexibility and parsimony in the low-data regime [4]. The key assumption behind the SAAS-GP model is that only a small number of features strongly impact the objective function, which we conjecture holds in many task-specific MPC tuning problems. Unlike standard GPs, which are fit using maximum likelihood estimation (MLE), the SAAS approach takes a fully Bayesian perspective, placing a prior distribution over each hyperparameter. For practical purposes, the prior must have a strong sparsifying effect on the input features, so that a reasonable model can be inferred from limited data. The SAAS prior induces a sparse structure by placing a half-Cauchy prior on the inverse lengthscales; because the density of this distribution concentrates around zero, most dimensions are "turned off" initially. As observations are gathered, dimensions are unlocked only when there is sufficient evidence for their relevance. In this work, we apply a BO framework based on SAAS-GP models to MPC tuning applications. We demonstrate the practical effectiveness of the proposed approach on a challenging hierarchical MPC design problem [5] with more than 20 tuning parameters. Our results show that, by exploiting SAAS-GPs, BO achieves an order-of-magnitude improvement in the performance of the best identified tuning parameters compared to traditional GP models. Furthermore, we investigate the ability of the SAAS-GP to distinguish critical from unimportant parameters directly from performance data. Such insights provide useful information to designers that can be exploited in future and/or related tuning tasks.
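To give intuition for the sparsifying effect of the half-Cauchy prior, the following standalone sketch (not the authors' implementation) samples inverse lengthscales from a half-Cauchy distribution with a small, fixed scale: most draws land near zero, corresponding to "turned-off" dimensions, while the heavy tails still allow occasional large values through which a dimension can be unlocked given evidence.

```python
import math
import random

random.seed(42)

def sample_half_cauchy(scale):
    # |X| where X ~ Cauchy(0, scale), sampled via the inverse CDF.
    return abs(scale * math.tan(math.pi * (random.random() - 0.5)))

tau = 0.1  # small scale -> strong shrinkage of inverse lengthscales to zero
n = 10_000
draws = [sample_half_cauchy(tau) for _ in range(n)]

near_zero = sum(d < 0.1 for d in draws) / n  # effectively "turned-off" dims
large = sum(d > 1.0 for d in draws) / n      # heavy-tail escapes
print(f"fraction below 0.1: {near_zero:.2f}")
print(f"fraction above 1.0: {large:.3f}")
```

Note that the full SAAS model also places a prior on the global scale (here fixed to tau for simplicity) and infers the posterior over all hyperparameters with MCMC; in practice one would use an off-the-shelf implementation such as the SAAS-GP model in BoTorch rather than hand-rolling the prior.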

References:

[1] J. B. Rawlings, D. Q. Mayne, and M. Diehl, Model Predictive Control: Theory, Computation, and Design.

[2] J. A. Paulson, F. Sorourifar, and A. Mesbah, “A Tutorial on Derivative-Free Policy Learning Methods for Interpretable Controller Representations.”

[3] F. Sorourifar, G. Makrygirgos, A. Mesbah, and J. A. Paulson, “A Data-Driven Automatic Tuning Method for MPC under Uncertainty using Constrained Bayesian Optimization,” Nov. 2020, [Online]. Available: http://arxiv.org/abs/2011.11841

[4] D. Eriksson and M. Jankowiak, “High-Dimensional Bayesian Optimization with Sparse Axis-Aligned Subspaces.”

[5] D. Piga, M. Forgione, S. Formentin, and A. Bemporad, “Performance-oriented model learning for data-driven MPC design,” Apr. 2019, doi: 10.1109/LCSYS.2019.2913347.