(176f) Statistical Machine Learning in Model Predictive Control of Nonlinear Processes
AIChE Annual Meeting
2021 Annual Meeting
Computing and Systems Technology Division
Advances in Process Control II
Monday, November 8, 2021 - 5:05pm to 5:24pm
In this work, we leverage statistical machine learning theory to develop a generalization error bound for a recurrent neural network (RNN) model of multiple-input multiple-output nonlinear systems. The generalization error bound provides insight into how well the RNN model will perform on real processes, and it also guides how to improve generalization performance by optimizing the neural network size and the data collection process. Subsequently, we incorporate the RNN model within model predictive control (MPC) and carry out a probabilistic stability analysis to demonstrate that, provided the RNN model satisfies the generalization error bound, the nonlinear system can be stabilized at the steady state under MPC with a certain probability. Finally, a chemical process example is used to demonstrate the relationship between the RNN generalization error and the training error, along with their dependence on network complexity and data set size. Closed-loop simulations are also carried out to demonstrate the probabilistic closed-loop stability of nonlinear systems under the RNN-based MPC.
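The closed-loop idea above can be illustrated with a minimal, self-contained sketch: an RNN surrogate model used inside a receding-horizon MPC loop that drives a nonlinear process toward its steady state. Everything here is a hypothetical stand-in, not the paper's formulation: the one-state "plant", the fixed RNN weights (assumed already trained), and the sampling-based (random shooting) approximation of the MPC optimization are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(x, u):
    # Illustrative one-state nonlinear process (stand-in for a chemical process).
    return 0.9 * x + 0.1 * np.tanh(x) + 0.2 * u

class RNNModel:
    # Minimal Elman-style RNN surrogate; weights are assumed given (trained offline).
    def __init__(self, wx=0.85, wu=0.2, wh=0.1):
        self.wx, self.wu, self.wh = wx, wu, wh

    def step(self, x, u, h=0.0):
        h = np.tanh(self.wh * h + x)          # hidden-state update
        return self.wx * x + 0.1 * h + self.wu * u  # predicted next state

def mpc(model, x0, horizon=5, n_samples=200, u_max=1.0):
    # Random-shooting approximation of the MPC optimization: sample candidate
    # input sequences, roll each through the RNN model, keep the cheapest one,
    # and apply only its first input (receding horizon).
    best_u, best_cost = 0.0, np.inf
    for _ in range(n_samples):
        u_seq = rng.uniform(-u_max, u_max, horizon)
        x, cost = x0, 0.0
        for u in u_seq:
            x = model.step(x, u)
            cost += x**2 + 0.1 * u**2         # quadratic cost around the origin
        if cost < best_cost:
            best_cost, best_u = cost, u_seq[0]
    return best_u

model = RNNModel()
x = 2.0
for k in range(30):
    u = mpc(model, x)   # input computed from the RNN model, not the plant
    x = plant(x, u)     # applied to the (mismatched) true process
print(f"final state: {x:.4f}")  # state is driven toward the steady state x = 0
```

Note the deliberate model-plant mismatch (`wx=0.85` in the RNN vs. `0.9` in the plant): the closed loop still steers the state toward the origin, which is the qualitative behavior the generalization-error analysis is meant to guarantee with a certain probability.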