(681g) Model Predictive Control with Active Learning Under Model Uncertainty

Authors 

Heirung, T. A. N. - Presenter, University of California, Berkeley
Mesbah, A., University of California, Berkeley
Optimal control relies on a model, which is generally uncertain as a result of incomplete knowledge of the system as well as possible changes in the dynamics over time. The most widely used approach to optimal control of multivariable systems with state and input constraints is model predictive control, or MPC [1,2]. Typical sources of uncertainty in a prediction model used in MPC include inaccurate estimates of model parameters and unknown aspects of the model structure itself, such as an unknown kinetic mechanism in a physics-based model or the appropriate order of a data-driven model. The uncertainty in the model structure may be further increased if the system also undergoes abrupt changes (such as faults in actuators and sensors [3]).

Feedback, through its corrective nature, ensures a certain degree of robustness to uncertainty in MPC, but control performance will degrade when feedback alone cannot adequately compensate for incomplete knowledge of the system. As a consequence, problems such as large offsets in setpoint tracking, excessive constraint violations, and even instability may arise. This has led to the development of robust and stochastic MPC, or RMPC and SMPC [4]. While both of these approaches systematically account for model uncertainty, their control performance depends strongly on the accuracy and precision of the uncertainty descriptions, which are generally not updated in real time. Adaptive control involves continuously improving the model by adjusting it under closed-loop control [5]. However, the data generated in closed loop must be sufficiently informative for the model adaptation to be beneficial and to avoid problems such as bursting and loss of controllability. In optimal control of systems with reducible model uncertainty, the inputs must therefore have a probing effect that generates informative closed-loop data for model adaptation through active learning, in addition to their directing effect of controlling the system state [6].
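As a rough illustration of this probing-versus-directing trade-off, the sketch below augments a finite-horizon tracking objective for a scalar system with an uncertain input gain by a term that rewards the Fisher information the planned inputs contribute about that gain. This is only a minimal, generic example of active learning in a receding-horizon setting, not the formulation studied here; the scalar model, the weight lam, and the function names are assumptions made purely for illustration.

# Minimal sketch (not the formulation in this work): a finite-horizon input sequence
# is chosen to trade off tracking performance against the expected reduction in the
# variance of an uncertain input gain b, via the Fisher information the inputs add.
import numpy as np
from scipy.optimize import minimize

def dual_cost(u_seq, x0, a, b_mean, b_var, noise_var, x_ref, lam):
    x = x0
    tracking = 0.0
    info = 1.0 / b_var                    # prior information about the uncertain gain b
    for u in u_seq:
        x = a * x + b_mean * u            # certainty-equivalent state prediction
        tracking += (x - x_ref) ** 2 + 0.01 * u ** 2
        info += u ** 2 / noise_var        # information about b added by exciting the input
    # Larger terminal information corresponds to a smaller expected posterior variance
    # of b, so lam / info penalizes input sequences that are uninformative about b.
    return tracking + lam / info

# Receding-horizon use: solve, apply the first input, re-estimate b, and repeat.
horizon = 10
res = minimize(dual_cost, np.zeros(horizon),
               args=(1.0, 0.9, 0.5, 0.25, 0.1, 1.0, 5.0))
u_apply = res.x[0]

Setting lam to zero recovers a purely certainty-equivalent (passive) controller, which makes the distinction between passive adaptation and active learning explicit in this toy setting.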

Here we discuss the problem of MPC with active learning for systems with probabilistic uncertainty descriptions [7]. Through illustrative case studies [8], we demonstrate the potential of active learning to maintain MPC performance in the presence of model uncertainty. The first case study involves a continuous stirred-tank reactor with uncertain reaction parameters in the model. A change in the process, represented by a drop in a reaction constant, leads to significant performance loss with three standard approaches to MPC. Adding active learning to the controller mitigates most of the performance loss and enables quick recovery of productivity. The second case study considers a continuous bioreactor, in which a change in the growth conditions can be captured by a structural change in the model. Compared with nominal and passively adaptive MPC, the addition of active learning enables quick reduction of model-structure uncertainty through high-confidence selection of the growth model corresponding to the new conditions. These simulation results show that active learning can be highly beneficial when a system undergoes abrupt changes, such as the sudden occurrence of a fault or malfunction, that can compromise operational safety, reliability, and profitability.
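To make the model-structure learning in the second case study concrete, the fragment below shows a generic recursive Bayesian update of a posterior over a small set of candidate model structures, assuming Gaussian measurement noise. It is a simplified stand-in for the discrimination mechanism discussed in [8]; the candidate predictions, the noise level, and the function name are illustrative assumptions, not values from the case study.

# Generic multi-model Bayesian update (illustrative only): the posterior probability
# of each candidate model structure is reweighted by how well that model predicts
# the latest measurement, assuming Gaussian measurement noise.
import numpy as np

def update_model_posterior(prior, predictions, measurement, noise_std):
    likelihood = np.exp(-0.5 * ((measurement - predictions) / noise_std) ** 2)
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Hypothetical example with two candidate growth models predicting the same output:
prior = np.array([0.5, 0.5])
predictions = np.array([0.42, 0.61])      # model 1 vs. model 2 prediction
posterior = update_model_posterior(prior, predictions, measurement=0.58, noise_std=0.05)
# Repeating this update at every sampling instant concentrates the posterior on the
# structure consistent with the data; informative (probing) inputs accelerate this.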

References

[1] D.Q. Mayne, J.B. Rawlings, C.V. Rao, and P.O.M. Scokaert, “Constrained model predictive control: Stability and optimality,” Automatica, vol. 36, no. 6, pp. 789–814, 2000.

[2] S.J. Qin and T.A. Badgwell, “A survey of industrial model predictive control technology,” Control Engineering Practice, vol. 11, no. 7, pp. 733–764, 2003.

[3] M. Blanke, M. Kinnaert, J. Lunze, and M. Staroswiecki, Diagnosis and Fault-Tolerant Control. Berlin: Springer, 2nd ed., 2006.

[4] B. Kouvaritakis and M. Cannon, Model Predictive Control: Classical, Robust and Stochastic. London: Springer, 2016.

[5] K.J. Åström and B. Wittenmark, Adaptive Control. Reading, MA: Addison-Wesley, 2nd ed., 1995.

[6] A.A. Feldbaum, “Dual-control theory. I,” Automation and Remote Control, vol. 21, no. 9, pp. 874–880, 1961.

[7] A. Mesbah, “Stochastic model predictive control with active uncertainty learning: A survey on dual control,” Annual Reviews in Control, 2017.

[8] T.A.N. Heirung, J.A. Paulson, S.J. Lee, and A. Mesbah, “Model predictive control with active learning under model uncertainty: why, when, and how,” AIChE Journal, 2018.