(469e) Novel Density Based State Estimation Methods in Nonlinear Model Predictive Control | AIChE


Authors 

Ungarala, S. - Presenter, Cleveland State University
Li, K. - Presenter, Carnegie Mellon University


Model predictive control (MPC) has become popular for industrial applications in recent years due to its ability to handle hard constraints and its robustness properties. Most MPC applications at present are based on linear models because of their mature theory and the simplicity of solving the associated optimization problems online. However, linear models are inadequate for a host of chemical engineering processes. In such cases, MPC based on nonlinear models must be applied to obtain satisfactory control performance.

Recent trends in MPC favor the closed-loop approach, where measurements are incorporated into the prediction. This feature necessitates an estimator that recovers the states from noisy measurements and knowledge of a process model with uncertainty. The prediction is repeated at every time instant using the recovered states as initial conditions. Since closed-loop MPC requires the estimation and regulation problems to be solved online at each step, the available computation time is limited to the interval between two successive measurements. For a linear system, the state variables at each time step are Gaussian-distributed, since the initial condition, system noise and measurement noise are all assumed to be Gaussian. The estimation problem can then be solved easily by the Kalman filter, and the regulation problem can be solved very quickly by linear programming. For nonlinear systems, however, the computational cost becomes the biggest challenge for MPC applications: both the regulation and estimation problems are generally nonlinear optimization problems, which are time-consuming to solve even for simple nonlinearities. In this work, the performance of the most popular nonlinear estimation methods is compared with two novel probability density function based filters incorporated into nonlinear MPC.
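For the linear-Gaussian case above, the Kalman filter recursion can be sketched as follows. This is a minimal illustration with generic system matrices A, C and noise covariances Q, R, not the specific system studied in this work:

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of the Kalman filter for the model
    x_{k+1} = A x_k + w,  y_k = C x_k + v,  w ~ N(0,Q), v ~ N(0,R)."""
    # Predict: propagate mean and covariance through the linear model
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: correct the prediction with the new measurement y
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```

Because A, C, Q and R are fixed, the update is a handful of small matrix operations per step, which is why estimation is essentially free relative to the regulation problem in linear MPC.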

The state estimation methods currently used in MPC are the Extended Kalman Filter (EKF) (Lee and Ricker, 1994) and Moving Horizon Estimation (MHE) (Tenny et al., 2004). The EKF repeatedly linearizes the nonlinear model and applies a Kalman filter to the resulting time-varying linear system. The estimation component of EKF-based nonlinear MPC imposes negligible online computational load compared to the control signal optimization, since it deals with a linear problem. However, the EKF assumes that the state variables follow Gaussian distributions, an assumption easily violated by most nonlinear systems; if the system is highly nonlinear, the performance of the EKF can be poor. The more recent MHE is fast becoming an alternative to the EKF due to its superior estimation properties. MHE treats estimation as a least-squares optimization problem over a moving window. The cost of this better performance, however, is a longer computation time, comparable to the optimization cost of regulation. Both EKF and MHE are function based state estimators in the sense that they pose optimization problems with the nonlinear (or linearized) model functions as constraints.
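The linearize-then-correct EKF cycle can be sketched as follows. The model function f, measurement function h and their Jacobians are hypothetical placeholders supplied by the user, not the equations from the cited papers:

```python
import numpy as np

def ekf_step(x, P, y, f, h, F_jac, H_jac, Q, R):
    """One EKF cycle: linearize the nonlinear model around the current
    estimate and apply the standard Kalman recursion."""
    # Predict through the full nonlinear model; use the Jacobian
    # at the current estimate to propagate the covariance
    F = F_jac(x)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update with the measurement, linearizing h around the prediction
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

The Gaussian assumption enters through P: the filter carries only a mean and covariance, so any multimodality or skew induced by the nonlinearity is lost.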

Recently, several novel probability density function based state estimators have been proposed, which do away with the requirements of Gaussianity, linearity and additive noise terms in models. The Sequential Monte Carlo (SMC) or particle filter (Chen et al., 2004) and the Markov chain based Cell Filter (CF) (Ungarala and Chen, 2003) are typical examples. These estimators are extremely general, easy to implement, do not involve optimization routines and are mostly immune to divergence. While they are computationally more expensive than the EKF, they have been shown to be less demanding and easier to tune than MHE (Chen et al., 2004). The SMC algorithm uses a set of samples to represent the actual distribution of the state variables; if the number of samples is large enough, the approximation can be very close to the true distribution. At each time instant, the importance of each sample is evaluated from its likelihood given the measurement, and the samples are then resampled according to their importance to approximate the distribution of the current state variables. The algorithm is an approximate implementation of the infinite dimensional Bayesian estimator. The Cell Filter is also Monte Carlo in nature, but it builds a Markov chain model of the system to propagate the probability distributions. Since the Markov chain is generated off-line, the CF exerts very little computational burden during online estimation. In this work, SMC and CF are incorporated into the nonlinear MPC framework. In a simulation study, a CSTR is controlled by a closed-loop nonlinear MPC controller, with the online regulation problem solved by nonlinear programming. The performance is superior to that of an EKF based MPC with only a mild increase in computational load, and it achieves the same performance as an MHE based MPC at a smaller computational cost.
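The propagate-weight-resample recursion described above can be sketched as a bootstrap particle filter. The transition function, likelihood and noise level below are hypothetical placeholders, not the CSTR model of the simulation study:

```python
import numpy as np

def particle_filter_step(particles, y, f, lik, q_std, rng):
    """One bootstrap particle filter cycle.
    particles : (N, n) array of state samples
    f         : state transition function (vectorized over samples)
    lik       : lik(y, particles) -> (N,) measurement likelihoods
    q_std     : process-noise standard deviation
    """
    N = len(particles)
    # Propagate each sample through the (possibly nonlinear) model
    particles = f(particles) + q_std * rng.standard_normal(particles.shape)
    # Weight samples by their likelihood given the new measurement
    w = lik(y, particles)
    w /= w.sum()
    # Resample according to importance weights (multinomial resampling)
    idx = rng.choice(N, size=N, p=w)
    return particles[idx]
```

The state estimate handed to the regulator can then be, for example, the sample mean `particles.mean(axis=0)`. No optimization is involved, and no Gaussian or additive-noise assumption is imposed on the model, which is what distinguishes these density based filters from the EKF and MHE.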

References:

Chen, W.-S., B. R. Bakshi, P. Goel and S. Ungarala, Bayesian estimation via sequential Monte Carlo sampling: unconstrained nonlinear dynamic systems, Ind. Eng. Chem. Res., 43, 4012-4025, 2004.

Lee, J. H. and N. L. Ricker, Extended Kalman filter based nonlinear model predictive control, Ind. Eng. Chem. Res., 33, 1530-1541, 1994.

Tenny, M. J., J. B. Rawlings and S. J. Wright, Closed-loop behavior of nonlinear model predictive control, AIChE J., 50(9), 2142, 2004.

Ungarala, S. and Z. Z. Chen, Bayesian data rectification of nonlinear systems with Markov chains in cell space, Proc. ACC, 4857-4862, Denver, CO, 2003.