
(468a) Sampling-Based Bayesian Modeling with Proper Likelihood and Prior Information

Authors 

Chen, H. - Presenter, The Ohio State University


For decades, the process modeling field has been dominated by traditional methods rooted in a frequentist perspective, such as Ordinary Least Squares (OLS), Principal Component Analysis (PCA), and Partial Least Squares (PLS). These methods are easy to use and computationally efficient, but most of them cannot exploit any information beyond the data themselves. Some maximum likelihood methods, such as Maximum Likelihood Principal Component Analysis (MLPCA), can use information about the measurement noise in the data, but restrictive assumptions, such as Gaussian distributions, are often made. Moreover, these methods treat model parameters as deterministic and are therefore ill-suited to exploiting prior information about the data and the process model. In contrast, Bayesian modeling methods can incorporate the data likelihood and prior information in a rigorous way. Bayesian PCA (BPCA) (Nounou et al. 2002A) and Bayesian Latent Variable Regression (BLVR) (Nounou et al. 2002B) have been developed, but these methods are optimization-based and computationally expensive, and they retain many restrictive assumptions about the likelihood and prior that make them unsuitable for many practical problems. Beyond these technical issues, a more practical challenge is that not only is prior information hard to obtain, but even the likelihood information is often not readily available to modelers. When improper parameters are used in the likelihood and prior distribution functions, exploiting that information can be detrimental to the modeling results instead of beneficial, as intended. This problem has been a discouraging factor in the practical application of Bayesian modeling.
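
As a minimal illustration of this contrast (a sketch of our own, not taken from any of the cited methods), consider Bayesian linear regression with a Gaussian prior on the coefficients: the posterior mean blends the prior with the data, while OLS relies on the data alone. All names and numerical settings below (sigma2, tau2, b0, the simulated data) are illustrative assumptions.

# Illustrative sketch: OLS point estimate vs. Bayesian posterior mean
# for linear regression with a Gaussian prior b ~ N(b0, tau2 I).
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 2                              # deliberately few samples
X = rng.normal(size=(n, p))
b_true = np.array([1.0, -1.0])
y = X @ b_true + rng.normal(scale=1.0, size=n)

# Frequentist point estimate: uses the data only
b_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Assumed likelihood/prior parameters (hypothetical values)
sigma2, tau2, b0 = 1.0, 0.5, np.zeros(p)

# Posterior mean = (X'X/sigma2 + I/tau2)^(-1) (X'y/sigma2 + b0/tau2)
A = X.T @ X / sigma2 + np.eye(p) / tau2
b_post = np.linalg.solve(A, X.T @ y / sigma2 + b0 / tau2)

print("OLS estimate  :", b_ols)
print("posterior mean:", b_post)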

From a Bayesian perspective, this problem can be solved by adding a layer of prior distributions over the uncertain parameters of the likelihood and prior distribution functions. This hierarchical approach is commonly used in Bayesian statistics. Because these parameters are now treated as stochastic variables, their posterior distribution can be learned from the data itself, which is far better than simply assigning them improper fixed values. However, with more stochastic variables in the model, even more computation is needed, and solving the problem with optimization routines is usually very difficult.
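
A minimal sketch of this hierarchical idea, under assumed conjugate forms rather than the full latent-variable model discussed here: a Gibbs sampler for linear regression in which the noise variance receives an inverse-Gamma hyperprior (a0 and b0 below are assumed hyperparameters), so the noise variance is sampled alongside the coefficients and learned from the data instead of being fixed.

# Illustrative Gibbs sampler: noise variance sigma^2 given an inverse-Gamma
# hyperprior and learned from the data (not the method proposed in this work).
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = X b_true + noise with "unknown" noise variance
n, p = 200, 3
X = rng.normal(size=(n, p))
b_true = np.array([1.0, -2.0, 0.5])
sigma_true = 0.3
y = X @ b_true + rng.normal(scale=sigma_true, size=n)

# Priors: b ~ N(0, tau2 I), sigma2 ~ Inverse-Gamma(a0, b0) (weak hyperprior)
tau2, a0, b0 = 10.0, 0.01, 0.01

n_iter = 5000
b, sigma2 = np.zeros(p), 1.0
b_draws, sigma2_draws = [], []

for it in range(n_iter):
    # 1) b | sigma2, y ~ N(mu_n, V_n)   (conjugate Gaussian update)
    V_n = np.linalg.inv(X.T @ X / sigma2 + np.eye(p) / tau2)
    mu_n = V_n @ (X.T @ y) / sigma2
    b = rng.multivariate_normal(mu_n, V_n)

    # 2) sigma2 | b, y ~ Inverse-Gamma(a0 + n/2, b0 + 0.5 * RSS)
    resid = y - X @ b
    sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * resid @ resid))

    b_draws.append(b)
    sigma2_draws.append(sigma2)

# Posterior summaries after burn-in: sigma^2 is recovered from the data
burn = n_iter // 2
print("posterior mean of b      :", np.mean(b_draws[burn:], axis=0))
print("posterior mean of sigma^2:", np.mean(sigma2_draws[burn:]),
      "(true:", sigma_true**2, ")")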

In the past few years, there has been a surge in applying Bayesian methods to process modeling and parameter estimation problems (Chen et al. 2004; Coleman and Block 2006; Jitjareonchai et al. 2006). The recent momentum of Bayesian modeling methods is in large part driven by the adoption of advanced Monte Carlo sampling techniques, such as Markov Chain Monte Carlo (MCMC) (Gamerman 1997). These techniques greatly facilitate Bayesian computation and help relax many of the restrictive assumptions made by optimization-based methods. A sampling-based BLVR (BLVR-S) (Chen et al. 2006) has already been developed. This approach is computationally more efficient than optimization-based BLVR when modeling high-dimensional data sets, and it can easily provide uncertainty information about the estimates. However, the current BLVR-S approach still assumes Gaussian likelihood and prior distributions for computational convenience and treats the parameters of those distributions as deterministic. To make this approach more appealing for practical problems, in this work we propose a novel Gibbs sampling-based BLVR method. In this approach, a non-informative (inverse-Gamma) prior is assumed for the variance of the measurement noise. Therefore, even if the true value of the noise variance is not known to great accuracy, it can be learned from the data during the modeling process, and the modeling results will not suffer from improper likelihood information. With the help of importance sampling steps within Gibbs sampling (Chen et al. 2004), the assumption of a Gaussian prior distribution can also be relaxed. This paves the way for more proper Bayesian modeling of non-Gaussian data sets with BLVR. The benefits of this approach will be illustrated with both simulated examples and applications to real data sets.
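
To illustrate the kind of sampling-importance-resampling (SIR) step that can stand in for an intractable conditional draw inside a Gibbs sweep when the prior is non-Gaussian, the sketch below updates a scalar coefficient under a Laplace prior. It is only a generic illustration of importance sampling within Gibbs, not the BLVR-S algorithm itself; the function name, the Laplace prior, and all parameter values are our own assumptions.

# Illustrative SIR-within-Gibbs step: draw b ~ p(b | y, sigma2)
# proportional to N(y | x*b, sigma2) * Laplace(b | 0, 1/lam),
# using a Gaussian proposal and importance resampling.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def sir_step_b(x, y, sigma2, lam, n_particles=500):
    # Gaussian proposal: the conditional that a flat/Gaussian prior would give
    prec = x @ x / sigma2
    mean = (x @ y / sigma2) / prec
    cand = rng.normal(mean, np.sqrt(1.0 / prec), size=n_particles)

    # Importance weights = target / proposal; the Gaussian likelihood factor
    # cancels, leaving the non-Gaussian prior evaluated at each candidate
    log_w = stats.laplace.logpdf(cand, scale=1.0 / lam)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # Resample one particle in proportion to its weight
    return rng.choice(cand, p=w)

# Example usage inside a Gibbs sweep (x: covariates, y: responses)
x = rng.normal(size=100)
y = 0.8 * x + rng.normal(scale=0.5, size=100)
print(sir_step_b(x, y, sigma2=0.25, lam=1.0))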

References:

Chen H., Bakshi B.R. and Goel P.K. (2006), Sampling-based Bayesian Latent Variable Regression, Chemical Process Control 7, Alberta, Canada.

Chen W.S., Bakshi B.R., Goel P.K. and Ungarala S. (2004), Bayesian Estimation of Unconstrained Nonlinear Dynamic Systems via Sequential Monte Carlo Sampling. Industrial & Engineering Chemistry Research, 43: 4012-4025.

Coleman M.C. and Block D.E. (2006), Bayesian Parameter Estimation with Informative Priors for Nonlinear Systems. AIChE Journal, 52: 651-667.

Gamerman D. (1997), Markov Chain Monte Carlo, Chapman & Hall.

Jitjareonchai J.J., Reilly P.M., Duever T.A. and Chambers D.B. (2006), Parameter Estimation in the Error-in-Variables Models Using the Gibbs Sampler. Canadian Journal of Chemical Engineering, 84: 125-138.

Nounou M.N., Bakshi B.R., Goel P.K. and Shen X. (2002A), Bayesian Principal Component Analysis, Journal of Chemometrics, 16: 579-595.

Nounou M.N., Bakshi B.R., Goel P.K. and Shen X. (2002B), Process Modeling By Bayesian Latent Variable Regression, AIChE Journal, 48(8):1775-1793.
