(654a) A Fast and Efficient Computational Framework for Large-Scale Nonlinear Model Predictive Control
AIChE Annual Meeting
2006 Annual Meeting
Computing and Systems Technology Division
Advances in Nonlinear Control
Friday, November 17, 2006 - 12:30pm to 12:50pm
There is a strong economic incentive for the development and application of efficient process monitoring and optimizing control strategies that embed rigorous nonlinear dynamic process models.
The dynamic behavior of chemical and biological processes is described by large sets of differential algebraic equations (DAEs). Typical rigorous models are capable of predicting the evolution of distributed or lumped multi-component systems with complex kinetic mechanisms, thermodynamics and transport phenomena occurring at different scales. These models comprise thousands of stiff and highly nonlinear DAEs whose solution is extremely expensive. Once a model is constructed, a large number of parameters must be estimated using the rigorous dynamic process model and scarce, sometimes uninformative, industrial data. Furthermore, since the model is expected to provide robust and accurate predictions, the model parameters must be estimated with tight confidence regions by solving complex multi-set parameter estimation problems.
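To make the parameter estimation task concrete, the following is a minimal sketch (a toy problem of our own, not the industrial model discussed in this work): a single rate constant `theta` in the kinetic model dx/dt = -theta·x is estimated from noisy synthetic observations, and an approximate confidence interval is obtained from the least-squares covariance.

```python
import numpy as np

# Toy illustration: estimate theta in dx/dt = -theta * x from noisy
# synthetic data, with an approximate 95% confidence half-width.
rng = np.random.default_rng(0)
theta_true = 2.0
t = np.linspace(0.0, 1.5, 40)
x_obs = np.exp(-theta_true * t) + 0.005 * rng.standard_normal(t.size)

# The model is linear in theta after a log transform: log x = -theta * t
mask = x_obs > 0                 # guard against log of non-positive points
A = -t[mask].reshape(-1, 1)      # design "matrix" for the single parameter
y = np.log(x_obs[mask])

theta_hat, res, *_ = np.linalg.lstsq(A, y, rcond=None)
theta_hat = float(theta_hat[0])

# Approximate 95% confidence half-width from the parameter covariance
dof = y.size - 1
sigma2 = float(res[0]) / dof if res.size else 0.0
cov = sigma2 * np.linalg.inv(A.T @ A)
half_width = 1.96 * float(np.sqrt(cov[0, 0]))
print(theta_hat, half_width)
```

In a realistic multi-set problem the estimation is of course nonlinear and must embed the full DAE model; the sketch only illustrates the structure (data, model, estimate, confidence region).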
Coupled with the off-line estimation of the model parameters, on-line implementations must cope with unmeasured disturbances and uncertain phenomena so as to minimize plant-model mismatch and obtain accurate predictions of the evolving state of the system. It is then natural to use these accurate predictions to design and implement optimal operation and control strategies. The explicit handling of constraints on the control and state variables, the choice of suitable performance or economic objectives, and implicit robustness are among the most important features of control strategies such as nonlinear model predictive control (NMPC), which in particular has received considerable attention in recent years.
Both the estimation and control tasks are usually implemented using moving horizon formulations in which a large-scale and complicated dynamic (DAE-constrained) optimization problem must be solved on-line at short sampling times. The practical on-line feasibility of these powerful techniques depends on three fundamental concepts which are addressed in this work:
1) For the on-line solution of the associated dynamic optimization problems, the repeated and expensive solution of the large-scale model must be avoided as much as possible, since it represents the main computational bottleneck. Efficient handling of unstable modes and path constraints is fundamental as well. For these reasons, in this work we consider a simultaneous approach based on the full discretization of the associated DAE-constrained optimization problems, giving rise to large but structured nonlinear programming (NLP) problems. Since the large-scale model is now embedded as algebraic constraints in the NLP, it needs to be solved only once, at the solution of the NLP.
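The simultaneous approach can be sketched on a toy problem (ours, not the industrial model): fully discretizing the scalar dynamics dx/dt = -x + u with implicit Euler embeds the model as algebraic equality constraints of an NLP. For this linear-quadratic instance the NLP collapses to a single structured KKT linear system, which is assembled and solved directly below.

```python
import numpy as np

# Full discretization of dx/dt = -x + u (implicit Euler, N intervals):
# minimize sum_k x_k^2 + r*u_k^2 subject to the discretized model,
# posed as an equality-constrained QP and solved via its KKT system.
N, h, x0, r = 20, 0.05, 1.0, 0.1
n = 2 * N                          # decision vector z = [x_1..x_N, u_1..u_N]

# Quadratic objective 0.5 z' H z
H = np.diag(np.concatenate([2.0 * np.ones(N), 2.0 * r * np.ones(N)]))

# Implicit-Euler constraints: (1+h) x_{k+1} - x_k - h u_{k+1} = x0 (k=0) or 0
A = np.zeros((N, n))
b = np.zeros(N)
for k in range(N):
    A[k, k] = 1.0 + h              # coefficient of x_{k+1}
    if k > 0:
        A[k, k - 1] = -1.0         # coefficient of x_k
    A[k, N + k] = -h               # coefficient of u_{k+1}
b[0] = x0

# KKT system [[H, A'], [A, 0]] [z; lam] = [0; b]
KKT = np.block([[H, A.T], [A, np.zeros((N, N))]])
rhs = np.concatenate([np.zeros(n), b])
sol = np.linalg.solve(KKT, rhs)
z = sol[:n]
x, u = z[:N], z[N:]
print(np.max(np.abs(A @ z - b)))   # discretized model holds at the NLP solution
```

In the genuinely nonlinear case an interior point method solves a sequence of such structured KKT systems, but the key point survives the simplification: the model appears only as constraints, so no repeated forward integration is required.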
2) For the large-scale NLPs considered, the model complexity is now concentrated in a very large and sparse linear system solved at every iteration of the optimization algorithm. Although many large applications (NLPs with several hundred thousand variables and constraints) can be solved efficiently with this approach, larger and more challenging applications require overcoming the complexity of the linear system through efficient decomposition strategies that exploit the structure of the problem. This enables the solution of previously intractable problems with standard computational resources, and significant speedups can be obtained on parallel architectures. An important observation is that, in moving horizon formulations, the NLPs solved at different sampling times have exactly the same structure. This immediately suggests reusing information (exact derivatives and profiles) from the solutions of previous NLPs to initialize and compute suboptimal but feasible solutions of subsequent problems, improving the real-time feasibility of the approach.
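The profile-reuse idea admits a very small sketch (the helper name is ours, purely illustrative): because successive moving horizon NLPs share the same structure shifted by one sampling interval, the previous optimal state and control profiles, shifted forward and padded at the end, make a natural warm start for the next solve.

```python
import numpy as np

def shift_warm_start(x_traj, u_traj):
    """Shift state/control profiles one sampling interval forward,
    repeating the last entry to fill the end of the new horizon."""
    x_init = np.concatenate([x_traj[1:], x_traj[-1:]])
    u_init = np.concatenate([u_traj[1:], u_traj[-1:]])
    return x_init, u_init

# Previous horizon's (toy) optimal profiles
x_prev = np.array([1.0, 0.8, 0.6, 0.5])
u_prev = np.array([-0.4, -0.3, -0.2, -0.1])
x_init, u_init = shift_warm_start(x_prev, u_prev)
print(x_init)  # [0.8 0.6 0.5 0.5]
```

In practice one would also reuse exact first and second derivatives and active-set or barrier information, but even the shifted profiles alone typically cut the iteration count of the subsequent NLP substantially.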
3) The use of equation-oriented modeling platforms equipped with automatic differentiation tools and interfaced to robust and efficient large-scale nonlinear programming algorithms has made possible the solution of challenging applications. Moreover, the flexibility of the optimization algorithm implementation allows the relatively straightforward realization of novel decomposition strategies. In this work we present the large-scale nonlinear interior point algorithm IPOPT 3.0 as a robust and flexible algorithm for the solution of these challenging NLP problems.
Finally, the practical applicability and potential of the presented computational framework are assessed on a large-scale industrial application. The dynamic process model considered comprises around 10,000 ordinary differential equations and 60,000 algebraic equations. Numerical issues, implementation details and preliminary results pointing towards the on-line feasibility of the approach are discussed.