
(183a) On-Line Nonlinear Programming as a Generalized Equation

Authors 

Anitescu, M. - Presenter, Argonne National Laboratory


Traditional on-line nonlinear programming (NLP) strategies extend the time step (sampling time) so that the NLP can be solved to near-optimality. A problem with this approach is that it neglects the fact that the NLP solver is implicitly tracking a time-moving solution manifold. Insisting on a high degree of accuracy can therefore translate into long time steps and, consequently, into increasing distances between neighboring problems. As a result, the upper bound on the computational time is very conservative. This limits the application scope of on-line NLP to systems with slow dynamics.

Approximate on-line NLP strategies, on the other hand, try to minimize the time step by computing a cheap approximate solution within a fixed computational time. Since shortening the time step reduces the distance between neighboring problems, this can be exploited to reduce the approximation errors. These strategies are particularly attractive for systems with fast dynamics. An important issue, however, is to ensure that the approximation error remains stable.

Approximate strategies such as real-time iterations and continuation schemes have been studied previously in the context of receding-horizon control and estimation. These strategies perform a single Newton-type step at each time step. In the real-time iteration strategy reported in (Diehl et al., 2005), the model is used to predict the state at the next step, and a perturbed quadratic programming (QP) problem is solved once the true state becomes available (shifting strategy). In the absence of active-set changes, the perturbed QP reduces to a perturbed Newton step obtained from the solution of a linear system. Using this observation, it has been demonstrated that, by computing a single Newton step per time step, the approximation error remains bounded to second order with respect to the error between the predicted and the actual state. Proving this result requires a discrete-time, shrinking-horizon setting. A limitation of this analysis is that the impact of the size of the time step is lost, and the results cannot be applied directly in a more general setting. Furthermore, no error bounds have been provided for the case in which non-smoothness effects are present along the manifold (e.g., active-set changes).
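To illustrate the single-Newton-step idea, the following minimal sketch tracks the KKT conditions of a toy parametric NLP, taking exactly one Newton step of the linearized KKT system per sampling time. It is an assumption-laden illustration, not the scheme of Diehl et al.: the problem data, the variable names, and the moving parameter r(t) are all invented for this example.

```python
import numpy as np

# Toy parametric NLP (illustrative assumption, not the problem of Diehl et al.):
#   min_u 0.5*||u - u_ref||^2   s.t.  ||u||^2 = r(t)^2
# The KKT conditions F(z; r) = 0 with z = (u, lambda) are tracked with a single
# Newton step per sampling time, mimicking the real-time iteration idea.

u_ref = np.array([1.0, 0.5])

def kkt_residual(z, r):
    u, lam = z[:2], z[2]
    return np.concatenate([u - u_ref + 2.0 * lam * u, [u @ u - r**2]])

def kkt_jacobian(z, r):
    u, lam = z[:2], z[2]
    J = np.zeros((3, 3))
    J[:2, :2] = (1.0 + 2.0 * lam) * np.eye(2)   # Hessian of the Lagrangian
    J[:2, 2] = 2.0 * u                           # constraint gradient
    J[2, :2] = 2.0 * u
    return J

# Time-varying parameter r(t): the solution manifold the scheme must track.
z = np.array([1.0, 0.0, 0.0])                    # rough initial guess
for t in np.linspace(0.0, 2.0, 41):
    r = 1.0 + 0.2 * np.sin(t)
    step = np.linalg.solve(kkt_jacobian(z, r), -kkt_residual(z, r))
    z = z + step                                 # one Newton step per sample
    print(f"t={t:4.2f}  ||F||={np.linalg.norm(kkt_residual(z, r)):.2e}")
```

The fixed cost of one linear solve per sample is what makes the per-step computational time predictable; the residual then reflects how far the manifold moves between neighboring problems.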

The continuation strategy reported in (Ohtsuka, 2004) is a manifold tracking scheme in which the optimality conditions of the NLP are formulated as a differential equation. This permits a detailed numerical analysis of the approximation error as a function of the size of the time step. Sufficient conditions for the stability of the tracking error are derived; however, no order results are established. For implementation, the differential equation is linearized and discretized to derive the Newton step. The linear system is solved approximately at each step using an iterative scheme such as generalized minimum residual (GMRES). The use of an iterative linear solver is particularly attractive because, unlike a direct solver, it can be terminated early, which can significantly reduce the computational time. A limitation of this continuation strategy is that active-set changes need to be handled in an ad hoc manner by adding a fixed barrier or penalty function, which can lead to numerical instability. This limits the practical applicability of the approach and the scope of the theoretical results.
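The sketch below conveys the continuation idea in schematic form; it is not Ohtsuka's C/GMRES implementation. The optimality conditions F(z, t) = 0 are differentiated in time, the resulting linear system is solved inexactly and matrix-free with SciPy's GMRES, and the estimate is advanced by one time step. The toy problem, the stabilization constant kappa, and the step sizes are assumptions made only for illustration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Continuation-type tracking step (schematic): differentiate F(z, t) = 0 in time,
# solve F_z * zdot = -(F_t + kappa*F) only approximately, then advance z by dt.

u_ref = np.array([1.0, 0.5])

def F(z, t):
    u, lam = z[:2], z[2]
    r = 1.0 + 0.2 * np.sin(t)                 # time-varying parameter
    return np.concatenate([u - u_ref + 2.0 * lam * u, [u @ u - r**2]])

def Fz_matvec(z, t, v, eps=1e-6):
    # Jacobian-vector product by forward differences (matrix-free)
    return (F(z + eps * v, t) - F(z, t)) / eps

dt, kappa = 0.05, 10.0
z = np.array([1.0, 0.0, 0.0])
for k in range(40):
    t = k * dt
    Ft = (F(z, t + 1e-6) - F(z, t)) / 1e-6    # partial derivative in t
    rhs = -(Ft + kappa * F(z, t))
    A = LinearOperator((3, 3), matvec=lambda v: Fz_matvec(z, t, v))
    zdot, info = gmres(A, rhs, maxiter=2)     # early termination is acceptable
    z = z + dt * zdot
    print(f"t={t:4.2f}  ||F||={np.linalg.norm(F(z, t + dt)):.2e}")
```

The early termination of GMRES is the point of the sketch: an approximate linear solve per sample can be enough to keep the tracking error stable, which is the property analyzed in the continuation framework.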

In this work, we present a framework for the analysis of on-line NLP strategies based on generalized equation (GE) concepts. We demonstrate that if points along the solution manifold are consistently strongly regular, it is possible to track the manifold by solving a single linear complementarity problem (LCP) per time step. We derive sufficient conditions that guarantee that the approximation error remains bounded to second order with the size of the time step, even if the LCP is only solved to first order accuracy. This allows us to use iterative linear algebra schemes. The sufficient conditions and errors bounds also hold in the presence of active-set changes. We make use of these general results to construct a manifold tracking strategy in which the NLP is reformulated using an augmented Lagrangean function. This permits the use of a projected successive over-relaxation (PSOR) algorithm to solve the LCP at each sampling time. We demonstrate that PSOR is particularly attractive since it can perform linear algebra and active-set identification tasks simultaneously. We present a numerical case study to illustrate the developments.
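To make the PSOR ingredient concrete, the following minimal sketch applies a standard projected successive over-relaxation sweep to an LCP in the generic form w = Mz + q, w >= 0, z >= 0, z'w = 0. The matrices are randomly generated placeholders, not the augmented-Lagrangian LCP of this work; the sketch only shows how the projection identifies the active set while the sweep performs the linear algebra.

```python
import numpy as np

def psor_lcp(M, q, omega=1.3, tol=1e-8, max_iter=500):
    """Projected SOR for the LCP: find z >= 0 with w = M z + q >= 0 and z'w = 0.

    Standard projected SOR sweep (convergent for symmetric positive definite M
    and 0 < omega < 2); the projection max(0, .) identifies the active set while
    the Gauss-Seidel-style sweep performs the linear algebra.
    """
    n = len(q)
    z = np.zeros(n)
    for it in range(max_iter):
        z_old = z.copy()
        for i in range(n):
            residual = q[i] + M[i] @ z          # uses already-updated components
            z[i] = max(0.0, z[i] - omega * residual / M[i, i])
        if np.linalg.norm(z - z_old, np.inf) < tol:
            break
    return z, it + 1

# Illustrative LCP (assumed data, not from the case study in the abstract)
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
M = B @ B.T + 5.0 * np.eye(5)                    # symmetric positive definite
q = rng.standard_normal(5)

z, iters = psor_lcp(M, q)
w = M @ z + q
print("iterations:", iters)
print("complementarity z.w =", float(z @ w))     # should be ~0
print("min(z), min(w) =", z.min(), w.min())      # both should be >= 0 (up to tol)
```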

References:

M. Diehl, H. G. Bock, and J. P. Schlöder. A real-time iteration scheme for nonlinear optimization in optimal feedback control. SIAM J. Control Optim., 43(5), 1714–1736, 2005.

T. Ohtsuka. A continuation/GMRES method for fast computation of nonlinear receding horizon control. Automatica, 40, 563–574, 2004.