
(150a) Near-Optimal Output Feedback Control of Dynamic Processes

Authors 

Dahl-Olsen, H. - Presenter, Norwegian University of Science and Technology
Narasimhan, S. - Presenter, Norwegian University of Science and Technology

Optimal operation of chemical processes can in general be formulated as a dynamic optimization problem. For many economic optimization problems a quasi-steady-state description is acceptable and steady-state models may be used; for example, this is the case for most real-time optimization implementations. Other processes, however, are transient in nature, and their dynamic behavior must be considered (Schlegel et al., 2005). Such transient processes include start-up and shut-down of continuous plants, grade changes and batch operations. The solution of such a problem is discontinuous in nature and consists of a set of arcs (Srinivasan et al., 2003a).
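
For concreteness, the problem class can be written as a terminal-cost (Mayer) problem; the notation below is a generic sketch, not taken verbatim from the cited works:

    \min_{u(t)} \; \phi\bigl(x(t_f)\bigr) \quad \text{s.t.} \quad \dot{x} = f(x, u), \quad x(0) = x_0, \quad g(x, u) \le 0

where the path constraints g switch between active and inactive along the horizon, which is what generates the arc structure of the solution.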

In most cases the optimal solution should not be implemented in an open-loop manner, because of uncertainty and unknown disturbances. It may be assumed that for small disturbances the structure of the optimal solution stays the same; by structure we mean the number of arcs and their qualitative shapes. The optimal solution is often very sensitive to the switching times, so an open-loop implementation with time-controlled switches between the arcs may easily lead to infeasibility or to severe loss relative to the true optimal solution. Two paradigms exist for implementing near-optimal control:

  1. on-line optimization, with feedback from measurements mainly used to update the model (MPC)

  2. off-line optimization and on-line implementation, using feedback to track the optimal properties of the solution ("self-optimizing control")

We follow the second
option here (Narasimhan and Skogestad, 2007), and concentrate on
obtaining properties of the optimal solution that ensure near-optimal
control in spite of disturbances.

For steady-state processes, the maximum gain rule is a good screening method for finding promising controlled variables within the self-optimizing control paradigm (Skogestad and Postlethwaite, 2005). The maximum gain rule has previously been available only for steady-state processes with linear process descriptions. This work suggests an extension to general unsteady-state systems whose operating policy is dynamic in nature, as found in batch unit operations in the chemical process industries.
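
For reference, the steady-state rule may be summarized as follows (a compact paraphrase of Skogestad and Postlethwaite, 2005, in our notation): with G the linear gain from the unconstrained inputs to the candidate controlled variables, and S_1, S_2 scalings accounting for the expected optimal variation, the implementation error and the input effects, one prefers the candidate set c that maximizes the minimum singular value of the scaled gain,

    \max_{c} \; \underline{\sigma}(G'), \qquad G' = S_1 G S_2,

since the worst-case loss is roughly inversely proportional to \underline{\sigma}(G')^2.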

The idea is that the performance of the plant can be described (locally) by a convex objective function, which can be approximated in a region around the nominally optimal operating point by a quadratic Taylor polynomial. The same can be done for the Hamiltonian of a nonlinear dynamic system by expansion around the nominal trajectories. For the selection problem it is assumed that the inputs and states are unconstrained, because for optimal operation the active constraints should be controlled first; self-optimizing control can then be used to assign objectives to the remaining degrees of freedom. In self-optimizing control for static systems, the loss is defined as the difference between the value of the objective function for the implemented input and the value obtained when the optimal input for the given disturbance is applied; this number is nonnegative. In dynamic systems, the loss is a function of time and is defined as the difference between the value of the Hamiltonian for the implemented input and its optimal value. The Hamiltonian takes a constant value along the optimal trajectory for problems in which time does not occur explicitly in the objective functional (Naidu, 2002).
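
A sketch of this expansion, in standard optimal-control notation (cf. Naidu, 2002): along an unconstrained optimal arc the stationarity condition H_u = 0 holds, so to second order the dynamic loss reduces to a quadratic form in the input deviation,

    \mathrm{Loss}(t) = H(x, u, \lambda, t) - H(x, u^{\mathrm{opt}}, \lambda, t) \approx \tfrac{1}{2}\, \delta u^{T} H_{uu}\, \delta u \ge 0,

where \delta u = u - u^{\mathrm{opt}} and nonnegativity follows from the Legendre-Clebsch condition H_{uu} \succeq 0.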

For each arc in the open-loop solution of the dynamic optimization problem it is assumed that there is a corresponding self-optimizing control structure. That is, for each arc there should be as many self-optimizing variables available to control as there are remaining degrees of freedom once the active constraints have been taken care of. It is shown that, using the theory of neighboring optimal control and the LQ regulator problem for time-varying systems, it is possible to determine a time-varying gain matrix M which relates measurements and inputs. It follows from a simple analysis of the Taylor expansion of the Hamiltonian that optimal variable selection minimizes the norm of a time-varying matrix incorporating both the sensitivity of the Hamiltonian to measurement variations and the time-varying process gains. Because there should be only one self-optimizing control structure for each arc, the minimization can be done over the time-averaged 2-norm of the scaled time-varying gain matrix, where optimization and implementation errors in the sense of Skogestad and Postlethwaite (2005) have been included through scaling.
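
A plausible formalization of the resulting screening criterion (our notation; the scalings are understood in the sense of Skogestad and Postlethwaite, 2005) is to select the candidate set of controlled variables c that minimizes

    \min_{c} \; \frac{1}{t_f} \int_{0}^{t_f} \bigl\| M_s(t) \bigr\|_2 \, dt,

where M_s(t) is the scaled time-varying gain matrix for that candidate set. A minimal numerical sketch of this screening step, assuming the scaled gain trajectories have already been computed on a uniform time grid (the function and variable names below are ours, purely illustrative):

    import numpy as np

    def time_averaged_gain_norm(M_traj, dt):
        """(1/t_f) * integral of ||M_s(t)||_2 over [0, t_f], trapezoidal rule.
        M_traj: ndarray of shape (nt, ny, nu), scaled gains on a uniform grid."""
        norms = np.array([np.linalg.norm(M, 2) for M in M_traj])  # spectral norm per time point
        t_f = (len(norms) - 1) * dt
        return np.trapz(norms, dx=dt) / t_f

    def select_candidate(candidates, dt):
        """Rank candidate CV sets; the smallest time-averaged norm wins.
        candidates: dict mapping a label to its scaled gain trajectory."""
        scores = {name: time_averaged_gain_norm(M, dt) for name, M in candidates.items()}
        return min(scores, key=scores.get), scores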

The method for variable identification will be demonstrated on a fed-batch bioreactor problem with four states, where the objective is to maximize the product concentration at the end of the batch. The same example has been used in the context of NCO tracking (Srinivasan et al., 2003b).
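
To fix ideas, a minimal simulation sketch of a four-state fed-batch bioreactor of this kind (biomass X, substrate S, product P, volume V, with feed rate F as input and substrate-inhibited Haldane growth kinetics). The model structure follows the common benchmark form, but all parameter values below are illustrative placeholders, not the data of Srinivasan et al. (2003b):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative placeholder parameters, not the benchmark's actual values
    mu_m, K_m, K_i = 0.5, 1.0, 20.0   # Haldane growth kinetics
    nu, Y_x, Y_p = 0.5, 0.5, 1.2      # production rate and yield coefficients
    S_in = 20.0                        # substrate concentration in the feed

    def rhs(t, z, F):
        """States: biomass X, substrate S, product P, volume V; feed rate F."""
        X, S, P, V = z
        mu = mu_m * S / (K_m + S + S**2 / K_i)  # substrate-inhibited growth rate
        D = F / V                               # dilution rate
        return [mu * X - D * X,
                -mu * X / Y_x - nu * X / Y_p + D * (S_in - S),
                nu * X - D * P,
                F]

    # Simulate one candidate (constant) feed policy over the batch horizon
    sol = solve_ivp(rhs, (0.0, 8.0), [1.0, 0.5, 0.0, 120.0], args=(0.5,))
    X, S, P, V = sol.y[:, -1]
    print(f"final product concentration P(t_f) = {P:.3f}")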

References

Martin Schlegel, Klaus Stockmann, Thomas Binder and Wolfgang Marquardt (2005): Dynamic optimization using adaptive control vector parameterization. Comp. Chem. Eng. (29), pp. 1731-1751.

B. Srinivasan, D. Bonvin and S. Palanki (2003a): Dynamic optimization of batch processes I. Characterization of the nominal solution. Comp. Chem. Eng. (27), pp. 1-26.

Sigurd Skogestad and Ian Postlethwaite (2005): Multivariable Feedback Control: Analysis and Design, Wiley & Sons Ltd., Chichester.

Desineni Subbaram Naidu (2002): Optimal Control Systems, CRC Press, Boca Raton.

Sridharakumar Narasimhan and Sigurd Skogestad (2007): Implementation of optimal operation using off-line computations, DYCOPS07 (submitted).

B. Srinivasan, D. Bonvin, E. Visser and S. Palanki (2003b): Dynamic optimization of batch processes II. Role of measurements in handling uncertainty. Comp. Chem. Eng. (27), pp. 27-44.