(356b) Nonlinear Convex and Concave Relaxations for the Solutions of Parametric Ordinary Differential Equations
AIChE Annual Meeting
2009 Annual Meeting
Computing and Systems Technology Division
Dynamic Simulation and Optimization
Wednesday, November 11, 2009 - 8:50am to 9:10am
With the availability of efficient and reliable
dynamic simulation software, chemical processes are increasingly described by
detailed, typically nonlinear dynamic models. Moreover, it is often desirable
to optimize such dynamic models for the purpose of designing dynamic chemical processes,
controlling dynamic responses to disturbances, or fitting model parameters to
experimental data. Unfortunately, it is well known that dynamic models of
chemical processes are pathologically nonconvex, and optimization problems
with such models embedded often have multiple sub-optimal local minima. Yet, in many applications
it is imperative that the global optimum be found. At the very least, there are
potentially large economic losses associated with implementing a sub-optimal
design or control strategy for a chemical process. Moreover, sub-optimal
solutions are inadequate for feasibility tests of dynamic problems, in which one wishes
to minimize the violation of a process constraint representing environmental
regulations or unsafe operating conditions, because only the global minimum
provides a rigorous certificate. In parameter estimation problems,
the possibility of finding sub-optimal solutions prevents concrete conclusions
from being drawn about the validity of a model through statistical significance
tests. In spite of the need for global solutions for many dynamic optimization
problems, existing algorithms can only find guaranteed global optima for
dynamic problems of modest size with reasonable computational effort. This work
describes an improved algorithm for the global solution of optimization
problems with ordinary differential equations (ODEs) embedded, which is enabled
by a novel method for constructing convex and concave relaxations for the
solutions of these differential equations.
Procedures for constructing a convex underestimating
function (convex relaxation) for a given function are central to many
algorithms for global optimization. Since every local minimum of a convex
function is a global minimum, these procedures can be used to compute a
rigorous lower bound on the global solution of an optimization problem; in
conjunction with a spatial branch-and-bound algorithm, such bounds make it
possible to locate the global optimum by solving a sequence of convex sub-problems. For
functions which can be written explicitly in terms of binary arithmetic
operations and standard univariate functions, such as powers, exponentials and
trigonometric functions, many methods exist for constructing convex
underestimating functions. However, none of these methods are directly
applicable to dynamic optimization problems because the functions involved are
not only functions of the decision variables, but also of the state variables
described by the embedded ODEs, and the dependence of the state variables on
the decision variables is not known explicitly, except for very simple dynamic
models. Nonetheless, using existing methods, the task of generating convex sub-problems
for dynamic optimization problems can be reduced to the task of deriving both
convex and concave relaxations for the state variables themselves with respect
to the decision variables. There exist a small number of methods in the
literature which can accomplish this, yet all of these are unsatisfactory
because either the resulting convex and concave functions poorly approximate
the dependence of the state variables on the decision variables, resulting in many
iterations of the branch-and-bound algorithm and long computation times, or the
relaxations themselves are expensive to compute. Furthermore, the majority of
these methods are only capable of producing affine or constant underestimators
and overestimators for the state variables, as opposed to general convex and
concave relaxations, which significantly weakens these methods when
applied to highly nonlinear ODEs.
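As one standard example of the factorable relaxation techniques referenced above (the classical McCormick envelopes of a bilinear term, which are well established and not specific to this work), the product $xy$ on a box $[x^L, x^U] \times [y^L, y^U]$ admits the convex underestimator and concave overestimator

$$xy \;\ge\; \max\{\, y^L x + x^L y - x^L y^L,\;\; y^U x + x^U y - x^U y^U \,\},$$
$$xy \;\le\; \min\{\, y^L x + x^U y - x^U y^L,\;\; y^U x + x^L y - x^L y^U \,\},$$

and relaxations of general factorable functions are obtained by composing envelopes of this kind through the arithmetic operations and univariate functions that define the expression.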
A novel theory is presented to construct convex and
concave relaxations for the parametric solutions of a general system of
ODEs. The parameters of interest may specify initial conditions and/or appear
in the right-hand side functions of the differential equations. Sufficient
conditions on the right-hand side functions and initial conditions of an
auxiliary system of ODEs are identified which ensure that the solutions of this
auxiliary system are convex and concave relaxations of the state variables of
the original system over a given interval of parameter values, for each fixed
value of the independent variable. Furthermore, a constructive procedure is
given for generating such an auxiliary system from an arbitrary system of ODEs
satisfying standard existence and uniqueness assumptions. This procedure is based
on a novel extension of McCormick's relaxation theory, which is computationally
inexpensive and easily automatable. Finally, it is proven that using these
relaxations in conjunction with a spatial branch-and-bound procedure results in
an algorithm which is guaranteed to locate the global optimum of a dynamic
optimization problem to within any given tolerance in finitely many iterations.
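Schematically, and with notation introduced here purely for illustration (the symbols $c$, $C$, $u$, $o$ below are not taken from the talk itself), the original parametric system and the auxiliary relaxation system take the form

$$\dot{x}(t,p) = f\big(t,p,x(t,p)\big), \qquad x(t_0,p) = x_0(p), \qquad p \in P,$$
$$\dot{c}(t,p) = u\big(t,p,c(t,p),C(t,p)\big), \qquad c(t_0,p) = c_0(p),$$
$$\dot{C}(t,p) = o\big(t,p,c(t,p),C(t,p)\big), \qquad C(t_0,p) = C_0(p),$$

where the sufficient conditions on $(u, o, c_0, C_0)$ guarantee that, for each fixed $t$, $c(t,\cdot)$ is convex and $C(t,\cdot)$ is concave on $P$, with $c(t,p) \le x(t,p) \le C(t,p)$ for all $p \in P$.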
This relaxation theory has several advantages over
previous efforts. First, the relaxations generated by this method are generally
nonlinear convex and concave functions as opposed to affine or constant
functions, so it is possible to generate good approximations for highly
nonlinear dynamic systems. Indeed, preliminary experiments with this method
show that the relaxations are nearly always tighter than those generated by
other methods. Moreover, these relaxations are inexpensive to evaluate
compared to the original state variables: doing so requires only the
numerical solution of a system of ODEs four times larger than the
original. Finally, it is simple to generate the required auxiliary ODE system
computationally by using the operator overloading capabilities of any object-oriented
programming language. Due to the effectiveness of this new approach, it is
expected that larger, more practical dynamic optimization problems can be
solved to global optimality using these relaxations than was possible with
previously developed methods.
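As a minimal sketch of the kind of operator overloading alluded to above, the following Python class propagates interval bounds together with convex and concave relaxation values through additions and multiplications using the standard McCormick rules. The class name, fields, and example are illustrative assumptions, not the implementation described in this work.

```python
from dataclasses import dataclass

@dataclass
class MC:
    """Interval bounds [lo, hi] and convex (cv) / concave (cc) relaxation
    values of a factorable expression at one fixed point. Illustrative
    sketch only; textbook McCormick rules, not any particular library."""
    lo: float
    hi: float
    cv: float
    cc: float

    def __add__(self, other):
        # Sums of convex (concave) functions are convex (concave).
        return MC(self.lo + other.lo, self.hi + other.hi,
                  self.cv + other.cv, self.cc + other.cc)

    def __mul__(self, other):
        # McCormick product rule: relax the four bilinear-envelope cuts,
        # choosing each factor's cv or cc value according to the sign of
        # the constant bound that multiplies it. (A full implementation
        # would also cut cv and cc back to the interval bounds.)
        xL, xU, yL, yU = self.lo, self.hi, other.lo, other.hi
        cv = max(min(yL * self.cv, yL * self.cc)
                 + min(xL * other.cv, xL * other.cc) - xL * yL,
                 min(yU * self.cv, yU * self.cc)
                 + min(xU * other.cv, xU * other.cc) - xU * yU)
        cc = min(max(yL * self.cv, yL * self.cc)
                 + max(xU * other.cv, xU * other.cc) - xU * yL,
                 max(yU * self.cv, yU * self.cc)
                 + max(xL * other.cv, xL * other.cc) - xL * yU)
        corners = [xL * yL, xL * yU, xU * yL, xU * yU]
        return MC(min(corners), max(corners), cv, cc)

# Example: relax f(x, y) = x*y + x over [0, 2] x [1, 3], evaluated at (1, 2).
x = MC(0.0, 2.0, 1.0, 1.0)   # an original variable is its own cv and cc
y = MC(1.0, 3.0, 2.0, 2.0)
print(x * y + x)             # prints bounds and relaxation values of f at (1, 2)
```

The construction described above extends this kind of arithmetic to the right-hand side functions of an ODE system rather than to an explicit algebraic objective.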