(235c) Dynamic Real Time Optimization with Embedded Closed-Loop Lyapunov Stabilizing MPC | AIChE

Authors 

MacKinnon, L. - Presenter, McMaster University
Swartz, C., McMaster University
Sundaresan Ramesh, P., McMaster University
Mhaskar, P., McMaster University
The effective control and optimization of chemical plants is important to ensure safe, stable, and economically competitive plant operation. To achieve the necessary control and to ensure reliable operation, it is common in modern plants to ascribe control and optimization functions to separate levels, with a degree of coordination between them. These layers can include (Dynamic) Real-Time Optimization ((D)RTO), Model Predictive Control (MPC), and Proportional-Integral-Derivative (PID) control. This work focuses on the (D)RTO and MPC layers.

A key challenge in the control of chemical plants is ensuring stable operation of systems which are inherently unstable. Typically, this is accomplished through stabilizing techniques at the MPC level; a review of many common techniques is provided by Mayne et al. (2000). Of particular note are the related endpoint penalty and endpoint constraint stabilizing MPC methods, presented in Michalska and Mayne (1993) and Muske and Rawlings (1993). In this methodology, the MPC is forced to move the plant to the desired steady state by constraining the system at the end of the prediction horizon to be at or near the set-point. The paradigm which forms the basis of this work is that of Lyapunov MPC, specifically of the form presented by Mhaskar et al. (2005) and Mhaskar et al. (2006). In this technique, a Lyapunov constraint is included which serves to continuously drive the system toward the desired set-point or a region near it.
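As an illustrative sketch (the notation here is assumed for exposition, not taken verbatim from the cited papers), a Lyapunov-based MPC of this type solves a finite-horizon tracking problem subject to a contraction constraint on a control Lyapunov function V:

```latex
\begin{aligned}
\min_{u(\cdot)} \quad & \int_{t_k}^{t_k+T} \Big( \|x(\tau)-x_{sp}\|_Q^2 + \|u(\tau)-u_{sp}\|_R^2 \Big)\, d\tau \\
\text{s.t.} \quad & \dot{x} = f(x) + g(x)\,u, \qquad u \in \mathcal{U}, \\
& L_f V\big(x(t_k)\big) + L_g V\big(x(t_k)\big)\, u(t_k) \;\le\; L_f V\big(x(t_k)\big) + L_g V\big(x(t_k)\big)\, h\big(x(t_k)\big),
\end{aligned}
```

where h(x) is an explicit stabilizing control law (e.g., a Sontag-type formula). The last constraint requires the MPC to decrease V at least as fast as h would, so the closed-loop system inherits the stability region of h.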

When assigning plant control and optimization to separate layers, it is important to design each layer effectively for its particular task. Tosukhowong et al. (2004) use a reduced-order model for the DRTO layer, executed at a relatively low frequency, while employing a linear model at the MPC level at a much higher frequency. This promotes effective economic optimization while allowing rapid control to keep the plant in safe operation. Ellis and Christofides (2014) include both dynamics and control criteria at the DRTO level, emulating an Economic MPC (EMPC), while allowing a more rapid MPC to execute at the lower level. Swartz and Kawajiri (2019) provide a review of dynamic optimization and its role in the control hierarchy.

An important development in the use of DRTO in the control hierarchy is the inclusion of a direct model of the underlying MPC layer by Jamaludin and Swartz (2015). In this paradigm, the DRTO predicts the behavior of both the plant and the MPC controlling it, thereby improving its overall prediction of the MPC-plant system and providing set-points which are optimal for that system rather than for the plant alone. This closed-loop (CL) DRTO was shown to be more effective than a similar DRTO which did not account for the MPC behavior. Li and Swartz (2019) extended this technique to distributed MPC systems, with the DRTO modelling each MPC separately alongside the overall plant dynamics. Ramesh et al. (2021) combined this CL DRTO method with a stabilizing MPC, extending the improved performance of the CL DRTO to applications which are open-loop (OL) unstable. Specifically, they used an endpoint penalty and endpoint constraint technique which was directly modelled by the CL DRTO. This system was shown to effectively stabilize OL unstable systems and to outperform a DRTO with no MPC modelling.
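Schematically (again with illustrative notation), the closed-loop DRTO is a bilevel program: the economic layer optimizes over the set-point trajectory passed to the controller, while the MPC subproblems appear as inner optimization problems along the DRTO prediction horizon:

```latex
\begin{aligned}
\min_{y_{sp,0},\ldots,y_{sp,N-1}} \quad & \Phi_{econ}\big(x_0,\ldots,x_N,\, u_0,\ldots,u_{N-1}\big) \\
\text{s.t.} \quad & x_{j+1} = F(x_j, u_j), \qquad j = 0,\ldots,N-1, \\
& u_j \in \arg\min_{v} \; \phi_{MPC}\big(v;\, x_j,\, y_{sp,j}\big), \qquad j = 0,\ldots,N-1,
\end{aligned}
```

where F denotes the plant prediction model and each arg-min constraint represents an embedded MPC execution. Replacing each inner MPC by its optimality conditions collapses this bilevel structure into a single-level problem.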

In this work, the embedding of a stabilizing MPC within the CL DRTO methodology is extended to a Lyapunov-based MPC (LMPC). As in the earlier approach, the LMPC is directly modelled by the CL DRTO: the LMPC is replaced by its first-order Karush-Kuhn-Tucker (KKT) conditions, which are then included as constraints in the DRTO. However, the Lyapunov MPC developed by Mhaskar et al. (2005) uses a nonlinear plant model. This yields a non-convex optimization problem, and the first-order KKT conditions of a non-convex problem are not sufficient for optimality. The LMPC is therefore adapted here to be convex quadratic: a linear model is used in the plant prediction of the MPC, and the Lyapunov constraint (which continues to use a nonlinear model) is applied only at the first time step of the MPC. This convex LMPC is then embedded in the CL DRTO through its KKT conditions, creating a single-level mathematical program with complementarity constraints (MPCC). The complementarity constraints are handled with an exact penalty approach (Ralph and Wright, 2004).
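For a convex quadratic inner MPC, this reformulation can be sketched generically (a standard KKT/penalty sketch, not the authors' exact formulation). An inner QP of the form min over v of (1/2) vᵀHv + cᵀv subject to Av ≤ b is replaced by its KKT conditions,

```latex
Hv + c + A^{\top}\lambda = 0, \qquad b - Av \ge 0, \qquad \lambda \ge 0, \qquad \lambda^{\top}(b - Av) = 0,
```

and the bilinear complementarity condition is removed as a constraint and instead added to the DRTO objective as the exact penalty term ρ λᵀ(b − Av). Since both factors are nonnegative, a sufficiently large ρ drives the product to zero at the solution, so the MPCC solution is recovered from a smooth single-level NLP.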

Both the new Lyapunov MPC formulation and the CL DRTO with LMPC prediction are then tested for stability and performance on a multi-input-multi-output (MIMO) jacketed CSTR case study. The results are compared to those obtained with the endpoint penalty and endpoint constraint MPC, and the CL DRTO with that MPC prediction, developed by Ramesh et al. (2021). In standalone operation, the LMPC performs similarly to the endpoint penalty MPC when the set-point is close to the linearization point, but can also reach steady states far from the linearization point that the endpoint penalty MPC cannot. The LMPC thus outperforms the endpoint penalty MPC as the set-point moves away from the linearization point, because the nonlinear model in the Lyapunov constraint ensures stability despite the LMPC's use of a linear plant prediction model. This suggests that the LMPC has a larger region of stability than a strictly linear endpoint penalty MPC.

When a target-tracking objective is used, the CL DRTO with LMPC outperforms, in terms of reduction of sum-of-squared-errors (SSE), both the standalone LMPC and the CL DRTO with endpoint penalty MPC. This improvement is slight when the target is near the linearization point, as both MPC methods are able to stabilize the system there. The improvement is more substantial when the target is far from the linearization point. In this scenario, the CL DRTO with endpoint penalty MPC oscillates around the target, likely because the CL DRTO attempts to compensate for the inability of the endpoint penalty MPC to reach this steady state. In contrast, the CL DRTO with LMPC quickly reaches steady state at the desired target with minimal oscillation. When the CL DRTO instead uses an economic objective function which drives the system to its upper bounds, the CL DRTO with LMPC greatly outperforms the CL DRTO with endpoint penalty MPC, both when the upper bounds are near the linearization point and when they are more distant. Specifically, the CL DRTO with endpoint penalty MPC is unable to reach steady state at the desired upper bounds, while the CL DRTO with LMPC is able to do so.

Based on these results, the convex LMPC formulation presented here is an effective stabilizing MPC method which reduces the computational cost of a nonlinear stabilizing MPC while improving on the stability properties of a linear stabilizing MPC. It is also effective when used in conjunction with a CL DRTO. This extends the applicability of CL DRTO to systems which are OL unstable, allowing improved economic optimization performance without loss of effective stabilizing control.

References

Ellis, M. and Christofides, P.D., 2014. Optimal time-varying operation of nonlinear process systems with economic model predictive control. Industrial & Engineering Chemistry Research, 53(13), pp.4991-5001.

Jamaludin, M.Z. and Swartz, C.L.E., 2015. A bilevel programming formulation for dynamic real-time optimization. IFAC-PapersOnLine, 48(8), pp.906-911.

Li, H. and Swartz, C.L.E., 2019. Dynamic real-time optimization of distributed MPC systems using rigorous closed-loop prediction. Computers & Chemical Engineering, 122, pp.356-371.

Mayne, D.Q., Rawlings, J.B., Rao, C.V. and Scokaert, P.O., 2000. Constrained model predictive control: Stability and optimality. Automatica, 36(6), pp.789-814.

Mhaskar, P., El-Farra, N.H. and Christofides, P.D., 2005. Predictive control of switched nonlinear systems with scheduled mode transitions. IEEE Transactions on Automatic Control, 50(11), pp.1670-1680.

Mhaskar, P., El-Farra, N.H. and Christofides, P.D., 2006. Stabilization of nonlinear systems with state and control constraints using Lyapunov-based predictive control. Systems & Control Letters, 55(8), pp.650-659.

Michalska, H. and Mayne, D.Q., 1993. Robust receding horizon control of constrained nonlinear systems. IEEE Transactions on Automatic Control, 38(11), pp.1623-1633.

Muske, K.R. and Rawlings, J.B., 1993. Linear model predictive control of unstable processes. Journal of Process Control, 3(2), pp.85-96.

Ralph, D. and Wright, S.J., 2004. Some properties of regularization and penalization schemes for MPECs. Optimization Methods and Software, 19(5), pp.527-556.

Ramesh, P.S., Swartz, C.L.E. and Mhaskar, P., 2021. Closed‐loop dynamic real‐time optimization with stabilizing model predictive control. AIChE Journal, 67(10), p.e17308.

Swartz, C.L.E. and Kawajiri, Y., 2019. Design for dynamic operation-A review and new perspectives for an increasingly dynamic plant operating environment. Computers & Chemical Engineering, 128, pp.329-339.

Tosukhowong, T., Lee, J.M., Lee, J.H. and Lu, J., 2004. An introduction to a dynamic plant-wide optimization strategy for an integrated plant. Computers & Chemical Engineering, 29(1), pp.199-208.