(15d) Enhancing the Resilience of Process Operations: Integrating Adversarially Robust Real-Time Optimization and Control with Adaptive Learning
AIChE Annual Meeting
2024
Computing and Systems Technology Division
10C: Design and Operations Under Uncertainty
Sunday, October 27, 2024 - 4:33pm to 4:54pm
RTO plays a crucial role in the process operation hierarchy by determining optimal set-points for the lower-level controllers. However, at the control layer, these set-points may be difficult to track because of disturbances and noise. One remedy is to robustify the controllers against these perturbations; however, this adds complexity to an already resource-constrained control layer. To tackle this challenge, our recent work introduced the Adversarially Robust Real-Time Optimization and Control (ARRTOC) algorithm [1]. Drawing inspiration from adversarial machine learning [4, 5], ARRTOC utilises the RTO layer to identify set-points that are inherently robust to implementation errors, such as disturbances, thereby alleviating demands on the resource-limited controllers. ARRTOC handles the dual problem of optimality and operability seamlessly as part of a robust online RTO algorithm, the details of which can be found in [1]. By design, the chosen set-point is insensitive to potential disturbances and noise. This concept is best illustrated visually in Figures 1a and 1b, where the performance of a controller around two candidate set-points is compared: the global optimum (scenario 1, in blue) and the adversarially robust optimum (scenario 2, in red). We observe that, paradoxically, operating at the adversarially robust optimum yields a 30% larger mean objective value than operating at the global optimum, owing to its inherent robustness.
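The max-min idea behind the adversarially robust optimum can be sketched on a toy one-dimensional objective (all functions and numbers below are illustrative stand-ins, not the process model from [1]): the robust set-point maximizes the worst objective value over an implementation-error ball, so a broad, slightly lower peak can beat a sharp global one.

```python
import numpy as np

def worst_case_value(f, x, delta, n_grid=101):
    """Worst-case objective over the implementation-error ball [x - delta, x + delta]."""
    pts = np.linspace(x - delta, x + delta, n_grid)
    return min(f(p) for p in pts)

def robust_setpoint(f, candidates, delta):
    """Max-min selection: pick the candidate whose worst-case value is largest."""
    return max(candidates, key=lambda x: worst_case_value(f, x, delta))

# Toy objective: a sharp global peak at x = 1 and a broad, flatter peak at x = 4.
f = lambda x: np.exp(-50 * (x - 1.0) ** 2) + 0.8 * np.exp(-0.5 * (x - 4.0) ** 2)

candidates = np.linspace(0.0, 6.0, 601)
nominal = max(candidates, key=f)                     # global optimum, near x = 1
robust = robust_setpoint(f, candidates, delta=0.3)   # robust optimum, near x = 4
```

Under a set-point implementation error of 0.3, the sharp peak's worst case collapses while the broad peak barely degrades, which is why the robust selection moves to the flatter optimum.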
However, to operate successfully, ARRTOC relies on an accurate steady-state model of the system under consideration [6]. Indeed, this model dependency and the resulting challenge of plant-model mismatch are common to all existing RTO formulations [7]. To address this, in this work, we extend ARRTOC with Adaptive Gaussian Process Learning (AGPL) [2,3]. AGPL employs Gaussian Process (GP) regression to learn the underlying steady-state model of the objective function and overcomes plant-model mismatch through dynamic GP adaptation. Additionally, GPs, being a non-parametric modelling approach, possess a unique advantage: the ability to address structural mismatch in models, which is typically beyond the capabilities of current state-of-the-art approaches such as model-parameter adaptation or modifier adaptation [7].
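As a rough sketch of the GP-adaptation idea (with hypothetical one-dimensional stand-ins for the plant and the mismatched model, and a minimal NumPy GP in place of a full GP library), the GP learns the plant-model residual from steady-state measurements and corrects the model's predictions; because the correction is non-parametric, it absorbs structural as well as parametric mismatch:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two 1D input arrays."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-4):
    """GP posterior mean at x_test given noisy observations (x_train, y_train)."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    return Ks @ np.linalg.solve(K, y_train)

# Hypothetical 1D stand-ins: the "plant" and a structurally mismatched model.
plant = lambda x: -(x - 2.0) ** 2 + 4.0
model = lambda x: -(x - 1.0) ** 2 + 3.0

# Fit the GP to plant-model residuals at measured operating points.
x_meas = np.linspace(0.0, 4.0, 9)
residuals = plant(x_meas) - model(x_meas)

x_query = np.array([2.0])
corrected = model(x_query) + gp_posterior_mean(x_meas, residuals, x_query)
```

The corrected prediction at the query point recovers the plant value, even though the nominal model is wrong there.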
We illustrate the algorithm's capabilities through a case study of a multi-loop evaporator process [1]. In this study, ARRTOC customises controller set-points for each loop based on anticipated disturbances, and AGPL minimizes plant-model disparities, ensuring an accurate solution is obtained. This extended framework showcases increased adaptability and robustness, effectively handling uncertainties at both RTO and control layers.
The system consists of a feed stream containing two components: a solute dissolved in a volatile solvent. Heat is supplied via a steam line to evaporate the solvent, and the evaporator has both vapour and liquid outlets. Our control strategy employs three PI controllers. The primary controlled variable is the solute composition in the liquid product stream, which is controlled via the steam temperature. The liquid level is controlled by manipulating the liquid product stream flowrate, and the evaporator pressure is controlled by adjusting the vapour flowrate. Controller tuning was performed using a modified version of the sequential relay auto-tuning method combined with a derivative-free optimizer (BOBYQA). The system model can be found in [1]. The RTO goal is to find set-points for the states which maximize the profit of the process subject to operational and safety constraints. The objective function and constraints are depicted in Figure 2: the "true" system in Figure 2a and the mismatched model equivalent in Figure 2b. The stark difference between the two plots reflects the model mismatch.
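Each of the three loops performs a set-point tracking task of the kind a minimal discrete PI loop illustrates below. The plant, gains, and time constant here are illustrative stand-ins (the actual controllers were tuned via relay auto-tuning with BOBYQA, as noted above):

```python
def simulate_pi(setpoint, Kp, Ki, n_steps=400, dt=0.1, tau=2.0):
    """Discrete PI loop driving a first-order plant dx/dt = (-x + u)/tau
    toward the set-point (a hypothetical stand-in for one evaporator loop)."""
    x, integral = 0.0, 0.0
    for _ in range(n_steps):
        error = setpoint - x
        integral += error * dt
        u = Kp * error + Ki * integral   # PI control law
        x += dt * (-x + u) / tau         # plant update (explicit Euler step)
    return x

final = simulate_pi(setpoint=1.0, Kp=2.0, Ki=0.5)
```

The integral term drives the steady-state tracking error to zero; whether a set-point is reachable cheaply in the presence of disturbances is exactly what ARRTOC accounts for at the RTO layer.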
We consider four scenarios to demonstrate the importance of adopting both ARRTOC and GP adaptation. For all scenarios, we run simulations of 100 timesteps. Depending on the scenario, we solve the RTO problem with the assumed model of the system using either nominal RTO or the ARRTOC algorithm. The chosen set-points are then sent to the PI controllers and data is collected from the system. For the scenarios that employ GP adaptation, this data is used to correct the model every 10 timesteps and the RTO problem is re-solved. We perform 10 simulations per scenario and report the mean profit delivered over the length of the simulation in Figure 3. Scenarios 1 and 2 serve as performance benchmarks, defining the best- and worst-case situations. Scenario 1, the best case, represents ARRTOC solved with perfect model knowledge (Figure 2a): it accounts for the underlying controller robustness via ARRTOC and returns an accurate RTO solution since there is no model mismatch. It is depicted as a horizontal dashed green line in Figure 3. Scenario 2, the worst case, represents nominal RTO solved with the incorrect model (Figure 2b) and no adaptation: it neither accounts for the underlying controller design nor addresses the plant-model mismatch. It is depicted as a horizontal dashed red line in Figure 3. Scenarios 3 and 4 demonstrate the significance of employing ARRTOC and GP adaptation in tandem. Scenario 3 (ARRTOC with GP adaptation) is shown as a solid green line in Figure 3, while Scenario 4 (nominal RTO with GP adaptation) is shown as a solid red line; both start with the incorrect model of Figure 2b. Clearly, ARRTOC with adaptation outperforms nominal RTO with adaptation, tending towards the best-case performance.
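The adapt-every-10-timesteps cycle can be sketched as follows, with hypothetical one-dimensional profit functions and a simple linear residual fit deliberately standing in for the GP correction (a simplification of the actual algorithm, which also applies the ARRTOC robustification at each re-solve):

```python
import numpy as np

plant = lambda x: -(x - 2.0) ** 2 + 4.0   # hypothetical "true" profit
model = lambda x: -(x - 1.0) ** 2 + 3.0   # mismatched steady-state model

grid = np.linspace(0.0, 4.0, 401)
setpoint = grid[np.argmax(model(grid))]   # initial RTO solve on the wrong model

x_hist, r_hist = [], []
for t in range(1, 101):
    # Operate at the set-point with a small probing dither; log the residual.
    x = setpoint + 0.1 * np.sin(t)
    x_hist.append(x)
    r_hist.append(plant(x) - model(x))
    if t % 10 == 0:
        # Every 10 timesteps: fit the residual correction (linear stand-in
        # for the GP) and re-solve the RTO with the corrected model.
        A = np.vander(np.array(x_hist), 2)
        coef = np.linalg.lstsq(A, np.array(r_hist), rcond=None)[0]
        corrected = model(grid) + np.polyval(coef, grid)
        setpoint = grid[np.argmax(corrected)]
```

In this toy setting the residual happens to be exactly linear, so the first adaptation already moves the set-point from the mismatched model's optimum to the plant optimum; with a GP the same loop handles residuals of unknown structure.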
This arises from the following distinction: while both approaches address the plant-model mismatch through GP adaptation, nominal RTO lacks an essential element, which is accounting for the underlying controller robustness. In the absence of such consideration, nominal RTO may lead to the selection of a set-point that is incompatible with the controllers, akin to the concept illustrated in Figure 1a and b. On the contrary, ARRTOC ensures that the chosen set-points align with the underlying controller design, while GP adaptation diligently maintains model accuracy. This synergistic combination results in superior performance and more effective operations in real-world applications.
In conclusion, the ARRTOC algorithm, together with the AGPL extension, addresses the fundamental challenges of managing implementation errors and plant-model mismatch in a holistic manner. ARRTOC's ability to identify set-points robust to disturbances and noise, combined with AGPL's dynamic Gaussian Process adaptation to overcome plant-model disparities, contributes to more adaptable and resilient process operations. The successful integration of these machine learning techniques with process systems engineering offers a promising pathway towards enhanced operability and precision in real-time control strategies, ultimately facilitating more efficient and sustainable process operations.
[1] A. Ahmed, E. A. del Rio-Chanona, M. Mercangöz, "ARRTOC: Adversarially Robust Real Time Optimization and Control," arXiv preprint, arXiv:2309.04386, 2023.
[2] A. Ahmed, M. Zagorowska, E. A. del Rio-Chanona, M. Mercangöz, "Application of Gaussian processes to online approximation of compressor maps for load-sharing in a compressor station," European Control Conference (ECC), 2022.
[3] E. A. del Rio-Chanona, P. Petsagkourakis, B. Eric, J. E. A. Graciano, B. Chachuat, "Real-time optimization meets Bayesian optimization and derivative-free optimization: A tale of modifier adaptation," Computers & Chemical Engineering, vol. 147, 2021.
[4] D. Bertsimas, O. Nohadani, K. M. Teo, "Robust optimization for unconstrained simulation-based problems," Operations Research, vol. 58, 2010.
[5] I. Bogunovic et al., "Adversarially Robust Optimization with Gaussian Processes," Advances in Neural Information Processing Systems 31 (NeurIPS), 2018.
[6] B. Chachuat, B. Srinivasan, D. Bonvin, "Adaptation strategies for real-time optimization," Computers & Chemical Engineering, vol. 33, 2009.
[7] D. F. Mendoza, J. E. A. Graciano, F. S. Liporace, G. Carrilo Le Roux, "Assessing the reliability of different real-time optimization methodologies," The Canadian Journal of Chemical Engineering, vol. 94, 2015.