(674a) Enhancing Understanding of MPC Control Actions through Explainable AI Approaches
AIChE Annual Meeting
2024
Computing and Systems Technology Division
10B: AI/ML Modeling, Optimization and Control Applications I
Thursday, October 31, 2024 - 12:30pm to 12:46pm
This comprehension challenge worsens for complex systems, and there are numerous reported instances of operators disabling MPC when they cannot understand its control actions, diminishing trust in the control system. To address this comprehension challenge, we propose an Explainable Artificial Intelligence (XAI) based methodology to explain MPC's decisions to operators, thereby enhancing alignment between human intuition and automated system actions.
XAI is a set of tools and techniques for generating high-quality, interpretable, intuitive, human-understandable explanations for AI predictions (Das and Rad, 2020). XAI techniques are categorized by how the explanation is integrated into the model. Explanations can be inherently embedded within the AI model, known as the intrinsic approach, as in decision trees employing hierarchical if-then rules. Alternatively, explanations can be introduced as a post-processing step without impacting model performance, termed the post-hoc approach. Post-hoc methods are further divided into model-specific and model-agnostic categories (Arrieta et al., 2020). Model-specific methods are tailored to particular AI algorithms such as DNNs; a notable example is the Class Activation Map, which offers visual explanations for CNNs (Sun et al., 2020). Conversely, model-agnostic methods function independently of the structure of the AI algorithm. Limit-based explanations for monitoring (LEMON) exemplifies a model-agnostic method, as it operates independently of the model's internal processing and representation (Bhakte et al., 2023).
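To make the post-hoc, model-agnostic idea concrete, the following is a minimal sketch of occlusion-style attribution, a generic technique of this category (not the specific LEMON method cited above): each input feature is replaced, one at a time, with a baseline value, and the resulting drop in the prediction is recorded as that feature's contribution. The model is queried only through its prediction function and never inspected internally.

```python
import numpy as np

def occlusion_attributions(predict, x, baseline=None):
    """Post-hoc, model-agnostic attribution: replace one feature at a
    time with a baseline value and record how much the prediction changes.
    The model is accessed only through `predict`, never inspected."""
    baseline = np.zeros_like(x) if baseline is None else baseline
    ref = predict(x)
    scores = np.empty(len(x))
    for i in range(len(x)):
        x_occ = x.copy()
        x_occ[i] = baseline[i]            # "remove" feature i
        scores[i] = ref - predict(x_occ)  # its contribution to the output
    return scores

# Toy black box: a linear model standing in for any AI predictor.
model = lambda x: 3.0 * x[0] - 2.0 * x[1] + 0.5 * x[2]
print(occlusion_attributions(model, np.array([1.0, 2.0, -1.0])))
# recovers each w_i * x_i: [3.0, -4.0, -0.5]
```

Because the explainer only needs `predict`, the same code would run unchanged against a DNN, a decision tree, or, as discussed next, a deterministic controller.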
The aforementioned taxonomy provides a broad overview of XAI techniques for selecting the method best suited to a specific domain. In the MPC setting, the controller's complexity often leads to decisions that may not align with the operator's mental model, presenting challenges similar to those encountered in AI systems. One option is to train an AI model on historical data from the MPC and then apply an XAI approach to that surrogate; however, this may result in the loss of critical information. To address this, we instead apply the mathematical framework of the XAI method directly to the deterministic MPC model and explain its output. In this work, we employ a post-hoc, model-agnostic method because the MPC is an existing model that we wish to explain.
Let us consider an MPC model ƒ designed on the standard principles of model predictive control. The model ƒ computes the control action û at each sampling instant to achieve the desired output trajectory. To understand the control action û made by the MPC, the XAI method is employed to provide explanations for û, highlighting the contribution of each input variable to the computed action. This gives the operator valuable insight into the variables influencing the MPC output and aids in understanding the dynamics of the closed-loop process.
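A minimal sketch of this idea follows, under stated assumptions: the controller below is a simple linear state-feedback law used only as a stand-in for the MPC model ƒ (the abstract does not specify the controller internals), and the explanation is a one-at-a-time perturbation of the measured inputs that estimates how strongly each one drives the control action û. The explainer treats the controller purely as a black box, consistent with the post-hoc, model-agnostic approach described above.

```python
import numpy as np

# Stand-in for the MPC law f: linear state feedback u = K (r - x).
# This is an illustrative assumption; the explainer never looks inside it.
K = np.array([0.8, 0.3, -0.1])
controller = lambda x, r: float(K @ (r - x))

def input_sensitivities(ctrl, x, r, eps=1e-5):
    """Estimate du/dx_i for each measured variable by perturbing the
    controller inputs one at a time (post-hoc, model-agnostic)."""
    u = ctrl(x, r)
    grads = np.zeros(len(x))
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        grads[i] = (ctrl(xp, r) - u) / eps  # local influence of x[i] on u
    return grads

x = np.array([1.0, 2.0, 0.5])   # current measurements
r = np.array([1.5, 2.0, 0.0])   # setpoints
print(input_sensitivities(controller, x, r))
# for this linear stand-in, recovers approximately -K
```

An operator-facing explanation would rank the measured variables by these sensitivities, indicating which process variable most strongly drove the current control move.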
In this study, we will demonstrate the effectiveness of the proposed methodology on a simple CSTR process, followed by the benchmark Tennessee Eastman Process. The results will emphasize the value of explanations during high-stakes decision-making, thereby enhancing the trust of operators and other end-users in control system actions. In this way, XAI techniques provide explanations not only for black-box AI models but also for complex control systems, enhancing industrial performance.
References
Arrieta, A.B., Díaz-Rodríguez, N., Ser, J.D., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., Herrera, F., 2020. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
Bhakte, A., Chakane, M., Srinivasan, R., 2023. Alarm-based Explanations of Process Monitoring Results from Deep Neural Networks. Computers & Chemical Engineering 179, 108442. https://doi.org/10.1016/j.compchemeng.2023.108442
Das, A., Rad, P., 2020. Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey. ArXiv abs/2006.11371.
Lindscheid, C., Bremer, A., Haßkerl, D., Tatulea-Codrean, A., Engell, S., 2016. A Test Environment to Evaluate the Integration of Operators in Nonlinear Model-Predictive Control of Chemical Processes. IFAC-PapersOnLine 49, 129–134. https://doi.org/10.1016/j.ifacol.2016.12.202
Sun, K.H., Huh, H., Tama, B.A., Lee, S.Y., Jung, J.H., Lee, S., 2020. Vision-Based Fault Diagnostics Using Explainable Deep Learning With Class Activation Maps. IEEE Access 8, 129169–129179. https://doi.org/10.1109/ACCESS.2020.3009852