(674a) Enhancing Understanding of MPC Control Actions through Explainable AI Approaches

Authors 

Srinivasan, R. - Presenter, Indian Institute of Technology Madras
Pathak, A. - Presenter, Indian Institute of Technology Madras
Garg, A. - Presenter, McMaster University
Modern process industries consist of numerous interconnected units that interact continuously. These units must be monitored closely to boost production and improve overall plant efficiency. To achieve this, model predictive control (MPC) technology is widely employed, as it provides superior control of multivariable systems. MPC often takes actions that improve plant performance beyond what a skilled and experienced operator can achieve. However, a critical and often overlooked challenge is the role of human factors, which are pivotal to the effective functioning of MPC solutions. MPC uses a process model to compute optimized control sequences while accounting for system constraints and interdependencies among process variables during operation. This complex algorithmic structure may not align with the intuitive single-loop control mechanisms familiar to operators (Lindscheid et al., 2016). Consequently, operators can perceive MPC as a black-box model that lacks interpretability.

This comprehension challenge worsens for complex systems; there are numerous reported instances where operators have disabled MPC because they could not understand its control actions, reflecting diminished trust in the system. To address this challenge, we propose an Explainable Artificial Intelligence (XAI) based methodology to explain MPC's decisions to operators, thereby enhancing the alignment between human intuition and automated system actions.

XAI is a set of tools and techniques for generating high-quality, interpretable, intuitive, human-understandable explanations for AI predictions (Das and Rad, 2020). XAI techniques are categorized by how explanations are integrated into the model. In the intrinsic approach, explanations are inherently embedded within the AI model, as in decision trees built from hierarchical if-then rules. In the post-hoc approach, explanations are generated as a post-processing step without affecting model performance. Post-hoc methods are further divided into model-specific and model-agnostic categories (Arrieta et al., 2020). Model-specific methods are tailored to particular AI algorithms such as deep neural networks; a notable example is the Class Activation Map, which offers visual explanations for CNNs (Sun et al., 2020). Model-agnostic methods, in contrast, operate independently of the structure of the underlying AI algorithm; Limit-based explanations for monitoring (LEMON) is one such method, as it does not rely on the model's internal processing or representation (Bhakte et al., 2023).

This taxonomy provides a broad overview of XAI techniques and guides the selection of a method appropriate to a specific domain. In the MPC setting, the controller's complexity often leads to decisions that do not align with the operator's mental model, presenting challenges similar to those encountered with AI systems. One option is to use historical data from the MPC to train an AI surrogate model and then apply an XAI approach to that surrogate; however, this can result in the loss of critical information. Instead, the mathematical framework of the XAI method can be applied directly to the deterministic MPC model to explain its output. In this work, we adopt a post-hoc, model-agnostic approach, since the MPC is an existing model that we wish to explain.
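As a minimal sketch of this second route (with an assumed toy plant and hypothetical names, not the system or formulation studied in this work), a short-horizon MPC can be wrapped as an ordinary deterministic function mapping the measured state to the first control move, so that a post-hoc, model-agnostic XAI method can query it exactly as it would query a trained AI model:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed one-state, one-input linear plant model (illustrative only).
A, B = np.array([[0.9]]), np.array([[0.1]])
HORIZON, SETPOINT = 10, 1.0
U_MIN, U_MAX = -1.0, 1.0  # actuator constraints

def mpc_control(x0: float) -> float:
    """Return the first optimized control move for the measured state x0."""
    def cost(u_seq):
        x, J = np.array([x0]), 0.0
        for u in u_seq:
            x = A @ x + B @ np.array([u])            # predicted state update
            J += (x[0] - SETPOINT) ** 2 + 0.01 * u ** 2
        return J
    res = minimize(cost, np.zeros(HORIZON), bounds=[(U_MIN, U_MAX)] * HORIZON)
    return float(res.x[0])  # receding horizon: only the first move is applied
```

Here, mpc_control plays the role of the opaque model whose outputs are to be explained; no internal detail of the optimization is exposed to the explanation method.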

Consider an MPC model ƒ, designed on the standard principles of model predictive control. At each sampling instant, ƒ computes the control action û required to track the desired output trajectory. To understand a control action û, the XAI method is applied to generate explanations that quantify the contribution of each input variable to that action. This gives the operator valuable insight into the variables driving the MPC output and supports their understanding of the closed-loop process dynamics.
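For illustration only (the specific XAI method used in this work is not detailed here), per-variable contributions to û can be approximated with a simple one-at-a-time perturbation around a baseline operating point. A linear feedback law stands in for the MPC call purely for brevity; the same wrapper would accept the mpc_control function above or any other deterministic controller:

```python
import numpy as np

def explain_control_action(controller, z, baseline):
    """Attribute u_hat = controller(z) to the individual inputs in z.

    Contribution of input i = change in the control action when only input i
    is moved from its baseline value to its current value (a simple
    model-agnostic sensitivity; SHAP-style averaging over input orderings
    would refine this decomposition).
    """
    u_base = controller(baseline)
    contributions = {}
    for i in range(len(z)):
        z_i = baseline.copy()
        z_i[i] = z[i]                      # move one input at a time
        contributions[f"input_{i}"] = controller(z_i) - u_base
    return u_base, contributions

# Hypothetical usage: three inputs (two measured states and a setpoint error).
K = np.array([0.8, -0.3, 1.2])             # stand-in feedback gains
controller = lambda z: float(K @ z)
u_base, contrib = explain_control_action(controller,
                                          z=np.array([1.2, 0.4, 0.1]),
                                          baseline=np.zeros(3))
print(u_base, contrib)
```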

In this study, we demonstrate the effectiveness of the proposed methodology on a simple CSTR process and then on the benchmark Tennessee Eastman process. The results emphasize the value of explanations during high-stakes decision-making, thereby enhancing the trust of operators and other end-users in control system actions. More broadly, XAI techniques can provide explanations not only for black-box AI models but also for complex control systems, improving industrial performance.

References

Arrieta, A.B., Díaz-Rodríguez, N., Ser, J.D., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., Herrera, F., 2020. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012

Bhakte, A., Chakane, M., Srinivasan, R., 2023. Alarm-based Explanations of Process Monitoring Results from Deep Neural Networks. Computers & Chemical Engineering 179, 108442. https://doi.org/10.1016/j.compchemeng.2023.108442

Das, A., Rad, P., 2020. Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey. ArXiv abs/2006.11371.

Lindscheid, C., Bremer, A., Haßkerl, D., Tatulea-Codrean, A., Engell, S., 2016. A Test Environment to Evaluate the Integration of Operators in Nonlinear Model-Predictive Control of Chemical Processes. IFAC-PapersOnLine 49, 129–134. https://doi.org/10.1016/j.ifacol.2016.12.202

Sun, K.H., Huh, H., Tama, B.A., Lee, S.Y., Jung, J.H., Lee, S., 2020. Vision-Based Fault Diagnostics Using Explainable Deep Learning With Class Activation Maps. IEEE Access 8, 129169–129179. https://doi.org/10.1109/ACCESS.2020.3009852