(177b) Potential for Explainability in Industrially Deployed MPC

Authors 

Garg, A. - Presenter, McMaster University
Srinivasan, R. - Presenter, Indian Institute of Technology Madras


The advent of digitalization in the chemical process industry has brought Artificial Intelligence (AI) into the spotlight, as it offers the potential to optimize operations, increase efficiency, and enhance safety. However, the inherent lack of explainability and transparency in AI systems poses a challenge, making it difficult for non-technical end-users, particularly plant operators, to comprehend the black-box nature of AI-driven decisions. To address this issue, recent years have witnessed the development of novel techniques collectively referred to as Explainable Artificial Intelligence (XAI). XAI methods are pivotal in making AI model results understandable to end-users by providing visual, rule-based, feature-importance, or example-based explanations (Arrieta et al., 2020).

In industry, model predictive control (MPC) is a widely accepted technology that leverages a process model to compute optimized control sequences while accounting for system constraints and the interdependencies among process variables. Although MPC is not entirely a black box, its complexity often results in decisions that may not align with the intuitive control mechanisms familiar to operators (Lindscheid et al., 2016), thus presenting challenges similar to those of AI systems. To overcome these challenges, we study the explainability of MPC results generated for operators, thus bridging the gap between operator understanding and MPC decisions.
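To fix ideas, the receding-horizon optimization referred to above can be sketched in a generic form as below; the model f, horizon N, weights Q and R, and the constraint bounds are illustrative placeholders rather than the formulation of any specific industrial MPC:

\begin{aligned}
\min_{u_0,\ldots,u_{N-1}} \quad & \sum_{k=0}^{N-1}\left( \lVert x_{k} - x^{\mathrm{ref}} \rVert_Q^2 + \lVert u_{k} - u_{k-1} \rVert_R^2 \right) \\
\text{s.t.} \quad & x_{k+1} = f(x_k, u_k), \qquad x_0 = x(t), \\
& x_{\min} \le x_k \le x_{\max}, \qquad u_{\min} \le u_k \le u_{\max},
\end{aligned}

where only the first optimized input u_0 is applied before the problem is re-solved at the next sampling instant. It is this interplay of objective terms and constraints, rather than a simple error-feedback rule, that can make individual MPC moves unintuitive to operators.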

MPC, in its pursuit of process efficiency, often makes intricate decisions that prove challenging for operators to comprehend. Because it operates on optimization principles, its methodology is not inherently intuitive for non-technical personnel such as operators. In certain situations, operators may mistakenly perceive MPC as malfunctioning and switch to regulatory or manual control even when it is functioning correctly. This misinterpretation can move the process away from optimal operation, increasing operational cost, and significantly add to the operators' workload, particularly in complex industrial settings. The decision to deactivate MPC often arises from the operator's difficulty in grasping the underlying logic of MPC's decision-making, and it is exacerbated when operators are not fully acquainted with MPC's optimization objectives or control strategies. For instance, an operator may work in the upstream section of a process while the MPC optimizes its objective using variables from the downstream section, which may not be straightforward for the upstream operator to comprehend. Such scenarios may also offer the MPC designer an opportunity to revisit the modeling and optimization framework and make modifications if required. There is therefore a compelling need for MPC systems to maximize transparency, elucidating their operational processes and decision-making mechanisms to enhance mutual understanding between operators and the MPC system. Hence, we use XAI methodologies to interpret MPC decisions.

XAI techniques can be classified based on how the explanation is integrated into the model. Explanations may be an inherent part of a specific machine learning model, known as the intrinsic approach; notable examples include decision trees, which rely on hierarchical if-then rules. In contrast, explanations can be introduced as a post-processing step without affecting the model's performance, referred to as the post-hoc approach. Post-hoc methods can be further categorized into model-specific and model-agnostic methods (Arrieta et al., 2020). Model-specific methods are designed for particular machine-learning algorithms such as deep neural networks (DNNs); the Class Activation Map, which provides visual explanations for convolutional neural networks (CNNs), is one such example (Sun et al., 2020). Model-agnostic methods, on the other hand, work with any type of machine-learning model. Limit-based explanations for monitoring (LEMON) is one such model-agnostic method, as it is independent of the model's internal processing and representation (Bhakte et al., 2023); it uses alarm limits to explain process monitoring results by building a local model in the vicinity of the process sample. This taxonomy provides a broad overview for selecting XAI techniques tailored to a specific domain.
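As an illustration of the post-hoc, model-agnostic idea (a LIME-style local surrogate, not the LEMON algorithm itself), the sketch below perturbs a single operating point of a black-box function, fits a proximity-weighted linear model, and reads its coefficients as local feature importances; all function names, parameters, and the toy controller are hypothetical:

import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate_explanation(black_box, x0, n_samples=500, scale=0.05, seed=0):
    """Post-hoc, model-agnostic local explanation of black_box(x) near x0.

    Perturbs the operating point, queries the black-box model, and fits a
    proximity-weighted linear surrogate whose coefficients serve as local
    feature importances (LIME-style sketch; illustrative, not LEMON).
    """
    rng = np.random.default_rng(seed)
    X = x0 + scale * rng.standard_normal((n_samples, x0.size))    # local perturbations
    y = np.array([black_box(x) for x in X])                       # black-box responses
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale ** 2)) # proximity weights
    surrogate = Ridge(alpha=1e-3).fit(X, y, sample_weight=w)      # weighted local linear model
    return surrogate.coef_                                        # local feature importances

if __name__ == "__main__":
    # Hypothetical stand-in for the first control move computed by an MPC
    def mpc_move(x):
        return 2.0 * x[0] - 0.5 * x[1] + 0.1 * x[2] ** 2
    x_now = np.array([1.0, 0.5, 2.0])
    print(local_surrogate_explanation(mpc_move, x_now))

The coefficients returned for the sample indicate which measured variables most strongly drive the controller's output at that operating point, which is the kind of information an operator-facing explanation would surface.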

In this paper, we underscore the need for explainability of MPC results and demonstrate it through a case study. We utilize a post-hoc, model-agnostic method for generating explanations, since MPC already has an existing model that we want to explain. The results from our study underline the benefits of explainability in high-stakes decision-making for non-technical end-users, such as operators. In summary, XAI techniques play a vital role in ensuring transparency and comprehensibility, not only in AI systems but also in complex control systems like MPC, ultimately improving decision-making and operational performance in industrial settings.

References

Arrieta, A.B., Díaz-Rodríguez, N., Ser, J.D., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., Herrera, F., 2020. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012

Bhakte, A., Chakane, M., Srinivasan, R., 2023. Alarm-based Explanations of Process Monitoring Results from Deep Neural Networks. Computers & Chemical Engineering 179, 108442. https://doi.org/10.1016/j.compchemeng.2023.108442

Lindscheid, C., Bremer, A., Haßkerl, D., Tatulea-Codrean, A., Engell, S., 2016. A Test Environment to Evaluate the Integration of Operators in Nonlinear Model-Predictive Control of Chemical Processes. IFAC-PapersOnLine 49, 129–134. https://doi.org/10.1016/j.ifacol.2016.12.202

Sun, K.H., Huh, H., Tama, B.A., Lee, S.Y., Jung, J.H., Lee, S., 2020. Vision-Based Fault Diagnostics Using Explainable Deep Learning With Class Activation Maps. IEEE Access 8, 129169–129179. https://doi.org/10.1109/ACCESS.2020.3009852