(222g) Enhancing Operator's Trust in AI-Based Process Monitoring Technologies: Providing Explanations for Multi-Mode Processes

In the process industries, effective handling of abnormal events entails detecting and diagnosing issues promptly, as well as providing fast and reliable decision support to operators so that they can take appropriate steps to prevent disturbances from escalating. With the advent of Industry 4.0, industrial processes now have unique opportunities to use artificial intelligence methods to detect and correct operational abnormalities. Deep learning methods for Fault Detection and Diagnosis (FDD) have received significant attention in the last few years [1]. However, the inability of black-box deep learning systems to provide human-interpretable outputs is a significant shortcoming that prevents their widespread adoption. Plant operators need intelligent systems that not only identify faults correctly but also explain the results generated by Deep Neural Networks (DNNs). Such explanations enhance the plant operator's trust in the DNN by revealing the reasons behind its predictions; the operator can then independently verify whether the predictions are trustworthy and use the DNN with confidence. We propose such a system in this work.

Explainable Artificial Intelligence (XAI) is an emerging field of research that aims at explaining the predictions of DNNs. It seeks to make predictions interpretable to humans by assigning a relevance or contribution to each input variable for a given sample. Two separate approaches have been pursued in XAI research: inherently interpretable AI methods (such as decision trees) and post-hoc methods [2]. Post-hoc approaches are used to explain pre-developed models; they regard the model as a black box and have no effect on the model's performance. Recently, XAI-based strategies have successfully explained the results generated by DNNs for FDD using different methods, including Integrated Gradients [3] and Layerwise Relevance Propagation (LRP) [4].

Previously [3], we presented a method for the interpretability of DNNs used for fault diagnosis based on the Integrated Gradients (IG) method. IG is motivated by the concept of the Shapley value [5] from cooperative game theory. It is a gradient-based attribution technique that explains a DNN by attributing its prediction to the network's inputs. The resulting attributions sum to the difference between the model's output at the target input and its output at a baseline [6]. During real-time process monitoring, these attributions are used to identify the key variables responsible for a fault. Explanations are calculated using a windowed attribution scheme to make them robust to process noise, and an inseparability metric is used to highlight the most pertinent explanatory variables.
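To make the attribution computation concrete, the sketch below shows a minimal Riemann-sum approximation of IG in PyTorch, assuming a classifier that maps a vector of process measurements to fault-class logits. The function name, interface, and step count are illustrative assumptions, not the implementation from [3].

```python
# A minimal sketch of Integrated Gradients, assuming a PyTorch classifier
# `model` that maps a vector of process measurements to fault-class logits.
# The function name, interface, and step count are illustrative.
import torch

def integrated_gradients(model, x, baseline, target_class, steps=50):
    """Approximate IG attributions for one sample via a Riemann sum.

    x and baseline are 1-D tensors of process variables with the same shape.
    The returned per-variable attributions sum (approximately) to
    model(x)[target_class] - model(baseline)[target_class].
    """
    # Interpolate along the straight path from the baseline to the sample.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)   # (steps, 1)
    path = baseline + alphas * (x - baseline)               # (steps, n_vars)
    path.requires_grad_(True)

    # Gradient of the target-class output w.r.t. each point on the path.
    outputs = model(path)[:, target_class]
    grads = torch.autograd.grad(outputs.sum(), path)[0]     # (steps, n_vars)

    # Average the path gradients and scale by the input-baseline difference.
    return (x - baseline) * grads.mean(dim=0)
```

In practice, such per-sample attributions would further be averaged over a sliding data window, per the windowed attribution scheme described above, before being presented to the operator.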

The above IG-based method and other similar explainability techniques reported in the literature are limited to processes with a single nominal operating region. This assumption often does not hold for real industrial chemical processes, where several operating modes may be prevalent even during nominal operation [7]. In such cases, the previously developed XAI methods fail to provide correct explanations even when the process is operating normally, since any deviation from the single baseline is attributed as a change. We seek to overcome this limitation in the current work.

In this work, we report a novel strategy for generating explanations of a DNN developed for multi-mode operation. The proposed methodology comprises two distinct components: (a) a bank of XAI models and (b) a real-time model selector. The bank of IG-based XAI models, one for each operating mode, is developed offline. Each XAI model relies on data and DNN results from one operating mode and uses a mode-specific baseline; it can therefore provide accurate explanations, i.e., attributions to the input process variables, when the plant operates in the corresponding mode. The model-selector component utilizes real-time process data to identify the plant's operating mode dynamically and hence determines the XAI model that should be used to obtain explanations [8]. The efficacy of the proposed methodology is demonstrated using the Tennessee Eastman challenge process [9]. In this paper, we describe the overall architecture of the proposed technique and compare the results from the multi-mode IG with those from a traditional IG.
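The sketch below illustrates one way the two components could fit together, reusing the integrated_gradients function from the earlier sketch. The variance-scaled distance used to identify the operating mode and all interface names are assumptions made for illustration; they are not the mode-identification method of [8].

```python
# A minimal sketch of the two components: a mode selector driven by recent
# process data, and a bank of mode-specific IG baselines. All names and the
# distance metric are illustrative assumptions.
import torch

class ModeSelector:
    """Identify the current operating mode from a window of recent data."""

    def __init__(self, mode_means, mode_stds):
        self.mode_means = mode_means  # dict: mode -> per-variable means (1-D tensor)
        self.mode_stds = mode_stds    # dict: mode -> per-variable stds (1-D tensor)

    def select(self, window):
        """Return the mode whose nominal region is closest to the window."""
        center = window.mean(dim=0)
        distances = {
            mode: torch.sum(((center - mu) / self.mode_stds[mode]) ** 2).item()
            for mode, mu in self.mode_means.items()
        }
        return min(distances, key=distances.get)

def explain(model, window, target_class, selector, baselines, steps=50):
    """Select the mode-specific baseline, then attribute the latest sample."""
    mode = selector.select(window)
    x = window[-1]  # most recent sample in the data window
    attributions = integrated_gradients(model, x, baselines[mode],
                                        target_class, steps)
    return mode, attributions
```

A traditional single-baseline IG corresponds to calling integrated_gradients with one fixed baseline regardless of the operating mode; the comparison reported here contrasts these two configurations.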

References

[1] Z. Jiao, P. Hu, H. Xu, and Q. Wang, “Machine Learning and Deep Learning in Chemical Health and Safety: A Systematic Review of Techniques and Applications,” ACS Chem. Health Saf., vol. 27, no. 6, pp. 316–334, 2020, doi: 10.1021/acs.chas.0c00075.

[2] F. K. Došilović, M. Brčić, and N. Hlupić, “Explainable artificial intelligence: A survey,” in 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2018, pp. 210–215, doi: 10.23919/MIPRO.2018.8400040.

[3] A. Bhakte, V. Pakkiriswamy, and R. Srinivasan, “An Explainable Artificial Intelligence Based Approach for Interpretation of Fault Classification Results from Deep Neural Networks,” Chem. Eng. Sci., p. 117373, 2021, doi: 10.1016/j.ces.2021.117373.

[4] P. Agarwal, M. Tamer, and H. Budman, “Explainability: Relevance based dynamic deep learning algorithm for fault detection and diagnosis in chemical processes,” Comput. Chem. Eng., vol. 154, p. 107467, 2021, doi: 10.1016/j.compchemeng.2021.107467.

[5] L. S. Shapley, “17. A Value for n-Person Games,” in Contributions to the Theory of Games (AM-28), Volume II, Princeton University Press, 2016, pp. 307–318, doi: 10.1515/9781400881970-018.

[6] M. Sundararajan, A. Taly, and Q. Yan, “Axiomatic attribution for deep networks,” in 34th International Conference on Machine Learning, ICML 2017, 2017, vol. 7, pp. 5109–5118. [Online]. Available: https://arxiv.org/abs/1703.01365

[7] S. J. Zhao, J. Zhang, and Y. M. Xu, “Monitoring of Processes with Multiple Operating Modes through Multiple Principal Component Analysis Models,” Ind. Eng. Chem. Res., vol. 43, no. 22, pp. 7025–7035, Oct. 2004, doi: 10.1021/ie0497893.

[8] R. Srinivasan, P. Viswanathan, H. Vedam, and A. Nochur, “A framework for managing transitions in chemical plants,” Comput. Chem. Eng., vol. 29, no. 2, pp. 305–322, 2005, doi: 10.1016/j.compchemeng.2004.09.024.

[9] J. J. Downs and E. F. Vogel, “A plant-wide industrial process control problem,” Comput. Chem. Eng., vol. 17, no. 3, pp. 245–255, Mar. 1993, doi: 10.1016/0098-1354(93)80018-I.
