(172e) Interpretability for Moving Toward Verification of Advanced and Data-Driven Control
AIChE Annual Meeting
2020
2020 Virtual AIChE Annual Meeting
Computing and Systems Technology Division
Advances in Process Control
Monday, November 16, 2020 - 9:00am to 9:15am
Motivated by the above considerations, this work investigates how interpretability may be incorporated within optimization-based control, and in particular within economic model predictive control (EMPC). Two cases are considered: EMPC with a time-invariant first-principles model (which may still compute non-intuitive control actions in pursuit of profit), and EMPC with a neural network process model that is updated using process data as the process conditions change over time. The initial focus is on defining cases in which the actions of the EMPC would not be interpretable, and on defining interpretability in several mathematical contexts (e.g., interpretability based on pattern recognition, or on the directionality of the process response when the input trajectories take certain directions). We then analyze how the EMPC formulation may be modified to make it more interpretable with respect to these different metrics. Subsequently, we investigate how neural network interpretability techniques from the literature (e.g., methods that analyze which nodes are activated for certain inputs to the network, and how sensitive the network outputs are to changes in certain weights [3,5]) could be used to make the changes in the control actions over time, as the process model changes, understandable to a human in light of the model updates and the process data, so that an operator can assess whether these actions are appropriate.
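As an illustrative sketch (not taken from the work itself), the Python snippet below shows the two diagnostics mentioned above for an assumed, hypothetical one-hidden-layer surrogate model x_next = f(x, u): recording which hidden nodes are strongly activated for a given state/input pair, and estimating the sensitivity of the prediction to a perturbation of a single weight via finite differences. All dimensions, weights, and thresholds are placeholder assumptions.

import numpy as np

# Illustrative only: a tiny feedforward surrogate model x_next = f(x, u).
# Dimensions, weights, and thresholds are hypothetical placeholders.
rng = np.random.default_rng(0)
n_x, n_u, n_h = 2, 1, 8                      # states, inputs, hidden nodes (assumed)
W1 = rng.standard_normal((n_h, n_x + n_u))   # input-to-hidden weights
b1 = rng.standard_normal(n_h)
W2 = rng.standard_normal((n_x, n_h))         # hidden-to-output weights
b2 = rng.standard_normal(n_x)

def predict(x, u, W1=W1, b1=b1, W2=W2, b2=b2):
    """One-step-ahead prediction with a single tanh hidden layer."""
    z = W1 @ np.concatenate([x, u]) + b1     # hidden-layer pre-activations
    h = np.tanh(z)                           # hidden-layer activations
    return W2 @ h + b2, h

x = np.array([0.5, -0.2])                    # current state (example values)
u = np.array([1.0])                          # candidate control action

# 1) Node-activation diagnostic: which hidden nodes respond strongly
#    to this particular (state, input) pair?
x_next, h = predict(x, u)
active_nodes = np.flatnonzero(np.abs(h) > 0.5)
print("predicted next state:", x_next)
print("strongly activated hidden nodes:", active_nodes)

# 2) Weight-sensitivity diagnostic: finite-difference sensitivity of the
#    prediction to a single hidden-to-output weight W2[i, j].
i, j, eps = 0, 3, 1e-6
W2_pert = W2.copy()
W2_pert[i, j] += eps
x_next_pert, _ = predict(x, u, W2=W2_pert)
sensitivity = (x_next_pert - x_next) / eps
print(f"d x_next / d W2[{i},{j}] ~=", sensitivity)

In the data-driven EMPC setting described above, diagnostics of this kind would be recomputed whenever the network weights are re-identified from new process data, so that changes in activation patterns or sensitivities could be presented to an operator alongside the changed control actions.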
[1] M. Ellis, H. Durand, and P. D. Christofides. "A tutorial review of economic model predictive control methods." Journal of Process Control, 24:1156-1178, 2014.
[2] H. Durand. "Responsive economic model predictive control for next-generation manufacturing." Mathematics, 8:259, 38 pages, 2020.
[3] S. Chakraborty, R. Tomsett, R. Raghavendra, D. Harborne, M. Alzantot, F. Cerutti, M. Srivastava, A. Preece, S. Julier, R. M. Rao, T. D. Kelley, D. Braines, M. Sensoy, C. J. Willis, and P. Gurram. "Interpretability of deep learning models: A survey of results." In Proceedings of the IEEE Smart World Congress, San Francisco, CA, 2017.
[4] Q. Zhang, Y. N. Wu, and S.-C. Zhu. "Interpretable convolutional neural networks." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 8827-8836, 2018.
[5] L. H. Gilpin, D. Bau, B. Z. Yuan, A. Bajwa, M. Specter, and L. Kagal. "Explaining explanations: An overview of interpretability of machine learning." In Proceedings of the 2018 IEEE 5th International Conference on Data Science and Advanced Analytics, Turin, Italy, 80-89, 2018.