(298c) Quantifying the Invertibility of Neural Networks and Their Transformations
2022 AIChE Annual Meeting
Computing and Systems Technology Division
Advances in Computational Methods and Numerical Analysis - I
Tuesday, November 15, 2022 - 1:08pm to 1:27pm
We extend these tools to analyze the invertibility of transformations between neural networks [3], including those arising from pruning. To this end, we formulate and solve optimization problems, cast as mixed-integer programs (MIP), that quantify, in several different norms, the "safety" of the current operating point from temporal / functional noninvertibility [4, 5].
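As a concrete illustration (our sketch of one plausible formulation, not necessarily the exact program of the talk), the "safety" of an operating point x* of a ReLU network N can be posed as the distance to its nearest distinct preimage:

\min_{x} \; \|x - x^\ast\|_1 \quad \text{s.t.} \quad N(x) = N(x^\ast), \quad \|x - x^\ast\|_\infty \ge \varepsilon_0,

where \varepsilon_0 > 0 excludes the trivial solution x = x^\ast; a large optimal value certifies that N is "safely" invertible near x^\ast, while a small one flags nearby noninvertibility. Each ReLU unit z = \max(0, w^\top x + b) becomes mixed-integer linear through the standard big-M encoding used in [4]:

z \ge w^\top x + b, \quad z \ge 0, \quad z \le w^\top x + b + M(1 - a), \quad z \le M a, \quad a \in \{0, 1\},

valid whenever M bounds |w^\top x + b| over the input region of interest.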
(This work was carried out in part in collaboration with Profs. G. Pappas and M. Morari at the University of Pennsylvania.)
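For concreteness, here is a minimal runnable sketch of the program above for a random single-hidden-layer ReLU network, using the gurobipy interface; the network, box bounds, and big-M value are illustrative assumptions, not artifacts of the talk.

import numpy as np
import gurobipy as gp
from gurobipy import GRB

# Illustrative single-hidden-layer network N(x) = W2 @ relu(W1 @ x + b1) + b2
rng = np.random.default_rng(0)
n_in, n_hid = 2, 8
W1 = rng.standard_normal((n_hid, n_in)); b1 = rng.standard_normal(n_hid)
W2 = rng.standard_normal((n_in, n_hid)); b2 = rng.standard_normal(n_in)

x_star = np.zeros(n_in)  # operating point whose "safety" margin we probe
y_star = W2 @ np.maximum(W1 @ x_star + b1, 0.0) + b2

M = 100.0    # big-M; must bound |pre-activations| and |x - x_star| over the box
eps0 = 1e-3  # excludes the trivial preimage x == x_star

m = gp.Model("noninvertibility_margin")
x = m.addMVar(n_in, lb=-10.0, ub=10.0, name="x")
z = m.addMVar(n_hid, lb=0.0, ub=M, name="z")         # hidden post-activations
a = m.addMVar(n_hid, vtype=GRB.BINARY, name="a")     # ReLU phase indicators
t = m.addMVar(n_in, lb=0.0, name="t")                # |x - x_star|, for the 1-norm
s = m.addMVar(2 * n_in, vtype=GRB.BINARY, name="s")  # enforce x != x_star

# Big-M encoding of z = max(0, W1 @ x + b1), as in [4]
pre = W1 @ x + b1
m.addConstr(z >= pre)
m.addConstr(z <= pre + M * (1 - a))
m.addConstr(z <= M * a)

# Same output as the operating point: N(x) = N(x_star)
m.addConstr(W2 @ z + b2 == y_star)

# Linearize the 1-norm objective: t >= |x - x_star| componentwise
m.addConstr(t >= x - x_star)
m.addConstr(t >= x_star - x)

# Disjunction forcing ||x - x_star||_inf >= eps0 (a genuinely distinct preimage)
for i in range(n_in):
    m.addConstr(x[i] - x_star[i] >= eps0 - M * (1 - s[i]))
    m.addConstr(x[i] - x_star[i] <= -eps0 + M * (1 - s[n_in + i]))
m.addConstr(s.sum() >= 1)

m.setObjective(t.sum(), GRB.MINIMIZE)
m.optimize()

if m.Status == GRB.OPTIMAL:
    print("nearest distinct preimage:", x.X, " 1-norm margin:", m.ObjVal)
else:
    print("no second preimage in the box: N is invertible there")

Solving the same program around many operating points, or with the 1-norm replaced by other norms, maps out the safety margins referred to above.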
[1] N. Gicquel, J.S. Anderson, and I.G. Kevrekidis. Noninvertibility and resonance in discrete-time neural networks for time-series processing. Physics Letters A, 238(1):8–18, 1998.
[2] R. Rico-Martinez, I.G. Kevrekidis, and R.A. Adomaitis. Noninvertibility in neural networks. IEEE International Conference on Neural Networks, pages 382–386, vol. 1, 1993.
[3] T. Bertalan, F. Dietrich, and I.G. Kevrekidis. Transformations between deep neural networks. arXiv preprint arXiv:2007.05646, 2020.
[4] V. Tjeng, K. Xiao, and R. Tedrake. Evaluating robustness of neural networks with mixed integer programming. arXiv preprint arXiv:1711.07356, 2017.
[5] T.-W. Weng, H. Zhang, H. Chen, Z. Song, C.-J. Hsieh, D. Boning, I.S. Dhillon, and L. Daniel. Towards fast computation of certified robustness for ReLU networks. arXiv preprint arXiv:1804.09699, 2018.