(182d) Physics-Constrained Deep Learning of Unmodeled Physics in Systems Governed By Stochastic Differential Equations
AIChE Annual Meeting
2021 Annual Meeting
Computing and Systems Technology Division
CAST Director's Student Presentation Award Finalists (Invited Talks)
Monday, November 8, 2021 - 4:15pm to 4:30pm
We propose a moment-matching strategy for training deep neural networks to learn constitutive equations that represent unmodeled physics in SDEs. The first step is to collect state trajectory data over time under various input profiles using either an experimental system or a high-fidelity simulator. Since the system evolution is inherently stochastic, the "experiment" must be repeated multiple times over a finite time horizon so that moment trajectories can be estimated. Using the known structure of the SDE, we can apply established uncertainty propagation methods (e.g., the unscented transform) to predict the moment trajectories over time for fixed neural network parameters; the unknown weight and bias values represent the unmodeled physics. To train these unknown neural network parameters, we first construct a loss function from the predicted and measured moments and then develop an efficient training algorithm that leverages recent advances in automatic differentiation and stochastic gradient descent (SGD) [9, 10].
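The workflow above can be sketched on a toy scalar SDE. Everything below is an illustrative assumption rather than the paper's implementation: a single drift parameter theta stands in for the neural network, "measured" moments come from an ensemble of Euler-Maruyama realizations (the repeated experiments), predicted moments come from the scalar unscented transform, and a finite-difference gradient stands in for automatic differentiation.

```python
import numpy as np

# Toy scalar linear SDE:  dx = -theta * x dt + sigma dW.
# The parameter theta plays the role of the unmodeled physics.
dt, n_steps, sigma = 0.01, 100, 0.5

def mc_moments(theta, n_traj=50_000, seed=0):
    """'Measured' mean/variance trajectories from repeated simulations."""
    rng = np.random.default_rng(seed)
    x = np.ones(n_traj)                        # all replicates start at x = 1
    means, variances = [], []
    for _ in range(n_steps):
        x = x - theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_traj)
        means.append(x.mean())
        variances.append(x.var())
    return np.array(means), np.array(variances)

def ut_moments(theta, kappa=2.0):
    """Predicted moment trajectories via the scalar unscented transform:
    propagate three sigma points through the deterministic drift map,
    then add the diffusion contribution sigma^2 * dt to the variance."""
    m, v = 1.0, 0.0
    w0, w1 = kappa / (1.0 + kappa), 0.5 / (1.0 + kappa)
    means, variances = [], []
    for _ in range(n_steps):
        s = np.sqrt((1.0 + kappa) * max(v, 0.0))
        pts = np.array([m, m + s, m - s])      # sigma points
        prop = pts - theta * pts * dt          # drift step at each sigma point
        m = w0 * prop[0] + w1 * (prop[1] + prop[2])
        v = (w0 * (prop[0] - m) ** 2
             + w1 * ((prop[1] - m) ** 2 + (prop[2] - m) ** 2)
             + sigma ** 2 * dt)
        means.append(m)
        variances.append(v)
    return np.array(means), np.array(variances)

# "Measured" data generated with the true parameter theta = 1
meas_m, meas_v = mc_moments(theta=1.0)

def loss(theta):
    """Moment-matching loss over the full mean and variance trajectories."""
    pred_m, pred_v = ut_moments(theta)
    return np.mean((pred_m - meas_m) ** 2) + np.mean((pred_v - meas_v) ** 2)

# Recover theta from a deliberately wrong initial guess by plain
# gradient descent (finite differences in place of autodiff).
theta, lr, eps = 0.3, 2.0, 1e-4
for _ in range(200):
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad
```

Because the drift is linear here, the unscented transform propagates the first two moments exactly, so the fitted theta converges to the value used to generate the data up to sampling noise in the measured moments.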
We demonstrate the efficacy of the proposed framework on an in silico three-dimensional system of self-assembling DNA-functionalized colloids that has shown enormous promise for sensing and photonics applications [11]. This particular system is especially prone to kinetic arrest due to the complexity of its underlying and competing energetic driving forces, which include repulsive interactions among the underlying silica particles, repulsive interactions due to single-stranded DNA (ssDNA) chain overlap, and attractive interactions due to ssDNA hybridization. Specifically, we use our previously reported autoencoder-based dimensionality reduction framework to discover a set of order parameters that describe the state of the self-assembly system [12]. We next apply the proposed neural network-based moment-matching strategy to learn the free energy and diffusion landscapes within a low-dimensional Langevin equation that describes the self-assembly dynamics [13]. Finally, we use these landscapes to analyze the relative importance of various kinetic traps and demonstrate how changes in external conditions can be used to avoid these traps and reach target structures.
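As a rough illustration of the kind of low-dimensional Langevin model referenced above, the sketch below simulates a single order parameter on a hand-picked double-well free-energy landscape with constant diffusion. The functions F and D and all parameter values are placeholder assumptions standing in for the neural-network-parameterized landscapes learned in the paper.

```python
import numpy as np

# Overdamped Langevin dynamics of one order parameter x on a double-well
# free energy F(x) with a constant diffusion landscape D(x):
#   dx = -(D / kT) F'(x) dt + sqrt(2 D) dW.
# (D is constant here, so no spurious-drift correction term is needed.)
# Each well plays the role of a competing structure; a basin the
# trajectory settles into is the 1-D analogue of a kinetic trap.
kT = 0.2
F = lambda x: (x ** 2 - 1.0) ** 2              # minima at x = -1 and x = +1
dFdx = lambda x: 4.0 * x * (x ** 2 - 1.0)      # analytic gradient of F
D = lambda x: 0.1                              # constant placeholder diffusion

def simulate(x0, n_steps=20_000, dt=1e-3, seed=0):
    """Euler-Maruyama integration of the overdamped Langevin equation."""
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(n_steps):
        x = (x - D(x) * dFdx(x) / kT * dt
             + np.sqrt(2.0 * D(x) * dt) * rng.standard_normal())
    return x

# Starting near the barrier top, the trajectory relaxes into one of the
# two wells; which one it reaches depends on the noise realization.
final = simulate(x0=0.05)
```

With the barrier height at 5 kT, the trajectory stays near one minimum once it commits; raising kT (i.e., changing external conditions) makes barrier crossings frequent, which is the mechanism by which such models inform strategies for escaping kinetic traps.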
References
(1) Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378, 686-707.
(2) Raissi, M., & Karniadakis, G. E. (2018). Hidden physics models: Machine learning of nonlinear partial differential equations. Journal of Computational Physics, 357, 125-141.
(3) Raissi, M. (2018). Deep hidden physics models: Deep learning of nonlinear partial differential equations. The Journal of Machine Learning Research, 19(1), 932-955.
(4) Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2018). Multistep neural networks for data-driven discovery of nonlinear dynamical systems. arXiv preprint arXiv:1801.01236.
(5) Yang, Y., & Perdikaris, P. (2019). Adversarial uncertainty quantification in physics-informed neural networks. Journal of Computational Physics, 394, 136-152.
(6) Zhang, D., Lu, L., Guo, L., & Karniadakis, G. E. (2019). Quantifying total uncertainty in physics-informed neural networks for solving forward and inverse stochastic problems. Journal of Computational Physics, 397, 108850.
(7) Yang, L., Meng, X., & Karniadakis, G. E. (2021). B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data. Journal of Computational Physics, 425, 109913.
(8) Van Kampen, N. G. (1976). Stochastic differential equations. Physics Reports, 24(3), 171-228.
(9) Baydin, A. G., Pearlmutter, B. A., Radul, A. A., & Siskind, J. M. (2018). Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research, 18.
(10) Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., ... & Zheng, X. (2016). TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16) (pp. 265-283).
(11) Pretti, E., Zerze, H., Song, M., Ding, Y., Mahynski, N. A., Hatch, H. W., ... & Mittal, J. (2018). Assembly of three-dimensional binary superlattices from multi-flavored particles. Soft Matter, 14(30), 6303-6312.
(12) O'Leary, J., Mao, R., Pretti, E. J., Paulson, J. A., Mittal, J., & Mesbah, A. (2021). Deep learning for characterizing the self-assembly of three-dimensional colloidal systems. Soft Matter, 17(4), 989-999.
(13) Tang, X., Rupp, B., Yang, Y., Edwards, T. D., Grover, M. A., & Bevan, M. A. (2016). Optimal feedback controlled assembly of perfect crystals. ACS Nano, 10(7), 6791-6798.