(191k) Transfer Learning of Graph Neural Networks As a General Approach to Accelerate Computational Catalysis Modeling
AIChE Annual Meeting
2022 Annual Meeting
Engineering Sciences and Fundamentals
Faculty Candidates in CoMSEF/Area 1a, Session 2
Monday, November 14, 2022 - 5:30pm to 5:42pm
An accurate understanding of the surface-adsorbate interaction is critical to designing novel heterogeneous catalysts for various applications. The recent development of graph neural network (GNN) models, together with large-scale catalysis datasets such as the Open Catalyst 2020 (OC20) dataset, has greatly improved the accuracy of machine learning surrogate models for catalytic thermodynamics, at a prediction cost far below that of quantum chemistry calculations. However, training such models requires a large amount of DFT-validated data, which may not be readily available for many domain-specific problems, especially when the data must be calculated with even more expensive levels of theory.
In this talk, we present transfer learning (TL) as a general approach for adapting pre-trained GNN models to data-limited domains. Several applications of TL to accelerate model building in computational catalysis are discussed: 1) Starting from a subset of OC20 recalculated with the dispersion-inclusive BEEF-vdW functional as an example, we systematically study learning-rate scaling, optimal dataset size, and sampling approaches to balance data-collection cost against prediction accuracy. 2) By combining TL with an active learning framework, we show that DFT-validated data for expensive methods, including vdW and meta-GGA functionals, can be collected at orders-of-magnitude lower computational cost, and readily used to upgrade the OC20 dataset toward higher rungs of Jacob's ladder. 3) By properly designing the GNN architecture and loss function, the transfer-learned models can predict quantities beyond the adsorption energy. As an example, we demonstrate a twin neural network architecture that learns covariance information from BEEF-vdW ensemble data and predicts the relative stability of different adsorbate placement configurations with calibrated uncertainty quantification. We hope these TL-based strategies ease the construction of domain-specific models for computational catalysis.
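To make the TL and active-learning ideas in points 1 and 2 concrete, below is a minimal sketch of the freeze-and-fine-tune pattern combined with a pool-based active-learning round. All names here (`AdsorptionGNN`, `fine_tune`, `active_learning_round`, the `uncertainty_fn` argument) are hypothetical illustrations, not the actual OC20/OCP model classes or the exact pipeline presented in the talk; the pattern applies to any OC20-pretrained GNN with a pooled energy readout.

```python
import torch
import torch.nn as nn

class AdsorptionGNN(nn.Module):
    """Hypothetical wrapper: a pretrained message-passing backbone plus a
    freshly initialized energy readout head."""
    def __init__(self, backbone: nn.Module, hidden_dim: int = 256):
        super().__init__()
        self.backbone = backbone                     # OC20-pretrained layers
        self.energy_head = nn.Linear(hidden_dim, 1)  # re-initialized readout

    def forward(self, graph):
        node_feats = self.backbone(graph)            # (n_atoms, hidden_dim)
        return self.energy_head(node_feats).sum(dim=0)  # pooled total energy

def fine_tune(model, loader, epochs=30, lr=1e-4):
    """Transfer learning: freeze the pretrained backbone and train only the
    readout head on the small domain-specific (e.g. BEEF-vdW) dataset."""
    for p in model.backbone.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam(model.energy_head.parameters(), lr=lr)
    loss_fn = nn.L1Loss()                            # MAE on adsorption energy
    for _ in range(epochs):
        for graph, target in loader:
            opt.zero_grad()
            loss_fn(model(graph), target).backward()
            opt.step()
    return model

def active_learning_round(model, pool, labeled, dft_label, uncertainty_fn,
                          batch_size=100):
    """One TL + active-learning round: label the most uncertain candidates
    with the expensive DFT method, then fine-tune on the grown dataset."""
    ranked = sorted(pool, key=uncertainty_fn, reverse=True)
    picked, pool = ranked[:batch_size], ranked[batch_size:]
    labeled += [(g, dft_label(g)) for g in picked]   # expensive DFT calls
    return fine_tune(model, labeled), pool, labeled
```

In practice the backbone may also be partially unfrozen at a reduced learning rate; which layers to free, and how the learning rate should scale, are part of the systematic study in point 1.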
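As a rough illustration of point 3, the twin architecture can be sketched as a weight-shared encoder over a pair of configurations, with heads for the mean energy difference and its variance trained against BEEF-vdW ensemble statistics via a Gaussian negative log-likelihood. This is an assumed minimal form (the encoder, head dimensions, and the `GaussianNLLLoss` training target are illustrative choices), not the exact architecture presented in the talk.

```python
import torch
import torch.nn as nn

class TwinStabilityNet(nn.Module):
    """Sketch of a twin (Siamese) network: one weight-shared encoder embeds
    both adsorbate configurations; paired heads predict the mean energy
    difference and its variance."""
    def __init__(self, encoder: nn.Module, dim: int = 256):
        super().__init__()
        self.encoder = encoder                    # shared GNN branch
        self.delta_head = nn.Linear(2 * dim, 1)   # mean of E_a - E_b
        self.var_head = nn.Sequential(            # strictly positive variance
            nn.Linear(2 * dim, 1), nn.Softplus())

    def forward(self, graph_a, graph_b):
        pair = torch.cat([self.encoder(graph_a), self.encoder(graph_b)], -1)
        return self.delta_head(pair), self.var_head(pair)

# Training target: for each configuration pair, the BEEF-vdW ensemble of
# exchange-correlation perturbations yields a distribution of energy
# differences; fitting its mean with a heteroscedastic Gaussian NLL lets the
# predicted variance be calibrated against the ensemble spread.
nll = nn.GaussianNLLLoss()

def loss(model, graph_a, graph_b, ensemble_deltas):
    mean_pred, var_pred = model(graph_a, graph_b)
    target = ensemble_deltas.mean()               # ensemble mean of E_a - E_b
    return nll(mean_pred, target.expand_as(mean_pred), var_pred)
```

The sign of the predicted energy difference then ranks the two adsorbate placements, while the predicted variance provides a calibrated confidence in that ranking.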