
Examining Generalizability of AI Models for Catalysis

Published

Author(s)

Kamal Choudhary, Shih Han Wang, Hongliang Xin, Luke Achenie

Abstract

In this work, we investigate the generalizability of problem-specific machine-learning models for catalysis across different datasets and adsorbates, and examine the potential of unified models as pre-screening tools for density functional theory (DFT) calculations. We develop graph neural network models for 12 catalysis datasets and cross-evaluate their performance. The unified models considered include ALIGNN-FF, MATGL, CHGNet, and MACE. Pearson correlation coefficient analysis indicates that generalizability improves when similar adsorbates are used for training and testing, or when a larger database is employed for training. The results demonstrate that, while the accuracy of the unified models has room for improvement, their excellent performance in predicting trends in adsorption energies makes them a valuable pre-screening tool for selecting candidates prior to resource-intensive DFT calculations in catalyst design, thereby reducing computational expense. The tools used in this work will be made available at https://github.com/usnistgov/catalysismat.
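
The sketch below is not the authors' released code; it is a minimal illustration, under assumed placeholder data, of the kind of cross-evaluation the abstract describes: comparing surrogate-model adsorption energies against DFT references with the Pearson correlation coefficient and using the predicted ranking as a pre-screen before DFT. The energy arrays and the top-k cutoff are hypothetical.

    import numpy as np
    from scipy.stats import pearsonr

    # Hypothetical adsorption energies (eV) for a set of candidate catalysts:
    # DFT reference values and the corresponding surrogate-model predictions.
    dft_energies = np.array([-0.42, -0.87, -1.10, -0.35, -0.95])
    model_energies = np.array([-0.50, -0.80, -1.25, -0.30, -1.05])

    # Pearson r measures how well the model reproduces the *trend* of the
    # DFT values even when absolute errors remain; a high r supports using
    # the model as a pre-screening filter before running full DFT.
    r, p_value = pearsonr(dft_energies, model_energies)
    print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")

    # Rank candidates by predicted adsorption energy and keep the most
    # strongly binding ones for follow-up DFT calculations (top_k is arbitrary).
    top_k = 3
    selected = np.argsort(model_energies)[:top_k]
    print("Candidates selected for DFT:", selected.tolist())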
Journal

APL Materials

Citation

Choudhary, K., Wang, S., Xin, H. and Achenie, L. (2025), Examining Generalizability of AI Models for Catalysis, APL Materials, [online], https://doi.org/10.1016/j.jcat.2025.116171, https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=956914 (Accessed August 8, 2025)

Issues

If you have any questions about this publication or are having problems accessing it, please contact [email protected].

Created June 7, 2025, Updated July 31, 2025