Understanding the Capabilities of Molecular Graph Neural Networks in Materials Science Through Multimodal Learning and Physical Context Encoding

Molecular graph neural networks (GNNs) often focus exclusively on XYZ-based geometric representations and thus overlook valuable chemical context available in public databases like PubChem. This work introduces a multimodal framework that integrates textual descriptors (IUPAC names, molecular formulas, physicochemical properties, and synonyms) alongside molecular graphs. A gated fusion mechanism balances geometric and textual features, allowing models to exploit complementary information. Experiments on benchmark datasets indicate that adding textual data yields notable improvements for certain electronic properties, while gains remain limited for others. Furthermore, the GNN architectures display similar performance patterns, improving or degrading on the same targets, which suggests they learn comparable representations rather than distinctly different physical insights.
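The abstract does not spell out the gated fusion in detail; the sketch below shows one plausible realization, assuming a PyTorch setup in which a GNN readout vector and a text-encoder embedding are projected into a shared space and mixed by a learned sigmoid gate. The module name `GatedFusion`, the dimensions, and the convex-combination form are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a gated fusion layer combining a graph embedding with a
# text embedding. Names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, graph_dim: int, text_dim: int, hidden_dim: int):
        super().__init__()
        # Project both modalities into a shared hidden space.
        self.graph_proj = nn.Linear(graph_dim, hidden_dim)
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # The gate decides, per feature, how much of each modality to admit.
        self.gate = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.Sigmoid(),
        )

    def forward(self, h_graph: torch.Tensor, h_text: torch.Tensor) -> torch.Tensor:
        g = self.graph_proj(h_graph)   # (batch, hidden_dim)
        t = self.text_proj(h_text)     # (batch, hidden_dim)
        z = self.gate(torch.cat([g, t], dim=-1))
        # Convex combination: z -> 1 favors geometry, z -> 0 favors text.
        return z * g + (1.0 - z) * t

# Example: fuse a 128-d GNN readout with a 768-d sentence embedding.
fusion = GatedFusion(graph_dim=128, text_dim=768, hidden_dim=256)
fused = fusion(torch.randn(4, 128), torch.randn(4, 768))
print(fused.shape)  # torch.Size([4, 256])
```

A per-feature sigmoid gate of this kind lets the model fall back to purely geometric features for targets where the textual descriptors add little, which is consistent with the mixed gains reported across properties.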
@article{polat2025_2505.12137,
  title   = {Understanding the Capabilities of Molecular Graph Neural Networks in Materials Science Through Multimodal Learning and Physical Context Encoding},
  author  = {Can Polat and Hasan Kurban and Erchin Serpedin and Mustafa Kurban},
  journal = {arXiv preprint arXiv:2505.12137},
  year    = {2025}
}