
Equivariant Evidential Deep Learning for Interatomic Potentials

Zhongyao Wang
Taoyong Cui
Jiawen Zou
Shufei Zhang
Bo Yan
Wanli Ouyang
Weimin Tan
Mao Su
Main: 8 pages · 7 figures · 8 tables · Bibliography: 2 pages · Appendix: 7 pages
Abstract

Uncertainty quantification (UQ) is critical for assessing the reliability of machine learning interatomic potentials (MLIPs) in molecular dynamics (MD) simulations, identifying extrapolation regimes and enabling uncertainty-aware workflows such as active learning for training dataset construction. Existing UQ approaches for MLIPs are often limited by high computational cost or suboptimal performance. Evidential deep learning (EDL) provides a theoretically grounded single-model alternative that determines both aleatoric and epistemic uncertainty in a single forward pass. However, extending evidential formulations from scalar targets to vector-valued quantities such as atomic forces introduces substantial challenges, particularly in maintaining statistical self-consistency under rotational transformations. To address this, we propose Equivariant Evidential Deep Learning for Interatomic Potentials (e²IP), a backbone-agnostic framework that models atomic forces and their uncertainty jointly by representing uncertainty as a full 3×3 symmetric positive definite covariance tensor that transforms equivariantly under rotations. Experiments on diverse molecular benchmarks show that e²IP provides a stronger accuracy-efficiency-reliability balance than the non-equivariant evidential baseline and the widely used ensemble method. It also achieves better data efficiency through the fully equivariant architecture while retaining single-model inference efficiency.
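The key property the abstract describes is that the per-atom force uncertainty is a 3×3 symmetric positive definite (SPD) covariance that transforms equivariantly under rotation, i.e. Σ' = R Σ Rᵀ. Below is a minimal numerical sketch of that property, assuming a common Cholesky-based SPD parameterization (`spd_from_cholesky` and the softplus diagonal are illustrative choices, not the paper's stated implementation):

```python
import numpy as np

def spd_from_cholesky(l_params):
    """Build a 3x3 SPD covariance from 6 unconstrained parameters via a
    lower-triangular Cholesky factor with a softplus-positive diagonal.
    This is one standard construction; the paper's exact parameterization
    may differ."""
    L = np.zeros((3, 3))
    L[np.tril_indices(3)] = l_params
    d = np.arange(3)
    L[d, d] = np.log1p(np.exp(L[d, d]))  # softplus keeps the diagonal > 0
    return L @ L.T

def random_rotation(seed=0):
    """Sample a proper rotation (det = +1) via QR of a Gaussian matrix."""
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1.0
    return q

sigma = spd_from_cholesky(np.array([0.5, -0.2, 0.8, 0.1, 0.3, 1.2]))
R = random_rotation()
sigma_rot = R @ sigma @ R.T  # equivariant transformation of the covariance

# SPD is preserved, and rotation-invariant statistics (eigenvalues, hence
# total variance) are unchanged while the principal axes rotate with R.
assert np.all(np.linalg.eigvalsh(sigma) > 0)
assert np.allclose(np.linalg.eigvalsh(sigma), np.linalg.eigvalsh(sigma_rot))
assert np.isclose(np.trace(sigma), np.trace(sigma_rot))
```

A scalar (isotropic) uncertainty would trivially satisfy rotation invariance but cannot capture directional uncertainty in the force vector; the full SPD tensor is what makes the evidential formulation self-consistent under rotations.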
