Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs

Abstract

Graph neural networks that model 3D data, such as point clouds or atoms, are typically desired to be SO(3) equivariant, i.e., equivariant to 3D rotations. Unfortunately, equivariant convolutions, which are a fundamental operation for equivariant networks, increase significantly in computational complexity as higher-order tensors are used. In this paper, we address this issue by reducing the SO(3) convolutions or tensor products to mathematically equivalent convolutions in SO(2). This is accomplished by aligning the node embeddings' primary axis with the edge vectors, which sparsifies the tensor product and reduces the computational complexity from O(L^6) to O(L^3), where L is the degree of the representation. We demonstrate the potential implications of this improvement by proposing the Equivariant Spherical Channel Network (eSCN), a graph neural network utilizing our novel approach to equivariant convolutions, which achieves state-of-the-art results on the large-scale OC-20 and OC-22 datasets.
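To make the complexity reduction concrete, the sketch below (PyTorch, not the authors' implementation) illustrates the block structure a convolution takes once the spherical-harmonic coefficients have been rotated so the edge vector lies along the canonical axis: the operation decouples across the order m, with each (+m, -m) pair of coefficients mixed across degrees by a 2x2 rotation-like block, so it stays equivariant to rotations about the aligned axis. The function name, dictionary layout, and weight shapes here are illustrative assumptions, not the paper's API.

```python
import torch

def so2_convolution(x, weights, L):
    """Sketch of an SO(2)-equivariant convolution on aligned coefficients.

    x:       dict mapping (l, m) -> 0-dim tensor, for 0 <= l <= L, -l <= m <= l.
    weights: {0: tensor [L+1, L+1]} plus, for each m > 0, a pair
             (w1, w2) of tensors with shape [L+1-m, L+1-m].
    """
    y = {}
    # m = 0: a plain linear mix across degrees l.
    w0 = weights[0]
    for l in range(L + 1):
        y[(l, 0)] = sum(w0[l, lp] * x[(lp, 0)] for lp in range(L + 1))
    # m > 0: each (+m, -m) pair mixes across degrees via a 2x2 block of the
    # form [[w1, -w2], [w2, w1]], commuting with rotations about the axis.
    for m in range(1, L + 1):
        w1, w2 = weights[m]
        for i, l in enumerate(range(m, L + 1)):
            y[(l, m)] = sum(
                w1[i, j] * x[(lp, m)] - w2[i, j] * x[(lp, -m)]
                for j, lp in enumerate(range(m, L + 1))
            )
            y[(l, -m)] = sum(
                w2[i, j] * x[(lp, m)] + w1[i, j] * x[(lp, -m)]
                for j, lp in enumerate(range(m, L + 1))
            )
    return y

# Toy usage with random coefficients and weights for L = 2.
L = 2
x = {(l, m): torch.randn(()) for l in range(L + 1) for m in range(-l, l + 1)}
weights = {0: torch.randn(L + 1, L + 1)}
for m in range(1, L + 1):
    weights[m] = (torch.randn(L + 1 - m, L + 1 - m),
                  torch.randn(L + 1 - m, L + 1 - m))
y = so2_convolution(x, weights, L)
```

Counting parameters makes the claimed scaling visible: each order m contributes blocks of size O(L^2) and there are O(L) orders, giving O(L^3) in total, versus the O(L^6) cost of a full SO(3) tensor product with Clebsch-Gordan coefficients.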
