Multi-Modality Representation Learning for Antibody-Antigen Interactions Prediction

While deep learning models play a crucial role in predicting antibody-antigen interactions (AAI), the scarcity of publicly available sequence-structure pairings constrains their generalization. Current AAI methods often focus on residue-level static details, overlooking fine-grained structural representations of antibodies and inter-antibody similarities. To tackle this challenge, we introduce a multi-modality representation approach that integrates 3D structural and 1D sequence data to unravel intricate intra-antibody hierarchical relationships. Harnessing these representations, we present MuLAAIP, an AAI prediction framework that uses graph attention networks to capture graph-level structural features and normalized adaptive graph convolution networks to model inter-antibody sequence associations. Furthermore, we have curated an AAI benchmark dataset comprising both structural and sequence information along with interaction labels. Extensive experiments on this benchmark demonstrate that MuLAAIP outperforms current state-of-the-art methods in predictive performance. The implementation code and dataset are publicly available at this https URL for reproducibility.
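To make the two-branch design concrete, below is a minimal PyTorch sketch of the general pattern the abstract describes: a graph-attention encoder over a residue-contact graph (the 3D structural branch) fused with a sequence-embedding branch, scoring an antibody-antigen pair. All class names, feature dimensions, and the pooling/fusion choices here are illustrative assumptions, not MuLAAIP's actual implementation; in particular, the normalized adaptive graph convolution over inter-antibody associations is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphAttention(nn.Module):
    """Single-head graph attention over a dense residue-contact map."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) residue features; adj: (N, N) binary contact map
        h = self.proj(x)
        N = h.size(0)
        adj = adj + torch.eye(N, device=adj.device)  # self-loops avoid empty rows
        hi = h.unsqueeze(1).expand(N, N, -1)         # h_i broadcast over j
        hj = h.unsqueeze(0).expand(N, N, -1)         # h_j broadcast over i
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))   # attend only along edges
        alpha = torch.softmax(e, dim=-1)             # (N, N) attention weights
        return alpha @ h                             # neighborhood aggregation

class TwoBranchAAIPredictor(nn.Module):
    """Hypothetical fusion of a structure (graph) branch and a sequence
    branch per molecule, with a linear head scoring the pair."""
    def __init__(self, node_dim: int = 21, seq_dim: int = 128, hid: int = 64):
        super().__init__()
        self.gat = SimpleGraphAttention(node_dim, hid)                     # 3D branch
        self.seq_mlp = nn.Sequential(nn.Linear(seq_dim, hid), nn.ReLU())  # 1D branch
        self.head = nn.Linear(4 * hid, 1)                                 # pair logit

    def encode(self, x, adj, seq_emb):
        g = self.gat(x, adj).mean(dim=0)   # mean-pool to a graph-level vector
        s = self.seq_mlp(seq_emb)          # sequence-level embedding
        return torch.cat([g, s], dim=-1)   # multi-modal fusion by concatenation

    def forward(self, ab: dict, ag: dict) -> torch.Tensor:
        za = self.encode(ab["x"], ab["adj"], ab["seq"])  # antibody embedding
        zg = self.encode(ag["x"], ag["adj"], ag["seq"])  # antigen embedding
        return self.head(torch.cat([za, zg], dim=-1))    # interaction logit

# Toy usage with random tensors (50-residue antibody, 80-residue antigen).
ab = {"x": torch.randn(50, 21), "adj": (torch.rand(50, 50) < 0.1).float(),
      "seq": torch.randn(128)}
ag = {"x": torch.randn(80, 21), "adj": (torch.rand(80, 80) < 0.1).float(),
      "seq": torch.randn(128)}
logit = TwoBranchAAIPredictor()(ab, ag)
print(torch.sigmoid(logit))  # predicted interaction probability
```

Concatenation is the simplest fusion choice; attention-based or gated fusion between the structural and sequence embeddings would be a natural alternative under the same two-branch layout.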
@article{guo2025_2503.17666,
  title={Multi-Modality Representation Learning for Antibody-Antigen Interactions Prediction},
  author={Peijin Guo and Minghui Li and Hewen Pan and Ruixiang Huang and Lulu Xue and Shengqing Hu and Zikang Guo and Wei Wan and Shengshan Hu},
  journal={arXiv preprint arXiv:2503.17666},
  year={2025}
}