
AnnoDPO: Protein Functional Annotation Learning with Direct Preference Optimization

Main: 4 pages · Appendix: 8 pages · Bibliography: 3 pages · 5 figures · 11 tables
Abstract

Deciphering protein function remains a fundamental challenge in protein representation learning. The task presents significant difficulties for protein language models (PLMs) due to the sheer volume of functional annotation categories and the highly imbalanced distribution of annotated instances across biological ontologies. Inspired by the remarkable success of reinforcement learning from human feedback (RLHF) in large language model (LLM) alignment, we propose AnnoDPO, a novel multi-modal framework for protein function prediction that leverages Direct Preference Optimization (DPO) to enhance annotation learning. Our methodology addresses the dual challenges of annotation scarcity and category imbalance through preference-aligned training objectives, establishing a new paradigm for biological knowledge integration in protein representation learning.
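
For orientation, the Direct Preference Optimization objective the abstract builds on is the standard DPO loss of Rafailov et al. (2023); a minimal PyTorch sketch follows. The function name and tensor arguments are illustrative, not AnnoDPO's actual code, and in the annotation setting "chosen"/"rejected" can be read as preferred vs. dispreferred functional annotations for a given protein.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Log-ratios of the trained policy vs. the frozen reference model
    # for the preferred ("chosen") and dispreferred ("rejected") outputs.
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    # DPO objective: maximize the margin between the two log-ratios,
    # scaled by the inverse-temperature beta.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()

# Toy usage with a batch of four per-example log-probabilities:
loss = dpo_loss(torch.randn(4), torch.randn(4), torch.randn(4), torch.randn(4))

The key design point of DPO, which motivates its use here over full RLHF, is that it optimizes the preference objective directly with a supervised loss, avoiding an explicit reward model and reinforcement-learning loop.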

@article{jiang2025_2506.07035,
  title={AnnoDPO: Protein Functional Annotation Learning with Direct Preference Optimization},
  author={Zixuan Jiang and Renjing Xu},
  journal={arXiv preprint arXiv:2506.07035},
  year={2025}
}