
Sequence-level Large Language Model Training with Contrastive Preference Optimization

Abstract

Next-token prediction is the dominant self-supervised training objective for large language models and has achieved promising results on a variety of downstream tasks. However, upon closer investigation of this objective, we find that it lacks an understanding of sequence-level signals, leading to a mismatch between the training and inference processes. To bridge this gap, we introduce a contrastive preference optimization (CPO) procedure that can inject sequence-level information into the language model at any training stage without expensive human-labeled data. Our experiments show that the proposed objective surpasses next-token prediction in terms of win rate on instruction-following and text generation tasks.
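
The abstract does not spell out the CPO objective itself, but the core idea of a contrastive, sequence-level preference loss can be sketched as follows. This is a minimal illustration in the style of DPO/CPO-family objectives, assuming pairs of preferred and dispreferred continuations and a hypothetical scaling factor `beta`; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def sequence_logprob(logits, labels, pad_id=-100):
    # Sum per-token log-probabilities of the label sequence,
    # ignoring padded positions (pad_id follows the common -100 convention).
    logp = F.log_softmax(logits, dim=-1)          # (batch, seq, vocab)
    mask = (labels != pad_id).float()             # (batch, seq)
    safe_labels = labels.clamp(min=0)             # avoid gather on -100
    token_logp = logp.gather(-1, safe_labels.unsqueeze(-1)).squeeze(-1)
    return (token_logp * mask).sum(dim=-1)        # (batch,)

def contrastive_preference_loss(chosen_logits, chosen_labels,
                                rejected_logits, rejected_labels,
                                beta=0.1):
    # Push the model to assign higher sequence-level likelihood to the
    # preferred continuation than to the dispreferred one.
    logp_chosen = sequence_logprob(chosen_logits, chosen_labels)
    logp_rejected = sequence_logprob(rejected_logits, rejected_labels)
    return -F.logsigmoid(beta * (logp_chosen - logp_rejected)).mean()
```

Because the loss compares whole sequences rather than individual next tokens, it can be applied on top of standard pre-training or fine-tuning, which is consistent with the abstract's claim that sequence-level information can be injected at any training stage.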

@article{feng2025_2502.16433,
  title={Sequence-level Large Language Model Training with Contrastive Preference Optimization},
  author={Zhili Feng and Dhananjay Ram and Cole Hawkins and Aditya Rawal and Jinman Zhao and Sheng Zha},
  journal={arXiv preprint arXiv:2502.16433},
  year={2025}
}