
Revela: Dense Retriever Learning via Language Modeling

Main: 10 pages · 8 figures · 14 tables · Bibliography: 4 pages · Appendix: 6 pages
Abstract

Dense retrievers play a vital role in accessing external and specialized knowledge to augment language models (LMs). Training dense retrievers typically requires annotated query-document pairs, which are costly and hard to obtain in specialized domains such as code, motivating growing interest in self-supervised retriever learning. Since LMs are trained to capture token-level dependencies through a self-supervised learning objective (i.e., next-token prediction), we can analogously cast retrieval as learning dependencies among chunks of tokens. This analogy naturally leads to the question: How can we adapt self-supervised learning objectives in the spirit of language modeling to train retrievers? To answer this question, we introduce Revela, a unified and scalable training framework for self-supervised retriever learning via language modeling. Revela models semantic dependencies among documents by conditioning next-token prediction on both local and cross-document context through an in-batch attention mechanism. This attention is weighted by retriever-computed similarity scores, enabling the retriever to be optimized as part of language modeling. We evaluate Revela on both general-domain (BEIR) and domain-specific (CoIR) benchmarks across various retriever backbones. At a comparable parameter scale, Revela outperforms the previous best method with absolute improvements of 5.2% (18.3% relative) and 5.6% (14.4% relative) in NDCG@10, respectively, underscoring its effectiveness. Performance increases with model size, highlighting both the scalability of our approach and its promise for self-supervised retriever learning.
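To make the training signal concrete, below is a minimal PyTorch sketch of the core idea the abstract describes: next-token prediction conditioned on a similarity-weighted mixture of in-batch documents, so the language-modeling loss backpropagates into the retriever. The names (retriever, lm.encode, lm.head) and the simple weighted-mixture form of "in-batch attention" are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def revela_style_loss(retriever, lm, batch_input_ids):
    # Hypothetical sketch: retriever(batch_input_ids) -> (B, d) embeddings,
    # lm.encode -> (B, T, h) hidden states, lm.head -> (B, T, V) logits.
    doc_emb = F.normalize(retriever(batch_input_ids), dim=-1)   # (B, d)

    # Pairwise similarity scores; mask the diagonal so a document
    # does not attend to itself.
    sim = doc_emb @ doc_emb.T                                   # (B, B)
    sim.fill_diagonal_(float("-inf"))
    weights = sim.softmax(dim=-1)                               # (B, B)

    # Cross-document context: a similarity-weighted mixture of the other
    # documents' hidden states (one simple stand-in for the paper's
    # in-batch attention mechanism).
    hidden = lm.encode(batch_input_ids)                         # (B, T, h)
    cross_ctx = torch.einsum("bn,nth->bth", weights, hidden)

    # Condition next-token prediction on local + cross-document context;
    # the gradient of this LM loss flows through `weights` into the retriever.
    logits = lm.head(hidden + cross_ctx)                        # (B, T, V)
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        batch_input_ids[:, 1:].reshape(-1),
    )

The key design point is that the retriever receives no direct supervision: its similarity scores only matter insofar as they route useful cross-document context into next-token prediction, which is what lets annotated query-document pairs be dispensed with.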

@article{cai2025_2506.16552,
  title={Revela: Dense Retriever Learning via Language Modeling},
  author={Fengyu Cai and Tong Chen and Xinran Zhao and Sihao Chen and Hongming Zhang and Sherry Tongshuang Wu and Iryna Gurevych and Heinz Koeppl},
  journal={arXiv preprint arXiv:2506.16552},
  year={2025}
}