
RAG-RL: Advancing Retrieval-Augmented Generation via RL and Curriculum Learning

17 March 2025
Jerry Huang
Siddarth Madala
Risham Sidhu
Cheng Niu
Julia Hockenmaier
Tong Zhang
Topics: RALM, LRM
Abstract

Recent research highlights the challenges retrieval models face in retrieving useful contexts and the limitations of generation models in effectively utilizing those contexts in retrieval-augmented generation (RAG) settings. To address these challenges, we introduce RAG-RL, the first reasoning language model (RLM) specifically trained for RAG. RAG-RL demonstrates that stronger answer generation models can identify relevant contexts within larger sets of retrieved information -- thereby alleviating the burden on retrievers -- while also being able to utilize those contexts more effectively. Moreover, we show that curriculum design in the reinforcement learning (RL) post-training process is a powerful approach to enhancing model performance. We benchmark our method on two open-domain question-answering datasets and achieve state-of-the-art results, surpassing previous SOTA generative reader models. In addition, we offer empirical insights into various curriculum learning strategies, providing a deeper understanding of their impact on model performance.
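The abstract describes curriculum design for RL post-training but does not specify how difficulty is graded. A minimal sketch of the general idea, assuming (hypothetically -- this detail is not in the abstract) that difficulty is proxied by the number of retrieved passages the model must sift through, with training staged from few to many:

```python
def build_curriculum(examples, num_stages=3):
    """Split QA training examples into easy-to-hard stages, using the
    number of retrieved contexts as a (hypothetical) difficulty proxy."""
    # Sort so examples with fewer passages (fewer distractors) come first.
    ranked = sorted(examples, key=lambda ex: len(ex["contexts"]))
    stage_size = -(-len(ranked) // num_stages)  # ceiling division
    return [ranked[i:i + stage_size] for i in range(0, len(ranked), stage_size)]

# Toy examples: each pairs a question with its retrieved contexts.
examples = [
    {"question": "q1", "contexts": ["c"] * 8},
    {"question": "q2", "contexts": ["c"] * 2},
    {"question": "q3", "contexts": ["c"] * 5},
]
stages = build_curriculum(examples, num_stages=3)
# Each RL post-training stage then samples only from its bucket, so early
# policy updates see few distractor passages and later ones see many.
```

This illustrates only the scheduling component; the RAG-RL paper itself should be consulted for the actual curriculum strategies and reward design it evaluates.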

View on arXiv
@article{huang2025_2503.12759,
  title={RAG-RL: Advancing Retrieval-Augmented Generation via RL and Curriculum Learning},
  author={Jerry Huang and Siddarth Madala and Risham Sidhu and Cheng Niu and Julia Hockenmaier and Tong Zhang},
  journal={arXiv preprint arXiv:2503.12759},
  year={2025}
}