
Reinforced Informativeness Optimization for Long-Form Retrieval-Augmented Generation

Abstract

Long-form question answering (LFQA) presents unique challenges for large language models, requiring the synthesis of coherent, paragraph-length answers. While retrieval-augmented generation (RAG) systems have emerged as a promising solution, existing approaches face key limitations: the scarcity of high-quality training data for long-form generation, the compounding risk of hallucination in extended outputs, and the absence of reliable evaluation metrics for factual completeness. In this paper, we propose RioRAG, a novel reinforcement learning (RL) framework that advances long-form RAG through reinforced informativeness optimization. Our approach introduces two fundamental innovations to address these core challenges. First, we develop a reinforced informativeness optimization training paradigm that directly optimizes informativeness and effectively addresses the slow-thinking deficit in conventional RAG systems, bypassing the need for expensive supervised data. Second, we propose a nugget-centric hierarchical reward modeling approach that enables precise assessment of long-form answers through a three-stage process: extracting nuggets from each source webpage, constructing a nugget claim checklist, and computing rewards based on factual alignment. Extensive experiments on two LFQA benchmarks, LongFact and RAGChecker, demonstrate the effectiveness of the proposed method. Our code is available at this https URL.
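To make the three-stage reward concrete, here is a minimal Python sketch of the pipeline the abstract outlines. It is illustrative only: sentence splitting stands in for the paper's nugget extractor, substring matching stands in for its factual-alignment judge (a real system would use an LLM or entailment model), and all function names (`extract_nuggets`, `build_checklist`, `informativeness_reward`), data structures, and URLs are assumptions, not taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class Nugget:
    claim: str       # an atomic factual claim extracted from a source webpage
    source_url: str  # webpage the claim came from


def extract_nuggets(webpages: dict[str, str]) -> list[Nugget]:
    """Stage 1: extract atomic claims ("nuggets") from each retrieved webpage.

    Stand-in: naive sentence splitting; the paper presumably uses an
    LLM-based extractor here.
    """
    nuggets = []
    for url, text in webpages.items():
        for sentence in filter(None, (s.strip() for s in text.split("."))):
            nuggets.append(Nugget(claim=sentence, source_url=url))
    return nuggets


def build_checklist(nuggets: list[Nugget]) -> list[Nugget]:
    """Stage 2: deduplicate nuggets into a claim checklist."""
    seen: set[str] = set()
    checklist = []
    for n in nuggets:
        key = n.claim.lower()
        if key not in seen:
            seen.add(key)
            checklist.append(n)
    return checklist


def informativeness_reward(answer: str, checklist: list[Nugget]) -> float:
    """Stage 3: reward = fraction of checklist claims covered by the answer.

    Stand-in alignment check: case-insensitive substring match; a real
    system would judge factual entailment instead.
    """
    if not checklist:
        return 0.0
    covered = sum(1 for n in checklist if n.claim.lower() in answer.lower())
    return covered / len(checklist)


# Example usage (all URLs and text are made up)
pages = {
    "https://example.com/a": "The Nile is about 6650 km long. It flows north.",
    "https://example.com/b": "It flows north. It empties into the Mediterranean.",
}
checklist = build_checklist(extract_nuggets(pages))
answer = "The Nile is about 6650 km long and it flows north."
print(f"reward = {informativeness_reward(answer, checklist):.2f}")
```

Such a checklist-based scalar reward is the kind of signal an RL loop could optimize directly, which matches the abstract's claim of bypassing supervised long-form training data.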

@article{wang2025_2505.20825,
  title={Reinforced Informativeness Optimization for Long-Form Retrieval-Augmented Generation},
  author={Yuhao Wang and Ruiyang Ren and Yucheng Wang and Wayne Xin Zhao and Jing Liu and Hua Wu and Haifeng Wang},
  journal={arXiv preprint arXiv:2505.20825},
  year={2025}
}