Iterative Self-Incentivization Empowers Large Language Models as Agentic Searchers

26 May 2025
Zhengliang Shi
Lingyong Yan
Dawei Yin
Suzan Verberne
Maarten de Rijke
Zhaochun Ren
Abstract

Large language models (LLMs) have been widely integrated into information retrieval to advance traditional techniques. However, effectively enabling LLMs to seek accurate knowledge in complex tasks remains a challenge, due to the complexity of multi-hop queries and the presence of irrelevant retrieved content. To address these limitations, we propose EXSEARCH, an agentic search framework in which the LLM learns to retrieve useful information as its reasoning unfolds, through a self-incentivized process. At each step, the LLM decides what to retrieve (thinking), triggers an external retriever (search), and extracts fine-grained evidence (recording) to support next-step reasoning. To equip the LLM with this capability, EXSEARCH adopts a Generalized Expectation-Maximization algorithm. In the E-step, the LLM generates multiple search trajectories and assigns an importance weight to each; in the M-step, the LLM is trained on these trajectories with a re-weighted loss function. This creates a self-incentivized loop in which the LLM iteratively learns from its own generated data, progressively improving itself at search. We further analyze this training process theoretically, establishing convergence guarantees. Extensive experiments on four knowledge-intensive benchmarks show that EXSEARCH substantially outperforms baselines, e.g., a +7.8% improvement in exact match score. Motivated by these promising results, we introduce EXSEARCH-Zoo, an extension of our method to broader scenarios, to facilitate future work.
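The abstract describes two mechanisms: a think-search-record loop at inference time, and a Generalized EM training loop over self-generated, importance-weighted trajectories. The Python sketch below illustrates how the two could fit together; every interface here (generate_thought, retrieve, extract_evidence, answer_likelihood, finetune) and the likelihood-based weighting are illustrative assumptions made for exposition, not the authors' actual implementation.

def run_trajectory(llm, retriever, question, max_steps=5):
    """Roll out one search trajectory: think -> search -> record."""
    evidence = []
    for _ in range(max_steps):
        thought = llm.generate_thought(question, evidence)  # thinking: decide what to retrieve next
        if thought.is_final:
            break
        docs = retriever.retrieve(thought.query)            # search: call the external retriever
        evidence.append(llm.extract_evidence(thought.query, docs))  # recording: keep fine-grained evidence
    return evidence

def gem_round(llm, retriever, train_set, num_samples=8):
    """One Generalized EM round over self-generated trajectories."""
    weighted_data = []
    for question, gold_answer in train_set:
        # E-step: sample several trajectories and assign each an importance
        # weight, e.g. proportional to the likelihood of the gold answer
        # given the collected evidence (one plausible choice; the paper
        # defines its own weighting).
        trajectories = [run_trajectory(llm, retriever, question)
                        for _ in range(num_samples)]
        weights = [llm.answer_likelihood(gold_answer, question, ev)
                   for ev in trajectories]
        total = sum(weights) or 1.0
        weighted_data += [(question, ev, w / total)
                          for ev, w in zip(trajectories, weights)]
    # M-step: fine-tune the LLM on its own trajectories with the
    # re-weighted loss, closing the self-incentivized loop.
    llm.finetune(weighted_data)

Repeating gem_round means each round's improved model generates better trajectories for the next round, which is the self-incentivized loop the abstract refers to.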

View on arXiv
@article{shi2025_2505.20128,
  title={Iterative Self-Incentivization Empowers Large Language Models as Agentic Searchers},
  author={Zhengliang Shi and Lingyong Yan and Dawei Yin and Suzan Verberne and Maarten de Rijke and Zhaochun Ren},
  journal={arXiv preprint arXiv:2505.20128},
  year={2025}
}