ResearchTrend.AI
AcuRank: Uncertainty-Aware Adaptive Computation for Listwise Reranking

24 May 2025
Soyoung Yoon
Gyuwan Kim
Gyu-Hwung Cho
Seung-won Hwang
Main: 9 pages · Appendix: 10 pages · Bibliography: 3 pages · 4 figures · 13 tables
Abstract

Listwise reranking with large language models (LLMs) enhances top-ranked results in retrieval-based applications. Due to limited context windows and the high inference cost of long contexts, reranking is typically performed over small fixed-size subsets, with the final ranking aggregated from these partial results. This fixed computation disregards query difficulty and document distribution, leading to inefficiencies. We propose AcuRank, an adaptive reranking framework that dynamically adjusts both the amount and the target of computation based on uncertainty estimates over document relevance. Using a Bayesian TrueSkill model, we iteratively refine relevance estimates until reaching sufficient confidence levels, and our explicit modeling of ranking uncertainty enables principled control over reranking behavior and avoids unnecessary updates to confident predictions. Results on the TREC-DL and BEIR benchmarks show that our method consistently achieves a superior accuracy-efficiency trade-off and scales better with compute than fixed-computation baselines. These results highlight the effectiveness and generalizability of our method across diverse retrieval tasks and LLM-based reranking models.
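The adaptive loop the abstract describes can be sketched in a few lines. Note this is not the paper's AcuRank implementation: `adaptive_rerank`, the Gaussian belief update, and `score_fn` are illustrative stand-ins for the Bayesian TrueSkill updates and the LLM listwise reranker, kept minimal to show the control flow (target the most uncertain documents, rerank a small batch, tighten beliefs, stop at sufficient confidence).

```python
def adaptive_rerank(docs, score_fn, sigma_stop=1.0, batch_size=4, max_iters=20):
    """Sketch of uncertainty-aware adaptive listwise reranking.

    docs: list of document ids.
    score_fn: stand-in for an LLM listwise reranker; takes a small batch
        and returns it ordered most-relevant first.
    Each doc keeps a Gaussian belief [mu, sigma] over relevance; we
    repeatedly rerank the most uncertain docs and shrink their sigma,
    stopping once every doc's sigma falls below sigma_stop.
    """
    belief = {d: [25.0, 8.0] for d in docs}  # TrueSkill-style prior
    for _ in range(max_iters):
        uncertain = [d for d in docs if belief[d][1] > sigma_stop]
        if not uncertain:
            break  # all relevance estimates are confident enough
        # target computation at the most uncertain documents
        batch = sorted(uncertain, key=lambda d: -belief[d][1])[:batch_size]
        ranked = score_fn(batch)  # partial listwise ranking of the batch
        # crude Gaussian-style update (a stand-in for TrueSkill): pull mu
        # toward a rank-based target and shrink sigma
        for rank, d in enumerate(ranked):
            mu, sigma = belief[d]
            target = 50.0 - 10.0 * rank
            belief[d][0] = mu + 0.5 * (target - mu)
            belief[d][1] = sigma * 0.6
    # final ranking: sort by posterior mean relevance
    return sorted(docs, key=lambda d: -belief[d][0])
```

Documents whose beliefs are already confident are skipped entirely, so compute concentrates where the ranking is still uncertain; a fixed-computation baseline would rerank every subset the same number of times regardless.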

@article{yoon2025_2505.18512,
  title={AcuRank: Uncertainty-Aware Adaptive Computation for Listwise Reranking},
  author={Soyoung Yoon and Gyuwan Kim and Gyu-Hwung Cho and Seung-won Hwang},
  journal={arXiv preprint arXiv:2505.18512},
  year={2025}
}