LLM-VPRF: Large Language Model Based Vector Pseudo Relevance Feedback

2 April 2025
Hang Li
Shengyao Zhuang
Bevan Koopman
Guido Zuccon
Abstract

Vector Pseudo Relevance Feedback (VPRF) has shown promising results in improving BERT-based dense retrieval systems through iterative refinement of query representations. This paper investigates the generalizability of VPRF to Large Language Model (LLM) based dense retrievers. We introduce LLM-VPRF and evaluate its effectiveness across multiple benchmark datasets, analyzing how different LLMs impact the feedback mechanism. Our results demonstrate that VPRF's benefits successfully extend to LLM architectures, establishing it as a robust technique for enhancing dense retrieval performance regardless of the underlying model. This work bridges the gap between VPRF's original use with traditional BERT-based dense retrievers and modern LLM-based retrievers, while providing insights into future directions for pseudo relevance feedback.
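The core mechanism the abstract describes, refining a query embedding using the top-ranked documents as pseudo-relevant feedback, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dot-product scorer, the Rocchio-style interpolation, and the `alpha`/`beta` weights are assumptions for the sketch.

```python
import numpy as np

def vector_prf(query_vec, doc_vecs, k=3, alpha=1.0, beta=0.5):
    """One round of vector pseudo relevance feedback (Rocchio-style sketch).

    Scores all documents against the query by dot product, takes the
    top-k as pseudo-relevant, and refines the query embedding by
    interpolating it with the centroid of those k document embeddings.
    """
    scores = doc_vecs @ query_vec                # similarity of each doc to the query
    top_k = np.argsort(scores)[::-1][:k]         # indices of the k highest-scoring docs
    feedback = doc_vecs[top_k].mean(axis=0)      # centroid of pseudo-relevant docs
    return alpha * query_vec + beta * feedback   # refined query vector
```

The refined vector is then used for a second retrieval pass; in LLM-VPRF the embeddings would come from an LLM-based dense retriever rather than a BERT encoder.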

@article{li2025_2504.01448,
  title={LLM-VPRF: Large Language Model Based Vector Pseudo Relevance Feedback},
  author={Hang Li and Shengyao Zhuang and Bevan Koopman and Guido Zuccon},
  journal={arXiv preprint arXiv:2504.01448},
  year={2025}
}