ELITE: Embedding-Less retrieval with Iterative Text Exploration

Abstract

Large Language Models (LLMs) have achieved impressive progress in natural language processing, but their limited ability to retain long-term context constrains performance on document-level or multi-turn tasks. Retrieval-Augmented Generation (RAG) mitigates this by retrieving relevant information from an external corpus. However, existing RAG systems often rely on embedding-based retrieval trained on corpus-level semantic similarity, which can lead to retrieving content that is semantically similar in form but misaligned with the question's true intent. Furthermore, recent RAG variants construct graph- or hierarchy-based structures to improve retrieval accuracy, resulting in significant computation and storage overhead. In this paper, we propose an embedding-free retrieval framework. Our method leverages the logical inference ability of LLMs for retrieval through iterative search-space refinement guided by our novel importance measure, and extends the retrieved results with logically related information without explicit graph construction. Experiments on long-context QA benchmarks, including NovelQA and Marathon, show that our approach outperforms strong baselines while reducing storage and runtime by over an order of magnitude.
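
To make the iterative, embedding-free retrieval loop concrete, the following is a minimal sketch, not the authors' implementation: the function names (score_chunks, iterative_retrieve) and parameters (rounds, keep_ratio, top_k) are illustrative assumptions, and the keyword-overlap scorer merely stands in for the paper's LLM-guided importance measure; the logical expansion step is omitted.

# Minimal sketch of embedding-free iterative retrieval (illustrative only).
# score_chunks stands in for an LLM call that rates each chunk's relevance
# to the question; here it is a keyword-overlap stub so the example runs.

def score_chunks(question, chunks):
    """Hypothetical importance measure: overlap of question terms with chunk terms."""
    q_terms = set(question.lower().split())
    return [len(q_terms & set(c.lower().split())) for c in chunks]

def iterative_retrieve(question, corpus, rounds=3, keep_ratio=0.5, top_k=3):
    """Iteratively shrink the search space, keeping the highest-importance chunks."""
    candidates = list(corpus)
    for _ in range(rounds):
        if len(candidates) <= top_k:
            break
        scores = score_chunks(question, candidates)
        ranked = sorted(zip(scores, candidates), key=lambda p: p[0], reverse=True)
        keep = max(top_k, int(len(ranked) * keep_ratio))
        candidates = [c for _, c in ranked[:keep]]
    return candidates[:top_k]

if __name__ == "__main__":
    corpus = [
        "The detective found the letter hidden under the floorboards.",
        "Weather in the port town was mild throughout autumn.",
        "The letter revealed who had forged the will.",
        "A recipe for sourdough bread requires a long fermentation.",
    ]
    print(iterative_retrieve("Who forged the will mentioned in the letter?", corpus))

In the actual method, each refinement round would query the LLM for importance scores instead of counting term overlap, so no embedding index needs to be built or stored.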

@article{wang2025_2505.11908,
  title={ELITE: Embedding-Less retrieval with Iterative Text Exploration},
  author={Zhangyu Wang and Siyuan Gao and Rong Zhou and Hao Wang and Li Ning},
  journal={arXiv preprint arXiv:2505.11908},
  year={2025}
}