Are Optimal Algorithms Still Optimal? Rethinking Sorting in LLM-Based Pairwise Ranking with Batching and Caching

30 May 2025
Juan Wisznia
Cecilia Bolaños
Juan Tollo
Giovanni Franco Gabriel Marraffini
Agustín Gianolini
Noe Fabian Hsueh
Luciano Del Corro
arXiv (abs) · PDF · HTML
Main: 4 pages · 6 figures · 4 tables · Bibliography: 2 pages · Appendix: 3 pages
Abstract

We introduce a novel framework for analyzing sorting algorithms in pairwise ranking prompting (PRP), re-centering the cost model around LLM inferences rather than traditional pairwise comparisons. While comparison counts have traditionally been used to gauge efficiency, our analysis reveals that expensive LLM inferences overturn these predictions; accordingly, our framework encourages strategies such as batching and caching to mitigate inference costs. We show that algorithms that are optimal in the classical setting can lose their efficiency advantage once these optimizations are applied and LLM inferences dominate the cost.
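
To make the shift in cost model concrete, here is a minimal, self-contained Python sketch. It is not the authors' implementation: the PRPOracle class, its batch_size parameter, and the choice of odd-even transposition sort are illustrative assumptions. The oracle charges one unit of cost per batched LLM call rather than per comparison and caches every resolved pair, so an algorithm whose rounds consist of independent comparisons can settle an entire round with a single inference.

# A minimal sketch of the paper's cost model (not the authors' implementation).
# PRPOracle, its batch_size parameter, and the choice of odd-even transposition
# sort are illustrative assumptions. Cost is counted per batched LLM inference,
# not per pairwise comparison, and every resolved pair is cached.

class PRPOracle:
    """Pairwise comparison oracle with batching and caching (illustrative)."""

    def __init__(self, true_order, batch_size=8):
        # A rank table stands in for the LLM judge so the sketch is runnable.
        self.rank = {x: i for i, x in enumerate(true_order)}
        self.batch_size = batch_size
        self.cache = {}           # (a, b) -> True if a should precede b
        self.inference_calls = 0  # the cost the paper argues should be minimized

    def compare_batch(self, pairs):
        """Resolve many pairs at once; only uncached pairs cost inferences."""
        new = [p for p in pairs
               if p not in self.cache and (p[1], p[0]) not in self.cache]
        for i in range(0, len(new), self.batch_size):
            self.inference_calls += 1  # one LLM call per batch of pairs
            for a, b in new[i:i + self.batch_size]:
                self.cache[(a, b)] = self.rank[a] < self.rank[b]

    def less(self, a, b):
        if (a, b) in self.cache:
            return self.cache[(a, b)]
        if (b, a) in self.cache:
            return not self.cache[(b, a)]
        self.compare_batch([(a, b)])  # cache miss: pay for a singleton batch
        return self.cache[(a, b)]

def odd_even_sort(items, oracle):
    """Odd-even transposition sort: each round's comparisons are independent,
    so a whole round can be sent to the LLM as a single batched call."""
    items = list(items)
    n = len(items)
    for r in range(n):
        idx = range(r % 2, n - 1, 2)
        oracle.compare_batch([(items[i], items[i + 1]) for i in idx])
        for i in idx:
            if not oracle.less(items[i], items[i + 1]):
                items[i], items[i + 1] = items[i + 1], items[i]
    return items

docs = list("hgfedcba")  # reverse order: worst case for comparison counts
oracle = PRPOracle(true_order=sorted(docs), batch_size=8)
print(odd_even_sort(docs, oracle))  # ['a', 'b', ..., 'h']
print(oracle.inference_calls)       # at most one call per round, far fewer
                                    # than the ~28 comparisons it resolves

Under this accounting, the number of batched calls rather than the number of comparisons determines cost, which is how an algorithm that is optimal by comparison count, but whose comparisons are inherently sequential, can lose its advantage to a round-parallel algorithm like the one above.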

@article{wisznia2025_2505.24643,
  title={Are Optimal Algorithms Still Optimal? Rethinking Sorting in LLM-Based Pairwise Ranking with Batching and Caching},
  author={Juan Wisznia and Cecilia Bolaños and Juan Tollo and Giovanni Marraffini and Agustín Gianolini and Noe Hsueh and Luciano Del Corro},
  journal={arXiv preprint arXiv:2505.24643},
  year={2025}
}