We introduce a novel framework for analyzing sorting algorithms in pairwise ranking prompting (PRP), re-centering the cost model around LLM inferences rather than traditional pairwise comparisons. Although comparison counts have long been the standard measure of efficiency, our analysis shows that this metric becomes misleading once every comparison requires an expensive LLM inference; our framework therefore accounts for strategies such as batching and caching that reduce the number of inferences needed. We show that algorithms deemed optimal under the classical comparison model can lose their advantage when LLM inferences dominate the cost and such optimizations are applied.
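The sketch below (a minimal illustration, not the authors' implementation) shows the intuition behind the cost model: the classical metric counts pairwise comparisons, while the proposed metric counts LLM inferences, which caching (deduplicating repeated pairs) and batching (resolving several pairs per call) can shrink independently of the comparison count. The name `_llm_judge_batch` is a hypothetical stand-in for a PRP call.

# A minimal sketch, assuming a hypothetical batched PRP judge.
# Classical cost = number of pairwise queries; proposed cost = number of LLM calls.

class PRPCostModel:
    def __init__(self, batch_size=4):
        self.batch_size = batch_size
        self.comparisons = 0   # classical cost metric
        self.inferences = 0    # inference-centered cost metric
        self.cache = {}        # (a, b) -> True if a ranks at or above b

    def _llm_judge_batch(self, pairs):
        """Hypothetical single LLM inference judging up to `batch_size` pairs."""
        self.inferences += 1
        # Placeholder verdicts; a real PRP system would prompt the model here.
        return [a <= b for a, b in pairs]

    def resolve(self, pairs):
        """Answer pairwise queries, serving repeats from cache and batching the rest."""
        self.comparisons += len(pairs)
        missing = [p for p in dict.fromkeys(pairs) if p not in self.cache]
        for i in range(0, len(missing), self.batch_size):
            chunk = missing[i:i + self.batch_size]
            for pair, verdict in zip(chunk, self._llm_judge_batch(chunk)):
                self.cache[pair] = verdict
        return [self.cache[p] for p in pairs]


if __name__ == "__main__":
    model = PRPCostModel(batch_size=4)
    queries = [(1, 2), (3, 4), (1, 2), (2, 3), (1, 2), (4, 5)]
    model.resolve(queries)
    # Six comparisons collapse into a single LLM inference under caching + batching.
    print(model.comparisons, model.inferences)  # -> 6 1

Under this accounting, a sort that issues fewer total comparisons but cannot exploit batching or caching may incur more LLM calls than a classically suboptimal one that can, which is the trade-off the framework is designed to expose.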
@article{wisznia2025_2505.24643,
  title   = {Are Optimal Algorithms Still Optimal? Rethinking Sorting in LLM-Based Pairwise Ranking with Batching and Caching},
  author  = {Juan Wisznia and Cecilia Bolaños and Juan Tollo and Giovanni Marraffini and Agustín Gianolini and Noe Hsueh and Luciano Del Corro},
  journal = {arXiv preprint arXiv:2505.24643},
  year    = {2025}
}