Leveraging LLM Inconsistency to Boost Pass@k Performance

Abstract

Large language models (LLMs) demonstrate impressive capabilities across numerous domains, yet their performance is inconsistent under minor input changes. Rather than treating this as a drawback, in this paper we introduce a novel method for leveraging model inconsistency to boost Pass@k performance. Specifically, we present a "Variator" agent that generates k variants of a given task and submits one candidate solution for each. Our variant-generation approach is task-agnostic and compatible with free-form inputs, making it applicable to a wide range of domains. We demonstrate the efficacy of our agent theoretically using a probabilistic model of the inconsistency effect, and show empirically that it outperforms the baseline on the APPS dataset. Furthermore, we establish that inconsistency persists even in frontier reasoning models across the coding and cybersecurity domains, suggesting our method is likely to remain relevant for future model generations.
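
For intuition, below is a minimal sketch of the Variator pattern the abstract describes: under Pass@k, a task counts as solved if any of k submitted candidates passes, so the agent spends its k attempts on k distinct variants of the task rather than k samples of an identical prompt. The helper names (generate_variant, solve, passes_tests) are illustrative assumptions, not the paper's implementation.

# Minimal sketch of the Variator loop; helper names are hypothetical.
from typing import Callable

def variator_pass_at_k(
    task: str,
    k: int,
    generate_variant: Callable[[str], str],  # rephrases the task, preserving meaning
    solve: Callable[[str], str],             # one LLM solution attempt per variant
    passes_tests: Callable[[str], bool],     # evaluates a candidate solution
) -> bool:
    """Generate k task variants, submit one candidate per variant,
    and succeed if any candidate passes (the Pass@k criterion)."""
    for _ in range(k):
        variant = generate_variant(task)  # inconsistency means different variants
        candidate = solve(variant)        # can elicit different model solutions
        if passes_tests(candidate):
            return True
    return False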

@article{dalal2025_2505.12938,
  title={Leveraging LLM Inconsistency to Boost Pass@k Performance},
  author={Uri Dalal and Meirav Segal and Zvika Ben-Haim and Dan Lahav and Omer Nevo},
  journal={arXiv preprint arXiv:2505.12938},
  year={2025}
}