V1: Unifying Generation and Self-Verification for Parallel Reasoners

Harman Singh
Xiuyu Li
Kusha Sareen
Monishwaran Maheswaran
Sijun Tan
Xiaoxia Wu
Junxiong Wang
Alpay Ariyak
Qingyang Wu
Samir Khaki
Rishabh Tiwari
Long Lian
Yucheng Lu
Boyi Li
Alane Suhr
Ben Athiwaratkun
Kurt Keutzer
Main: 13 pages, 18 figures, 2 tables; Bibliography: 5 pages; Appendix: 21 pages
Abstract

Test-time scaling for complex reasoning tasks shows that leveraging inference-time compute, by methods such as independently sampling and aggregating multiple solutions, results in significantly better task outcomes. However, a critical bottleneck is verification: sampling is only effective if correct solutions can be reliably identified among candidates. While existing approaches typically evaluate candidates independently via scalar scoring, we demonstrate that models are substantially stronger at pairwise self-verification. Leveraging this insight, we introduce V1, a framework that unifies generation and verification through efficient pairwise ranking. V1 comprises two components: V1-Infer, an uncertainty-guided algorithm using a tournament-based ranking that dynamically allocates self-verification compute to candidate pairs whose relative correctness is most uncertain; and V1-PairRL, an RL framework that jointly trains a single model as both generator and pairwise self-verifier, ensuring the verifier adapts to the generator's evolving distribution. On code generation (LiveCodeBench, CodeContests, SWE-Bench) and math reasoning (AIME, HMMT) benchmarks, V1-Infer improves Pass@1 by up to 10% over pointwise verification and outperforms recent test-time scaling methods while being significantly more efficient. Furthermore, V1-PairRL achieves 7–9% test-time scaling gains over standard RL and pointwise joint training, and improves base Pass@1 by up to 8.7% over standard RL in a code-generation setting.
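To make the uncertainty-guided tournament idea concrete, here is a minimal sketch in Python. It is not the paper's implementation: the `verify` callable stands in for the model's pairwise self-verification (returning an estimated probability that one candidate is more correct than the other), and the names, budget scheme, and uncertainty threshold are illustrative assumptions. Near-tie comparisons (probability close to 0.5) receive repeated verifier calls while the compute budget lasts, which is the spirit of allocating verification compute to the most uncertain pairs.

```python
def tournament_select(candidates, verify, budget=16, threshold=0.1):
    """Pick a winner by single-elimination tournament with pairwise verification.

    candidates: list of candidate solutions.
    verify(a, b): stand-in for model self-verification; returns an
        estimated probability that `a` is more correct than `b`.
    budget: extra verifier calls reserved for uncertain (near-tie) pairs.
    threshold: |p - 0.5| below this counts as uncertain.
    """
    pool = list(candidates)
    while len(pool) > 1:
        winners = []
        for i in range(0, len(pool) - 1, 2):
            a, b = pool[i], pool[i + 1]
            p = verify(a, b)
            # Uncertainty-guided reallocation: re-query near-ties while budget lasts.
            while abs(p - 0.5) < threshold and budget > 0:
                budget -= 1
                p = 0.5 * (p + verify(a, b))  # average repeated judgments
            winners.append(a if p >= 0.5 else b)
        if len(pool) % 2 == 1:  # odd one out gets a bye
            winners.append(pool[-1])
        pool = winners
    return pool[0]


# Toy usage with a deterministic stand-in verifier (larger number "wins"):
toy_verify = lambda a, b: 1.0 if a > b else 0.0
best = tournament_select([3, 1, 4, 1, 5, 9, 2, 6], toy_verify)
```

With the deterministic toy verifier the tournament simply returns the maximum; the interesting regime is a noisy verifier, where the budget is spent resolving the pairs whose outcome is least certain.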
