
Ranked Voting based Self-Consistency of Large Language Models

Abstract

Majority voting is considered an effective method to enhance chain-of-thought reasoning, as it selects the answer with the highest "self-consistency" among different reasoning paths (Wang et al., 2023). However, previous chain-of-thought reasoning methods typically generate only a single answer in each trial, thereby ignoring the possibility of other potential answers. As a result, these alternative answers are often overlooked in subsequent voting processes. In this work, we propose to generate ranked answers in each reasoning process and conduct ranked voting among multiple ranked answers from different responses, thereby making the overall self-consistency more reliable. Specifically, we use three ranked voting methods: instant-runoff voting, Borda count voting, and mean reciprocal rank voting. We validate our methods on six datasets, including three multiple-choice and three open-ended question-answering tasks, using both advanced open-source and closed-source large language models. Extensive experimental results indicate that our proposed method outperforms the baselines, showcasing the potential of leveraging the information of ranked answers and using ranked voting to improve reasoning performance. The code is available at this https URL.
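As a concrete illustration of one of the named aggregation rules, a minimal Borda count sketch is shown below: each reasoning path contributes a best-first ranked answer list, an answer at position i in a ranking of length n receives n - 1 - i points, and the answer with the highest total wins. This is an illustrative example only, not the paper's implementation; the function name and sample rankings are hypothetical.

```python
from collections import defaultdict

def borda_count(rankings):
    """Aggregate ranked answer lists with Borda count.

    Each ranking is a list of answers ordered best-first; an answer at
    position i in a ranking of length n receives n - 1 - i points.
    Returns the answer with the highest total score.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for i, answer in enumerate(ranking):
            scores[answer] += n - 1 - i
    return max(scores, key=scores.get)

# Three hypothetical reasoning paths, each yielding a ranked answer list.
rankings = [
    ["A", "B", "C"],
    ["B", "A", "C"],
    ["A", "C", "B"],
]
print(borda_count(rankings))  # "A" wins: 2 + 1 + 2 = 5 points
```

Note that under plain majority voting over only the top-ranked answers, "A" would also win here, but the methods can diverge when first-place votes are split while one answer is consistently ranked second.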

@article{wang2025_2505.10772,
  title={Ranked Voting based Self-Consistency of Large Language Models},
  author={Weiqin Wang and Yile Wang and Hui Huang},
  journal={arXiv preprint arXiv:2505.10772},
  year={2025}
}