CSC-SQL: Corrective Self-Consistency in Text-to-SQL via Reinforcement Learning

Abstract

Large language models (LLMs) have demonstrated strong capabilities in translating natural language questions about relational databases into SQL queries. In particular, test-time scaling techniques such as Self-Consistency and Self-Correction can enhance SQL generation accuracy by increasing computational effort during inference. However, these methods have notable limitations: Self-Consistency may select suboptimal outputs despite majority votes, while Self-Correction typically addresses only syntactic errors. To leverage the strengths of both approaches, we propose CSC-SQL, a novel method that integrates Self-Consistency and Self-Correction. CSC-SQL selects the two most frequently occurring outputs from parallel sampling and feeds them into a merge revision model for correction. Additionally, we employ the Group Relative Policy Optimization (GRPO) algorithm to fine-tune both the SQL generation and revision models via reinforcement learning, significantly enhancing output quality. Experimental results confirm the effectiveness and generalizability of CSC-SQL. On the BIRD development set, our 3B model achieves 65.28% execution accuracy, while the 7B model achieves 69.19%. The code will be open sourced at this https URL.
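The candidate-selection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt template, SQL normalization, and function names are assumptions for exposition.

```python
from collections import Counter

def top_two_candidates(sql_samples):
    """Return the two most frequent SQL strings from parallel samples.

    Normalization here (whitespace trim, trailing semicolon) is a
    simplifying assumption; the paper may group candidates differently,
    e.g. by execution result.
    """
    counts = Counter(s.strip().rstrip(";") for s in sql_samples)
    return [sql for sql, _ in counts.most_common(2)]

def build_merge_revision_prompt(question, schema, candidates):
    """Compose a prompt asking a revision model to merge/correct the
    top candidates. The wording is illustrative, not the actual template."""
    cand_text = "\n".join(
        f"Candidate {i + 1}:\n{c}" for i, c in enumerate(candidates)
    )
    return (
        f"Database schema:\n{schema}\n\n"
        f"Question: {question}\n\n"
        f"{cand_text}\n\n"
        "Compare the candidates and output one corrected SQL query."
    )

# Toy example: four parallel samples, two distinct queries after normalization.
samples = [
    "SELECT name FROM users",
    "SELECT name FROM users;",
    "SELECT id FROM users",
    "SELECT name FROM users",
]
top_two = top_two_candidates(samples)
print(top_two)  # → ['SELECT name FROM users', 'SELECT id FROM users']
```

The two selected candidates would then be passed, together with the question and schema, to the GRPO-trained merge revision model rather than decided by majority vote alone.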

@article{sheng2025_2505.13271,
  title={CSC-SQL: Corrective Self-Consistency in Text-to-SQL via Reinforcement Learning},
  author={Lei Sheng and Shuai-Shuai Xu},
  journal={arXiv preprint arXiv:2505.13271},
  year={2025}
}