LearNAT: Learning NL2SQL with AST-guided Task Decomposition for Large Language Models

3 April 2025
Weibin Liao
Xin Gao
Tianyu Jia
Rihong Qiu
Yifan Zhu
Yang Lin
Xu Chu
Junfeng Zhao
Yasha Wang
Abstract

Natural Language to SQL (NL2SQL) has emerged as a critical task for enabling seamless interaction with databases. Recent advancements in Large Language Models (LLMs) have demonstrated remarkable performance in this domain. However, existing NL2SQL methods predominantly rely on closed-source LLMs leveraging prompt engineering, while open-source models typically require fine-tuning to acquire domain-specific knowledge. Despite these efforts, open-source LLMs struggle with complex NL2SQL tasks due to the indirect expression of user query objectives and the semantic gap between user queries and database schemas. Inspired by the application of reinforcement learning in mathematical problem-solving to encourage step-by-step reasoning in LLMs, we propose LearNAT (Learning NL2SQL with AST-guided Task Decomposition), a novel framework that improves the performance of open-source LLMs on complex NL2SQL tasks through task decomposition and reinforcement learning. LearNAT introduces three key components: (1) a Decomposition Synthesis Procedure that leverages Abstract Syntax Trees (ASTs) to guide efficient search and pruning strategies for task decomposition, (2) Margin-aware Reinforcement Learning, which employs fine-grained step-level optimization via DPO with AST margins, and (3) Adaptive Demonstration Reasoning, a mechanism for dynamically selecting relevant examples to enhance decomposition capabilities. Extensive experiments on two benchmark datasets, Spider and BIRD, demonstrate that LearNAT enables a 7B-parameter open-source LLM to achieve performance comparable to GPT-4, while offering improved efficiency and accessibility.
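The first component, the Decomposition Synthesis Procedure, centers on parsing SQL into an Abstract Syntax Tree (AST) and treating nested substructures as candidate subtasks. The snippet below is a minimal illustrative sketch of that idea only, not the authors' implementation: it assumes the open-source sqlglot parser and simply extracts each nested SELECT as a separate step, whereas the paper additionally uses the AST to guide search and pruning over candidate decompositions.

import sqlglot
from sqlglot import expressions as exp

def decompose_by_ast(sql: str) -> list[str]:
    """List candidate sub-queries, one per SELECT node in the AST, innermost first."""
    tree = sqlglot.parse_one(sql)
    # Every nested SELECT in the parse tree becomes a candidate subtask.
    steps = [node.sql() for node in tree.find_all(exp.Select)]
    # Shorter (inner) fragments come first, so simpler steps precede the full query.
    return sorted(steps, key=len)

query = ("SELECT name FROM singer WHERE singer_id IN "
         "(SELECT singer_id FROM concert WHERE year = 2024)")
for i, step in enumerate(decompose_by_ast(query), 1):
    print(f"step {i}: {step}")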

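The second component, Margin-aware Reinforcement Learning, applies Direct Preference Optimization (DPO) at the level of individual decomposition steps with a margin derived from the ASTs. As a hedged sketch of what such an objective could look like (the exact formulation is given in the paper; the margin term m and its weight \gamma are assumptions here), one can write a margin-augmented DPO loss as

\mathcal{L}_{\text{margin-DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)} - \gamma\, m(y_w, y_l)\right)\right]

where y_w and y_l are the preferred and dispreferred decomposition steps and m(y_w, y_l) is an AST-based margin (for example, a tree distance between their parses) that demands a larger reward gap when the two candidates differ more structurally.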
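The third component, Adaptive Demonstration Reasoning, dynamically selects relevant examples to include in the prompt. One common way to realize such selection is embedding-based retrieval; the sketch below illustrates that generic pattern under the assumption of precomputed embeddings and is not taken from the paper.

import numpy as np

def select_demonstrations(query_vec, demo_vecs, demos, k=3):
    """Return the k stored demonstrations whose embeddings are closest to the query."""
    # Cosine similarity between the query embedding and every demonstration embedding.
    sims = demo_vecs @ query_vec / (
        np.linalg.norm(demo_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-8)
    return [demos[i] for i in np.argsort(-sims)[:k]]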
@article{liao2025_2504.02327,
  title={LearNAT: Learning NL2SQL with AST-guided Task Decomposition for Large Language Models},
  author={Weibin Liao and Xin Gao and Tianyu Jia and Rihong Qiu and Yifan Zhu and Yang Lin and Xu Chu and Junfeng Zhao and Yasha Wang},
  journal={arXiv preprint arXiv:2504.02327},
  year={2025}
}