ResearchTrend.AI


SAGE: Specification-Aware Grammar Extraction for Automated Test Case Generation with LLMs

4 June 2025
Aditi
Hyunwoo Park
Sicheol Sung
Yo-Sub Han
Sang-Ki Ko
Main: 10 pages, 4 figures; Bibliography: 2 pages
Abstract

Grammar-based test case generation has proven effective for competitive programming problems, but generating valid and general grammars from natural language specifications remains a key challenge, especially under limited supervision. Context-Free Grammars with Counters (CCFGs) have recently been introduced as a formalism that represents such specifications with logical constraints by storing and reusing counter values during derivation. In this work, we explore the use of open-source large language models (LLMs) to induce CCFGs from specifications using a small number of labeled examples and verifiable reward-guided reinforcement learning. Our approach first fine-tunes an open-source LLM to perform specification-to-grammar translation, and then applies Group Relative Policy Optimization (GRPO) to enhance grammar validity and generality. We also examine the effectiveness of iterative feedback for open- and closed-source LLMs in correcting syntactic and semantic errors in generated grammars. Experimental results show that our approach, SAGE, achieves stronger generalization and outperforms 17 open- and closed-source LLMs in both grammar quality and test effectiveness, improving over the state of the art by 15.92%p in grammar validity and 12.34%p in test effectiveness. We provide our implementation and dataset at the following anonymous repository: this https URL
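To illustrate the counter mechanism the abstract describes, here is a minimal, hypothetical sketch (not the paper's implementation or its CCFG notation): a toy specification such as "first line: an integer n; second line: n space-separated integers" is derived by storing the counter n when it is first generated and reusing it to control a later expansion, so every generated test case satisfies the constraint by construction.

```python
import random

def derive_test_case(rng=random):
    """Derive one test case from a toy counter-aware grammar.

    Mimics the CCFG idea: a counter value is stored when the first
    nonterminal is expanded and reused to bound a later expansion.
    """
    # Store the counter value while deriving the first line...
    n = rng.randint(1, 5)
    # ...then reuse it: the second line expands to exactly n tokens.
    values = [str(rng.randint(0, 9)) for _ in range(n)]
    return f"{n}\n{' '.join(values)}\n"

if __name__ == "__main__":
    case = derive_test_case()
    first, second = case.strip().split("\n")
    # The counter constraint holds by construction.
    assert len(second.split()) == int(first)
    print(case)
```

A plain context-free grammar cannot enforce this length dependency between the two lines; storing the counter during derivation is precisely what the CCFG formalism adds.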

@article{aditi2025_2506.11081,
  title={SAGE: Specification-Aware Grammar Extraction for Automated Test Case Generation with LLMs},
  author={Aditi and Hyunwoo Park and Sicheol Sung and Yo-Sub Han and Sang-Ki Ko},
  journal={arXiv preprint arXiv:2506.11081},
  year={2025}
}