RECAST: Strengthening LLMs' Complex Instruction Following with Constraint-Verifiable Data

25 May 2025
Wenhao Liu
Zhengkang Guo
Mingchen Xie
Jingwen Xu
Zisu Huang
Muzhao Tian
Jianhan Xu
Muling Wu
Xiaohua Wang
Changze Lv
He-Da Wang
Hu Yao
Xiaoqing Zheng
Xuanjing Huang
Abstract

Large language models (LLMs) are increasingly expected to tackle complex tasks, driven by their expanding applications and users' growing proficiency in crafting sophisticated prompts. However, as the number of explicitly stated requirements increases (particularly more than 10 constraints), LLMs often struggle to accurately follow such complex instructions. To address this challenge, we propose RECAST, a novel framework for synthesizing datasets where each example incorporates far more constraints than those in existing benchmarks. These constraints are extracted from real-world prompt-response pairs to ensure practical relevance. RECAST enables automatic verification of constraint satisfaction via rule-based validators for quantitative constraints and LLM-based validators for qualitative ones. Using this framework, we construct RECAST-30K, a large-scale, high-quality dataset comprising 30k instances spanning 15 constraint types. Experimental results demonstrate that models fine-tuned on RECAST-30K show substantial improvements in following complex instructions. Moreover, the verifiability provided by RECAST enables the design of reward functions for reinforcement learning, which further boosts model performance on complex and challenging tasks.
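The abstract describes automatic verification of constraint satisfaction: rule-based validators for quantitative constraints, plus an aggregate signal usable as a reinforcement-learning reward. A minimal sketch of that idea, assuming hypothetical constraint types (word limits, bullet counts, keyword counts) that are illustrative only and not taken from the paper:

```python
import re

# Hypothetical sketch of rule-based validators for quantitative
# constraints, in the spirit of RECAST's verifiers. The constraint
# types and function names here are assumptions, not the paper's API.

def check_max_words(response: str, limit: int) -> bool:
    """Constraint: the response contains at most `limit` words."""
    return len(response.split()) <= limit

def check_num_bullets(response: str, n: int) -> bool:
    """Constraint: the response contains exactly `n` bullet lines."""
    bullets = [ln for ln in response.splitlines()
               if ln.lstrip().startswith(("-", "*"))]
    return len(bullets) == n

def check_keyword_count(response: str, keyword: str, n: int) -> bool:
    """Constraint: `keyword` appears exactly `n` times (case-insensitive)."""
    return len(re.findall(re.escape(keyword), response, re.IGNORECASE)) == n

def satisfaction_rate(response: str, checks) -> float:
    """Fraction of constraints satisfied; a scalar like this could
    serve as a verifiable reward signal for reinforcement learning."""
    results = [fn(response, *args) for fn, *args in checks]
    return sum(results) / len(results)

response = "- apples\n- bananas\n- cherries"
checks = [
    (check_max_words, 50),
    (check_num_bullets, 3),
    (check_keyword_count, "apples", 1),
]
print(satisfaction_rate(response, checks))  # 1.0: all three constraints hold
```

Qualitative constraints (tone, style, topical coverage) resist such regex-style rules, which is why the paper pairs these with LLM-based validators.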

@article{liu2025_2505.19030,
  title={RECAST: Strengthening LLMs' Complex Instruction Following with Constraint-Verifiable Data},
  author={Wenhao Liu and Zhengkang Guo and Mingchen Xie and Jingwen Xu and Zisu Huang and Muzhao Tian and Jianhan Xu and Muling Wu and Xiaohua Wang and Changze Lv and He-Da Wang and Hu Yao and Xiaoqing Zheng and Xuanjing Huang},
  journal={arXiv preprint arXiv:2505.19030},
  year={2025}
}