Large language models (LLMs) are increasingly expected to tackle complex tasks, driven by their expanding range of applications and by users' growing proficiency in crafting sophisticated prompts. However, as the number of explicitly stated requirements increases (particularly beyond 10 constraints), LLMs often struggle to follow such complex instructions accurately. To address this challenge, we propose RECAST, a novel framework for synthesizing datasets in which each example incorporates far more constraints than those in existing benchmarks. These constraints are extracted from real-world prompt-response pairs to ensure practical relevance. RECAST enables automatic verification of constraint satisfaction via rule-based validators for quantitative constraints and LLM-based validators for qualitative ones. Using this framework, we construct RECAST-30K, a large-scale, high-quality dataset comprising 30k instances spanning 15 constraint types. Experimental results demonstrate that models fine-tuned on RECAST-30K show substantial improvements in following complex instructions. Moreover, the verifiability provided by RECAST enables the design of reward functions for reinforcement learning, further boosting model performance on complex and challenging tasks.
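To make the verification idea concrete, the following is a minimal illustrative sketch (not the paper's actual implementation): it shows how a rule-based validator for a quantitative constraint could be expressed as a simple predicate, and how per-constraint verdicts could be aggregated into a reward signal for reinforcement learning. The names `Constraint`, `max_word_count`, `must_mention`, and `constraint_reward` are assumptions introduced here for illustration only.

from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch: a constraint pairs a natural-language description
# with a verifier that returns True when a response satisfies it.
@dataclass
class Constraint:
    description: str
    verify: Callable[[str], bool]

def max_word_count(limit: int) -> Constraint:
    """Rule-based validator for a quantitative constraint (word budget)."""
    return Constraint(
        description=f"Response must contain at most {limit} words.",
        verify=lambda response: len(response.split()) <= limit,
    )

def must_mention(keyword: str) -> Constraint:
    """Rule-based validator: the response must contain a required keyword."""
    return Constraint(
        description=f"Response must mention '{keyword}'.",
        verify=lambda response: keyword.lower() in response.lower(),
    )

def constraint_reward(response: str, constraints: List[Constraint]) -> float:
    """Reward = fraction of constraints the response satisfies, in [0, 1]."""
    if not constraints:
        return 0.0
    satisfied = sum(c.verify(response) for c in constraints)
    return satisfied / len(constraints)

if __name__ == "__main__":
    constraints = [max_word_count(50), must_mention("RECAST")]
    response = "RECAST builds constraint-verifiable training data for LLMs."
    print(constraint_reward(response, constraints))  # 1.0 if both constraints hold

Qualitative constraints (e.g., tone or style requirements) would instead route the response and the constraint description to an LLM-based judge, but the aggregation into a scalar reward could follow the same pattern.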
@article{liu2025_2505.19030,
  title   = {RECAST: Strengthening LLMs' Complex Instruction Following with Constraint-Verifiable Data},
  author  = {Wenhao Liu and Zhengkang Guo and Mingchen Xie and Jingwen Xu and Zisu Huang and Muzhao Tian and Jianhan Xu and Muling Wu and Xiaohua Wang and Changze Lv and He-Da Wang and Hu Yao and Xiaoqing Zheng and Xuanjing Huang},
  journal = {arXiv preprint arXiv:2505.19030},
  year    = {2025}
}