DecIF: Improving Instruction-Following through Meta-Decomposition

20 May 2025
Tingfeng Hui
Pengyu Zhu
Bowen Ping
Ling Tang
Guanting Dong
Yaqi Zhang
Sen Su
arXiv: 2505.13990 (abs · PDF · HTML)
Main: 7 pages · 3 figures · 9 tables · Bibliography: 3 pages · Appendix: 10 pages
Abstract

Instruction-following has emerged as a crucial capability for large language models (LLMs). However, existing approaches often rely on pre-existing documents or external resources to synthesize instruction-following data, which limits their flexibility and generalizability. In this paper, we introduce DecIF, a fully autonomous, meta-decomposition guided framework that generates diverse and high-quality instruction-following data using only LLMs. DecIF is grounded in the principle of decomposition. For instruction generation, we guide LLMs to iteratively produce various types of meta-information, which are then combined with response constraints to form well-structured and semantically rich instructions. We further utilize LLMs to detect and resolve potential inconsistencies within the generated instructions. Regarding response generation, we decompose each instruction into atomic-level evaluation criteria, enabling rigorous validation and the elimination of inaccurate instruction-response pairs. Extensive experiments across a wide range of scenarios and settings demonstrate DecIF's superior performance on instruction-following tasks. Further analysis highlights its strong flexibility, scalability, and generalizability in automatically synthesizing high-quality instruction data.
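The abstract describes a multi-stage synthesis pipeline: generate meta-information, combine it with response constraints into instructions, filter out inconsistent instructions, decompose each surviving instruction into atomic evaluation criteria, and keep only instruction-response pairs that pass every criterion. The sketch below illustrates that flow in Python; it is not the authors' code. The `llm` helper, all prompts, and all function names are hypothetical placeholders standing in for whatever model calls DecIF actually uses.

```python
# Hypothetical sketch of a DecIF-style synthesis loop as described in the abstract.
# `llm` is a placeholder for any text-in/text-out call to an instruction-tuned model;
# every prompt and function name here is illustrative, not the paper's implementation.

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your own model call here")

def generate_meta_information(topic: str, n: int = 5) -> list[str]:
    """Step 1: iteratively ask the model for meta-information (e.g., task scenarios)."""
    out = llm(f"List {n} distinct task scenarios related to: {topic}")
    return [line.strip("- ").strip() for line in out.splitlines() if line.strip()]

def compose_instruction(meta: str, constraints: list[str]) -> str:
    """Step 2: combine meta-information with response constraints into one instruction."""
    return llm(
        "Write a single clear instruction for the scenario below, "
        "requiring the response to satisfy every constraint.\n"
        f"Scenario: {meta}\nConstraints: {'; '.join(constraints)}"
    )

def is_consistent(instruction: str) -> bool:
    """Step 3: let the model flag internally contradictory instructions."""
    verdict = llm(
        "Does this instruction contain contradictory requirements? "
        f"Answer yes or no.\n{instruction}"
    )
    return verdict.strip().lower().startswith("no")

def atomic_criteria(instruction: str) -> list[str]:
    """Step 4: decompose the instruction into atomic, checkable evaluation criteria."""
    out = llm(f"Break this instruction into atomic yes/no evaluation criteria:\n{instruction}")
    return [c.strip("- ").strip() for c in out.splitlines() if c.strip()]

def response_passes(response: str, criteria: list[str]) -> bool:
    """Step 5: keep only instruction-response pairs that satisfy every criterion."""
    return all(
        llm(f"Criterion: {c}\nResponse: {response}\nSatisfied? Answer yes or no.")
        .strip().lower().startswith("yes")
        for c in criteria
    )

def synthesize(topic: str, constraints: list[str]) -> list[tuple[str, str]]:
    """End-to-end loop: meta-information -> instruction -> consistency check -> validated pair."""
    pairs = []
    for meta in generate_meta_information(topic):
        instruction = compose_instruction(meta, constraints)
        if not is_consistent(instruction):
            continue
        response = llm(instruction)
        if response_passes(response, atomic_criteria(instruction)):
            pairs.append((instruction, response))
    return pairs
```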

View on arXiv: https://arxiv.org/abs/2505.13990
@article{hui2025_2505.13990,
  title={DecIF: Improving Instruction-Following through Meta-Decomposition},
  author={Tingfeng Hui and Pengyu Zhu and Bowen Ping and Ling Tang and Guanting Dong and Yaqi Zhang and Sen Su},
  journal={arXiv preprint arXiv:2505.13990},
  year={2025}
}