A Multi-Dimensional Constraint Framework for Evaluating and Improving Instruction Following in Large Language Models

12 May 2025
Junjie Ye
Caishuang Huang
Zhuohan Chen
Wenjie Fu
Chenyuan Yang
Leyi Yang
Yilong Wu
Peng Wang
Meng Zhou
Xiaolong Yang
Tao Gui
Qi Zhang
Zhongchao Shi
Jianping Fan
Xuanjing Huang
Abstract

Instruction following evaluates large language models (LLMs) on their ability to generate outputs that adhere to user-defined constraints. However, existing benchmarks often rely on templated constraint prompts, which lack the diversity of real-world usage and limit fine-grained performance assessment. To fill this gap, we propose a multi-dimensional constraint framework encompassing three constraint patterns, four constraint categories, and four difficulty levels. Building on this framework, we develop an automated instruction generation pipeline that performs constraint expansion, conflict detection, and instruction rewriting, yielding 1,200 code-verifiable instruction-following test samples. We evaluate 19 LLMs across seven model families and uncover substantial variation in performance across constraint forms. For instance, average performance drops from 77.67% at Level I to 32.96% at Level IV. Furthermore, we demonstrate the utility of our approach by using it to generate data for reinforcement learning, achieving substantial gains in instruction following without degrading general performance. In-depth analysis indicates that these gains stem primarily from changes to the parameters of the model's attention modules, which enhance constraint recognition and adherence. Code and data are available at this https URL.
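
The abstract's "code-verifiable" samples and four difficulty levels suggest a simple programmatic reading: each generated instruction ships with deterministic checks that a model's response must pass. The Python sketch below is a hypothetical illustration of that idea only; the sample format, the example constraints, and the assumption that difficulty Level N corresponds to N stacked constraints are ours, not the authors' released code.

# Minimal sketch of a code-verifiable instruction-following sample.
# All names, constraints, and the "Level N = N stacked constraints"
# reading are illustrative assumptions, not the paper's code.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ConstraintSample:
    instruction: str
    # Each check is a deterministic predicate over the model's response,
    # which is what makes the sample verifiable by code alone.
    checks: List[Callable[[str], bool]] = field(default_factory=list)

    @property
    def level(self) -> int:
        # Assumed here: difficulty level = number of stacked constraints.
        return len(self.checks)

    def verify(self, response: str) -> bool:
        # The sample passes only if every constraint is satisfied.
        return all(check(response) for check in self.checks)

# Example Level II sample combining a length and a format constraint.
sample = ConstraintSample(
    instruction=("Summarize the article in at most 50 words and end "
                 "with the exact phrase 'END OF SUMMARY'."),
    checks=[
        lambda r: len(r.split()) <= 50,
        lambda r: r.rstrip().endswith("END OF SUMMARY"),
    ],
)

response = "The article proposes a constraint framework. END OF SUMMARY"
print(f"Level {sample.level} sample passed: {sample.verify(response)}")

Deterministic predicates of this kind are presumably what "code-verifiable" buys: a response either satisfies every stacked constraint or it fails, with no model-based judge in the loop.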

@article{ye2025_2505.07591,
  title={A Multi-Dimensional Constraint Framework for Evaluating and Improving Instruction Following in Large Language Models},
  author={Junjie Ye and Caishuang Huang and Zhuohan Chen and Wenjie Fu and Chenyuan Yang and Leyi Yang and Yilong Wu and Peng Wang and Meng Zhou and Xiaolong Yang and Tao Gui and Qi Zhang and Zhongchao Shi and Jianping Fan and Xuanjing Huang},
  journal={arXiv preprint arXiv:2505.07591},
  year={2025}
}