ResearchTrend.AI


Zero-Shot Generalization during Instruction Tuning: Insights from Similarity and Granularity

17 June 2024
Bingxiang He
Ning Ding
Cheng Qian
Jia Deng
Ganqu Cui
Lifan Yuan
Huan-ang Gao
Huimin Chen
Zhiyuan Liu
Maosong Sun
Abstract

Understanding alignment techniques begins with comprehending the zero-shot generalization brought by instruction tuning, yet little of the underlying mechanism is understood. Existing work has largely been confined to the task level, without considering that tasks are artificially defined and, to LLMs, consist merely of tokens and representations. This line of research has examined transfer between tasks from a task-pair perspective, with few studies focusing on zero-shot generalization from the perspective of the data itself. To bridge this gap, we first demonstrate through multiple metrics that zero-shot generalization happens very early during instruction tuning. Next, we investigate what facilitates zero-shot generalization from both data-similarity and data-granularity perspectives, confirming that encountering highly similar, fine-grained training data early in instruction tuning, free of the constraints of artificially defined "tasks", enables better generalization. Finally, we propose a more grounded training-data arrangement method, Test-centric Multi-turn Arrangement, and show its effectiveness in promoting continual learning and further loss reduction. For the first time, we show that zero-shot generalization during instruction tuning is a form of similarity-based generalization between training and test data at the instance level. We hope our analysis advances the understanding of zero-shot generalization during instruction tuning and contributes to the development of more aligned LLMs. Our code is released at this https URL.

@article{he2025_2406.11721,
  title={The Right Time Matters: Data Arrangement Affects Zero-Shot Generalization in Instruction Tuning},
  author={Bingxiang He and Ning Ding and Cheng Qian and Jia Deng and Ganqu Cui and Lifan Yuan and Haiwen Hong and Huan-ang Gao and Longtao Huang and Hui Xue and Huimin Chen and Zhiyuan Liu and Maosong Sun},
  journal={arXiv preprint arXiv:2406.11721},
  year={2025}
}