
Self-Data Distillation for Recovering Quality in Pruned Large Language Models

13 October 2024
Vithursan Thangarasa
Ganesh Venkatesh
Mike Lasby
Nish Sinnadurai
Sean Lie
Main: 10 pages · Bibliography: 4 pages · Appendix: 5 pages · 4 figures · 8 tables
Abstract

Large language models have driven significant progress in natural language processing, but their deployment requires substantial compute and memory resources. As models scale, compression techniques become essential for balancing model quality with computational efficiency. Structured pruning, which removes less critical components of the model, is a promising strategy for reducing complexity. However, one-shot pruning often results in significant quality degradation, particularly in tasks requiring multi-step reasoning. To recover lost quality, supervised fine-tuning (SFT) is commonly applied, but it can lead to catastrophic forgetting by shifting the model's learned data distribution. Therefore, addressing the degradation from both pruning and SFT is essential to preserve the original model's quality. In this work, we propose self-data distilled fine-tuning to address these challenges. Our approach leverages the original, unpruned model to generate a distilled dataset that preserves semantic richness and mitigates catastrophic forgetting by maintaining alignment with the base model's knowledge. Empirically, we demonstrate that self-data distillation consistently outperforms standard SFT, improving average accuracy by up to 8% on the HuggingFace OpenLLM Leaderboard v1. Specifically, when pruning 6 decoder blocks on Llama3.1-8B Instruct (i.e., 32 to 26 layers, reducing the model size from 8.03B to 6.72B parameters), our method retains 91.2% of the original model's accuracy compared to 81.7% with SFT, while reducing real-world FLOPs by 16.30%. Furthermore, our approach scales effectively across datasets, with the quality improving as the dataset size increases.
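The abstract describes two steps: structurally pruning decoder blocks from the model, then fine-tuning the pruned model on data whose targets were regenerated by the original, unpruned model (self-data distillation). The sketch below is a minimal illustration of that pipeline, assuming the Hugging Face transformers API and a Llama-3.1-8B-Instruct checkpoint; the block indices to drop, the prompt set, and the helper names are hypothetical placeholders, not the authors' exact setup.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # unpruned teacher (assumed identifier)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
teacher = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)

# --- 1) Structured (depth) pruning: drop 6 contiguous decoder blocks (32 -> 26 layers) ---
pruned = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)
drop = set(range(22, 28))  # hypothetical block indices chosen for removal
kept_layers = torch.nn.ModuleList(
    [layer for i, layer in enumerate(pruned.model.layers) if i not in drop]
)
pruned.model.layers = kept_layers
pruned.config.num_hidden_layers = len(kept_layers)
# In recent transformers versions each decoder layer tracks its index for KV-cache
# bookkeeping, so renumber the surviving blocks.
for new_idx, layer in enumerate(pruned.model.layers):
    layer.self_attn.layer_idx = new_idx

# --- 2) Self-data distillation: let the *unpruned* model regenerate the fine-tuning targets ---
def self_distill(prompts, max_new_tokens=512):
    """Generate responses from the original model so the SFT data stays on-distribution."""
    records = []
    for prompt in prompts:
        inputs = tokenizer(prompt, return_tensors="pt").to(teacher.device)
        with torch.no_grad():
            out = teacher.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
        response = tokenizer.decode(
            out[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )
        records.append({"prompt": prompt, "response": response})
    return records

# distilled_data = self_distill(original_sft_prompts)  # original_sft_prompts is a placeholder
# The pruned model is then fine-tuned with a standard SFT loop on `distilled_data` rather than
# the original labels, which the paper reports recovers more of the base model's quality.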

View on arXiv
@article{thangarasa2025_2410.09982,
  title={Self-Data Distillation for Recovering Quality in Pruned Large Language Models},
  author={Vithursan Thangarasa and Ganesh Venkatesh and Mike Lasby and Nish Sinnadurai and Sean Lie},
  journal={arXiv preprint arXiv:2410.09982},
  year={2025}
}