Quantification of Large Language Model Distillation

22 January 2025
Sunbowen Lee
Junting Zhou
Chang Ao
Kaige Li
Xinrun Du
Sirui He
Haihong Wu
Tianci Liu
Jiaheng Liu
Hamid Alinejad-Rokny
Min Yang
Yitao Liang
Zhoufutu Wen
Shiwen Ni
Abstract

Model distillation is a fundamental technique for building large language models (LLMs), transferring knowledge from a teacher model to a student model. However, distillation can lead to model homogenization, reducing diversity among models and impairing their ability to robustly handle complex or novel tasks. These limitations underscore the need to systematically quantify the distillation process and its impact. In this work, we propose a framework to evaluate and quantify model distillation. Our method addresses two key aspects: (1) identifying identity cognition contradictions to assess discrepancies in how models perceive and represent identity-related information, and (2) analyzing multi-granularity response similarities across models to measure the extent of homogenization. Experimental results yield two key insights: (1) well-known closed-source and open-source LLMs usually exhibit high degrees of distillation, with the exceptions of Claude, Doubao, and Gemini; (2) base LLMs show higher degrees of distillation than aligned LLMs. By offering a systematic approach to improving the transparency of LLM data distillation, we call for more independent development of LLMs and more transparent technical reports to improve LLMs' robustness and safety. The code and data are available at this https URL.
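
The abstract names two measurable signals: identity-cognition contradictions and multi-granularity response similarity. The sketch below is a minimal, self-contained illustration of how such signals could be scored; the function names, the two granularities (whitespace-token Jaccard overlap and character-level sequence ratio), the equal weighting, the probe prompts, and the expected_developer parameter are all illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the paper's released code): scoring response
# similarity between a suspected student model and a reference model,
# plus a crude identity-cognition contradiction rate. Standard library only.

from difflib import SequenceMatcher


def token_jaccard(a: str, b: str) -> float:
    """Coarse granularity: overlap of whitespace-token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)


def char_similarity(a: str, b: str) -> float:
    """Finer granularity: normalized longest-matching-subsequence ratio."""
    return SequenceMatcher(None, a, b).ratio()


def response_similarity(student_answers, teacher_answers) -> float:
    """Average multi-granularity similarity over paired responses to the
    same prompts; higher values suggest stronger homogenization."""
    scores = [
        0.5 * token_jaccard(s, t) + 0.5 * char_similarity(s, t)
        for s, t in zip(student_answers, teacher_answers)
    ]
    return sum(scores) / len(scores) if scores else 0.0


# Hypothetical identity probes; the paper's actual probe set is defined
# in its released code and may differ.
IDENTITY_PROBES = [
    "Who developed you?",
    "Which company trained you?",
]


def identity_contradiction(responses, expected_developer: str) -> float:
    """Fraction of identity-probe answers that fail to mention the model's
    stated developer -- a rough proxy for identity-cognition contradictions."""
    misses = [r for r in responses if expected_developer.lower() not in r.lower()]
    return len(misses) / len(responses)


if __name__ == "__main__":
    student = ["Paris is the capital of France.", "2 + 2 equals 4."]
    teacher = ["The capital of France is Paris.", "2 + 2 = 4."]
    print(f"response similarity: {response_similarity(student, teacher):.2f}")
    print(f"identity contradiction rate: "
          f"{identity_contradiction(['I was created by OpenAI.'], 'ExampleLab'):.2f}")

In practice, embedding-based or judge-model comparisons would likely replace these surface-level measures; the sketch only serves to make the two quantities described in the abstract concrete.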

@article{lee2025_2501.12619,
  title={Quantification of Large Language Model Distillation},
  author={Sunbowen Lee and Junting Zhou and Chang Ao and Kaige Li and Xinrun Du and Sirui He and Haihong Wu and Tianci Liu and Jiaheng Liu and Hamid Alinejad-Rokny and Min Yang and Yitao Liang and Zhoufutu Wen and Shiwen Ni},
  journal={arXiv preprint arXiv:2501.12619},
  year={2025}
}