
daDPO: Distribution-Aware DPO for Distilling Conversational Abilities

Main: 8 pages, Appendix: 6 pages, Bibliography: 3 pages; 6 figures, 11 tables
Abstract

Large language models (LLMs) have demonstrated exceptional performance across various applications, but their conversational abilities decline sharply as model size decreases, presenting a barrier to their deployment in resource-constrained environments. Knowledge distillation with Direct Preference Optimization (dDPO) has emerged as a promising approach to enhancing the conversational abilities of smaller models using a larger teacher model. However, current methods primarily focus on 'black-box' knowledge distillation, which uses only the teacher's responses and overlooks the richer output distribution the teacher can provide. This paper addresses this gap by introducing daDPO (Distribution-Aware DPO), a unified method for preference optimization and distribution-based distillation. We provide rigorous theoretical analysis and empirical validation, showing that daDPO outperforms existing methods both in restoring the performance of pruned models and in enhancing smaller LLMs. Notably, in in-domain evaluation, our method enables a 20% pruned Vicuna1.5-7B to achieve near-teacher performance (-7.3% preference rate vs. -31% for dDPO), and allows Qwen2.5-1.5B to occasionally outperform its 7B teacher model (14.0% win rate).
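
The abstract describes daDPO as unifying preference optimization with distribution-based ('white-box') distillation. The snippet below is a minimal, hypothetical sketch of what such a combined objective could look like: the standard DPO loss plus a token-level KL term pulling the student's distribution toward the teacher's on the chosen response. It is not the paper's exact formulation; the tensor names, the forward-KL choice, and the weighting coefficient `lam` are all assumptions for illustration.

```python
# Hypothetical sketch of a distribution-aware DPO loss (NOT the paper's exact
# objective): standard DPO plus a token-level KL term toward the teacher's
# output distribution on the chosen response.
import torch
import torch.nn.functional as F


def da_dpo_loss(
    student_chosen_logps,    # (B,) summed student log-probs on chosen responses
    student_rejected_logps,  # (B,) summed student log-probs on rejected responses
    ref_chosen_logps,        # (B,) same quantities under the frozen reference model
    ref_rejected_logps,      # (B,)
    student_chosen_logits,   # (B, T, V) per-token student logits on chosen responses
    teacher_chosen_logits,   # (B, T, V) per-token teacher logits on chosen responses
    chosen_mask,             # (B, T) 1 for response tokens, 0 for prompt/padding
    beta=0.1,                # DPO temperature
    lam=0.5,                 # weight of the distillation term (assumed)
):
    # Standard DPO term: log-sigmoid of the implicit reward margin.
    chosen_rewards = beta * (student_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (student_rejected_logps - ref_rejected_logps)
    dpo_term = -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

    # Distribution-aware term: forward KL(teacher || student) per token,
    # averaged over valid response tokens (the 'white-box' teacher signal).
    teacher_probs = F.softmax(teacher_chosen_logits, dim=-1)
    student_logprobs = F.log_softmax(student_chosen_logits, dim=-1)
    token_kl = (
        teacher_probs * (teacher_probs.clamp_min(1e-9).log() - student_logprobs)
    ).sum(-1)
    kd_term = (token_kl * chosen_mask).sum() / chosen_mask.sum().clamp_min(1.0)

    return dpo_term + lam * kd_term
```

The design choice illustrated here is that the preference signal (DPO) and the distributional signal (per-token KL to the teacher) are optimized jointly in a single loss rather than in separate stages; the paper's theoretical analysis presumably characterizes how these two components interact.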

@article{zhang2025_2506.15717,
  title={daDPO: Distribution-Aware DPO for Distilling Conversational Abilities},
  author={Zhengze Zhang and Shiqi Wang and Yiqun Shen and Simin Guo and Dahua Lin and Xiaoliang Wang and Nguyen Cam-Tu and Fei Tan},
  journal={arXiv preprint arXiv:2506.15717},
  year={2025}
}