
RAD: Redundancy-Aware Distillation for Hybrid Models via Self-Speculative Decoding

Abstract

Hybrid models combining Transformers and State Space Models (SSMs) are promising for balancing performance and efficiency. However, optimizing these hybrid models, particularly by addressing the potential redundancy inherent within the Transformer components, remains a significant challenge. In this paper, we propose RAD (Redundancy-Aware Distillation), a novel framework that uses self-speculative decoding as a diagnostic tool to identify redundant attention layers within the model. These identified layers are then selectively replaced with SSM components, followed by targeted (self-)distillation. Specifically, RAD focuses knowledge transfer on the components identified as redundant, while accounting for the architectural changes and using tailored weight initialization strategies. We experimentally demonstrate that self-distillation using RAD significantly surpasses the performance of the original base model on mathematical and coding tasks. Furthermore, RAD is also effective in standard knowledge distillation settings, achieving up to approximately 2x faster convergence compared to baseline methods. Notably, while a baseline model distilled from a Llama-3.1 70B teacher achieves scores of 46.17 on GSM8K and 22.75 on CRUX, RAD achieves significantly higher scores of 71.27 on GSM8K and 28.25 on CRUX, even when using a much smaller Llama-3.1 8B teacher. RAD offers a new pathway for efficient optimization and performance enhancement in the distillation of hybrid models.
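
To make the layer-selection idea concrete, the following is a minimal sketch (not the authors' released code) of how redundant attention layers might be ranked: each layer is skipped in turn to form a draft model, and the fraction of greedy draft tokens that match the full model's greedy tokens serves as a proxy for the self-speculative acceptance rate the paper uses as a diagnostic. The ToyCausalLM, its layer layout, and the random calibration batch are illustrative assumptions, not the paper's actual architecture.

# Sketch: rank attention layers by a skip-layer acceptance proxy.
# Hypothetical toy model; only the scoring idea reflects the RAD diagnostic.

import torch
import torch.nn as nn


class ToyCausalLM(nn.Module):
    def __init__(self, vocab=100, dim=64, n_layers=6, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, n_heads, dim * 4, batch_first=True)
            for _ in range(n_layers)
        )
        self.head = nn.Linear(dim, vocab)

    def forward(self, ids, skip_layer=None):
        # Causal mask so the toy encoder stack behaves like a decoder-only LM.
        mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        h = self.embed(ids)
        for i, layer in enumerate(self.layers):
            if i == skip_layer:
                continue  # drafting pass: bypass the candidate-redundant layer
            h = layer(h, src_mask=mask)
        return self.head(h)


@torch.no_grad()
def layer_redundancy_scores(model, ids):
    """Fraction of positions where the skip-layer draft's greedy token matches
    the full model's greedy token. Higher = layer contributes less."""
    full_pred = model(ids).argmax(-1)
    scores = {}
    for i in range(len(model.layers)):
        draft_pred = model(ids, skip_layer=i).argmax(-1)
        scores[i] = (draft_pred == full_pred).float().mean().item()
    return scores


if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyCausalLM().eval()
    ids = torch.randint(0, 100, (2, 32))  # stand-in for a calibration batch
    for i, s in sorted(layer_redundancy_scores(model, ids).items(),
                       key=lambda kv: -kv[1]):
        print(f"layer {i}: acceptance proxy = {s:.3f}")

Layers with the highest acceptance proxy change the output least when skipped, so under this reading they would be the natural candidates for SSM replacement followed by targeted distillation.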

@article{hoshino2025_2505.22135,
  title={RAD: Redundancy-Aware Distillation for Hybrid Models via Self-Speculative Decoding},
  author={Yuichiro Hoshino and Hideyuki Tachibana and Muneyoshi Inahara and Hiroto Takegawa},
  journal={arXiv preprint arXiv:2505.22135},
  year={2025}
}