Despite the progress in self-supervised learning (SSL) for speech and music, existing models treat these domains separately, limiting their capacity for unified audio understanding. A unified model is desirable for applications that require general representations, e.g., audio large language models. Nonetheless, directly training a general model for speech and music is computationally expensive. Knowledge distillation of teacher ensembles may be a natural solution, but we posit that decoupling the distillation of the speech and music SSL models allows for more flexibility. Thus, we propose to learn distilled task vectors and then linearly interpolate them to form a unified speech+music model. This strategy enables flexible domain emphasis through adjustable weights and is also simpler to train. Experiments on speech and music benchmarks demonstrate that our method yields superior overall performance compared to ensemble distillation.
@article{ritter-gutierrez2025_2505.13270,
  title   = {Distilling a speech and music encoder with task arithmetic},
  author  = {Fabian Ritter-Gutierrez and Yi-Cheng Lin and Jui-Chiang Wei and Jeremy H. M. Wong and Eng Siong Chng and Nancy F. Chen and Hung-yi Lee},
  journal = {arXiv preprint arXiv:2505.13270},
  year    = {2025}
}