ResearchTrend.AI

Distilling a speech and music encoder with task arithmetic

19 May 2025
Fabian Ritter-Gutierrez
Yi-Cheng Lin
Jui-Chiang Wei
Jeremy H.M. Wong
Eng Siong Chng
Nancy F. Chen
Hung-yi Lee
Abstract

Despite the progress in self-supervised learning (SSL) for speech and music, existing models treat these domains separately, limiting their capacity for unified audio understanding. A unified model is desirable for applications that require general representations, e.g., audio large language models. Nonetheless, directly training a general model for speech and music is computationally expensive. Knowledge distillation of teacher ensembles may be a natural solution, but we posit that decoupling the distillation of the speech and music SSL models allows for more flexibility. Thus, we propose to learn distilled task vectors and then linearly interpolate them to form a unified speech+music model. This strategy enables flexible domain emphasis through adjustable weights and is also simpler to train. Experiments on speech and music benchmarks demonstrate that our method yields superior overall performance compared to ensemble distillation.
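The merging strategy the abstract describes — learn one distilled task vector per domain, then linearly interpolate them onto a shared initialization — can be sketched as follows. This is a minimal illustration with toy parameter dictionaries standing in for real encoder weights; the function and variable names are our own, not from the paper.

```python
def task_vector(base, finetuned):
    """Task vector = domain-distilled weights minus the shared base weights."""
    return {k: finetuned[k] - base[k] for k in base}

def merge(base, vectors, weights):
    """Add a weighted sum of task vectors back onto the base weights."""
    merged = dict(base)
    for vec, w in zip(vectors, weights):
        for k in merged:
            merged[k] += w * vec[k]
    return merged

# Toy scalars standing in for parameter tensors (hypothetical values).
base = {"w": 1.0}
speech = {"w": 3.0}   # student distilled from a speech SSL teacher
music = {"w": 0.0}    # student distilled from a music SSL teacher

tv_speech = task_vector(base, speech)  # {"w": 2.0}
tv_music = task_vector(base, music)    # {"w": -1.0}

# Equal emphasis on both domains; the weights are the adjustable knob
# the abstract mentions for shifting emphasis toward speech or music.
unified = merge(base, [tv_speech, tv_music], [0.5, 0.5])
# unified["w"] = 1.0 + 0.5*2.0 + 0.5*(-1.0) = 1.5
```

Because the two students are distilled independently, the interpolation weights can be retuned after training, which is what makes this cheaper and more flexible than distilling from a joint teacher ensemble.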

View on arXiv
@article{ritter-gutierrez2025_2505.13270,
  title={Distilling a speech and music encoder with task arithmetic},
  author={Fabian Ritter-Gutierrez and Yi-Cheng Lin and Jui-Chiang Wei and Jeremy H.M. Wong and Eng Siong Chng and Nancy F. Chen and Hung-yi Lee},
  journal={arXiv preprint arXiv:2505.13270},
  year={2025}
}