Differentiable K-means for Fully-optimized Discrete Token-based ASR

Abstract

Recent studies have highlighted the potential of discrete tokens derived from self-supervised learning (SSL) models for various speech-related tasks. These tokens serve not only as substitutes for text in language modeling but also as intermediate representations for tasks such as automatic speech recognition (ASR). However, discrete tokens are typically obtained via k-means clustering of SSL features independently of downstream tasks, making them suboptimal for specific applications. This paper proposes the use of differentiable k-means, enabling the joint optimization of tokenization and downstream tasks. This approach also enables fine-tuning of the SSL parameters and learning of weights for outputs from multiple SSL layers. Experiments were conducted with ASR as a downstream task. ASR accuracy improved owing to the optimized tokens. The acquired tokens also exhibited greater purity of phonetic information, which proved useful even in speech resynthesis.
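A minimal PyTorch sketch of one way such a differentiable quantizer could be wired up. The class names, the straight-through soft-assignment trick, the temperature hyperparameter, and the learnable layer-weighting module are illustrative assumptions based on the abstract, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


class LayerWeightedSum(torch.nn.Module):
    """Learnable convex combination of hidden states from multiple SSL layers
    (assumed component; the abstract mentions learning weights over SSL layers)."""

    def __init__(self, num_layers: int):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(num_layers))

    def forward(self, layer_feats: torch.Tensor) -> torch.Tensor:
        # layer_feats: (num_layers, batch, time, feat_dim)
        w = F.softmax(self.logits, dim=0).view(-1, 1, 1, 1)
        return (w * layer_feats).sum(dim=0)


class DifferentiableKMeans(torch.nn.Module):
    """Soft k-means quantizer sketch: assigns each frame to K learnable
    centroids so that gradients can flow back into the SSL encoder."""

    def __init__(self, num_clusters: int, feat_dim: int, temperature: float = 1.0):
        super().__init__()
        self.centroids = torch.nn.Parameter(torch.randn(num_clusters, feat_dim))
        self.temperature = temperature

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim)
        # Squared Euclidean distance from each frame to each centroid: (batch, time, K)
        dists = (feats.unsqueeze(-2) - self.centroids).pow(2).sum(dim=-1)
        soft_assign = F.softmax(-dists / self.temperature, dim=-1)
        # Straight-through estimator: hard one-hot tokens in the forward pass,
        # soft-assignment gradients in the backward pass.
        hard_assign = F.one_hot(soft_assign.argmax(-1), soft_assign.size(-1)).type_as(soft_assign)
        assign = hard_assign + soft_assign - soft_assign.detach()
        # Quantized features (selected centroids); token ids are assign.argmax(-1)
        return assign @ self.centroids


# Usage sketch: combine SSL layer outputs, quantize, then feed the result
# to a downstream ASR model so the whole pipeline is trained end to end.
layer_feats = torch.randn(12, 4, 100, 768)        # (layers, batch, time, dim), dummy values
quantized = DifferentiableKMeans(500, 768)(LayerWeightedSum(12)(layer_feats))
```

Because the straight-through assignment keeps the forward pass discrete, the downstream ASR loss can still be backpropagated through the centroids, the layer weights, and the SSL encoder itself, which is the joint optimization the abstract describes.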

@article{onda2025_2505.16207,
  title={Differentiable K-means for Fully-optimized Discrete Token-based ASR},
  author={Kentaro Onda and Yosuke Kashiwagi and Emiru Tsunoo and Hayato Futami and Shinji Watanabe},
  journal={arXiv preprint arXiv:2505.16207},
  year={2025}
}