Dynamic Acoustic Model Architecture Optimization in Training for ASR

16 June 2025
Jingjing Xu
Zijian Yang
Albert Zeyer
Eugen Beck
Ralf Schlüter
Hermann Ney
Main: 4 pages · 4 figures · 5 tables · Bibliography: 1 page
Abstract

Architecture design is inherently complex. Existing approaches rely either on handcrafted rules, which demand extensive empirical expertise, or on automated methods such as neural architecture search, which are computationally intensive. In this paper, we introduce DMAO, an architecture optimization framework that employs a grow-and-drop strategy to automatically reallocate parameters during training. This reallocation shifts resources from less-utilized areas to those parts of the model where they are most beneficial. Notably, DMAO introduces only negligible training overhead at a given model complexity. We evaluate DMAO through CTC-based experiments on the LibriSpeech, TED-LIUM-v2, and Switchboard datasets. The results show that, using the same amount of training resources, our proposed DMAO consistently improves WER by up to 6% relative across various architectures, model sizes, and datasets. Furthermore, we analyze the resulting parameter-redistribution patterns and uncover insightful findings.
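The abstract describes the grow-and-drop idea only at a high level, so the following is a minimal illustrative sketch of one reallocation step, not the paper's actual algorithm: the utility proxy (per-unit L1 weight norm), the granularity (hidden units of an MLP block), and the one-shot shrink-then-grow schedule are all assumptions made here for concreteness, and helper names such as `unit_utility` and `resize_block` are hypothetical.

```python
# Illustrative grow-and-drop reallocation step in the spirit of DMAO.
# ASSUMPTIONS (not from the paper): utility = per-unit L1 weight norm,
# reallocation granularity = MLP hidden units, PyTorch implementation.
import torch
import torch.nn as nn


def unit_utility(layer: nn.Linear) -> torch.Tensor:
    """Proxy utility of each output unit: L1 norm of its incoming weights."""
    return layer.weight.abs().sum(dim=1)


def resize_block(in_dim, old: nn.Linear, nxt: nn.Linear, keep: torch.Tensor, grow: int):
    """Rebuild a hidden block, keeping the units indexed by `keep` and adding `grow` new ones."""
    new_hidden = keep.numel() + grow
    new_in = nn.Linear(in_dim, new_hidden)
    new_out = nn.Linear(new_hidden, nxt.out_features)
    with torch.no_grad():
        # Copy surviving units; freshly grown units keep their random init.
        new_in.weight[: keep.numel()] = old.weight[keep]
        new_in.bias[: keep.numel()] = old.bias[keep]
        new_out.weight[:, : keep.numel()] = nxt.weight[:, keep]
        new_out.bias.copy_(nxt.bias)
    return new_in, new_out


torch.manual_seed(0)
d, h1, h2, c, k = 16, 32, 32, 10, 4  # dims and number of units to move
l1, l2, l3 = nn.Linear(d, h1), nn.Linear(h1, h2), nn.Linear(h2, c)

# (After some training steps...) compare the two hidden blocks by mean utility,
# drop the k least useful units of the weaker block, grow k units in the other.
u1, u2 = unit_utility(l1), unit_utility(l2)
if u1.mean() < u2.mean():
    keep1 = u1.argsort(descending=True)[: h1 - k].sort().values
    l1, l2 = resize_block(d, l1, l2, keep1, grow=0)
    l2, l3 = resize_block(h1 - k, l2, l3, torch.arange(h2), grow=k)
else:
    keep2 = u2.argsort(descending=True)[: h2 - k].sort().values
    l2, l3 = resize_block(h1, l2, l3, keep2, grow=0)
    l1, l2 = resize_block(d, l1, l2, torch.arange(h1), grow=k)

x = torch.randn(2, d)
y = l3(torch.relu(l2(torch.relu(l1(x)))))
print(y.shape)  # total hidden-unit count unchanged: parameters were reallocated
```

Freshly grown units keep their random initialization so subsequent training can adapt them, and holding the total hidden-unit count fixed mirrors the abstract's claim of negligible overhead at a given model complexity; the paper's actual utility criterion and reallocation schedule may differ.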

@article{xu2025_2506.13180,
  title={Dynamic Acoustic Model Architecture Optimization in Training for ASR},
  author={Jingjing Xu and Zijian Yang and Albert Zeyer and Eugen Beck and Ralf Schl\"uter and Hermann Ney},
  journal={arXiv preprint arXiv:2506.13180},
  year={2025}
}