Better Semi-supervised Learning for Multi-domain ASR Through Incremental Retraining and Data Filtering

5 June 2025
Andres Carofilis
Pradeep Rangappa
S. Madikeri
Shashi Kumar
Sergio Burdisso
Jeena Prakash
Esaú Villatoro-Tello
P. Motlíček
Bidisha Sharma
Kadri Hacioğlu
Shankar Venkatesan
Saurabh Vyas
Andreas Stolcke
Abstract

Fine-tuning pretrained ASR models for specific domains is challenging when labeled data is scarce; however, unlabeled audio and labeled data from related domains are often available. We propose an incremental semi-supervised learning pipeline that first integrates a small in-domain labeled set and an auxiliary dataset from a closely related domain, achieving a 4% relative improvement over using no auxiliary data. Filtering based on multi-model consensus or named entity recognition (NER) is then applied to select and iteratively refine pseudo-labels, and it shows slower performance saturation than random selection. Evaluated on the multi-domain Wow call center and Fisher English corpora, the pipeline outperforms single-step fine-tuning. Consensus-based filtering performs best, providing up to 22.3% relative improvement on Wow and 24.8% on Fisher over single-step fine-tuning with random selection; NER is the second-best filter, offering competitive performance at lower computational cost.
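
As a concrete illustration of the consensus-based filter, the sketch below keeps an unlabeled utterance only when several models' transcripts agree closely, and uses one of those transcripts as the pseudo-label. This is a minimal sketch under assumptions, not the authors' implementation: the models are assumed to be callables mapping an audio path to a transcript, pairwise word error rate (computed with the third-party jiwer package) is assumed as the agreement measure, and the max_pairwise_wer threshold is an arbitrary illustrative value.

    from itertools import combinations

    import jiwer  # third-party WER implementation (pip install jiwer)


    def transcribe_with_models(audio_path, models):
        """Return one hypothesis per model for a single utterance.

        `models` is a hypothetical stand-in for the ensemble of ASR
        checkpoints; each element is assumed to be a callable that
        maps an audio path to a transcript string.
        """
        return [model(audio_path) for model in models]


    def consensus_filter(unlabeled_audio, models, max_pairwise_wer=0.1):
        """Select pseudo-labeled utterances on which the models agree.

        An utterance is kept only if every pair of hypotheses is
        within `max_pairwise_wer` of each other; the first model's
        hypothesis then serves as the pseudo-label. The threshold is
        an assumed hyperparameter, not a value taken from the paper.
        """
        selected = []
        for audio_path in unlabeled_audio:
            hyps = transcribe_with_models(audio_path, models)
            if not all(h.strip() for h in hyps):
                continue  # jiwer.wer rejects empty references
            agree = all(
                jiwer.wer(ref, hyp) <= max_pairwise_wer
                for ref, hyp in combinations(hyps, 2)
            )
            if agree:
                selected.append((audio_path, hyps[0]))
        return selected

In the incremental scheme the abstract describes, each retraining round would fine-tune on the utterances this filter selects, together with the labeled in-domain and auxiliary data, and then re-run the filter with the updated models; tightening or loosening the agreement threshold trades pseudo-label quality against the amount of data retained.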

View on arXiv: 2506.04981
Main: 4 pages · 2 figures · 2 tables · Bibliography: 1 page