
Dynamic Context-Aware Streaming Pretrained Language Model For Inverse Text Normalization

Main: 4 pages · 1 figure · 5 tables · Bibliography: 1 page
Abstract

Inverse Text Normalization (ITN) is crucial for converting spoken-form Automatic Speech Recognition (ASR) outputs into well-formatted written text, enhancing both readability and usability. Despite its importance, the integration of streaming ITN within streaming ASR remains largely unexplored due to challenges in accuracy, efficiency, and adaptability, particularly in low-resource and limited-context scenarios. In this paper, we introduce a streaming pretrained language model for ITN, leveraging pretrained linguistic representations for improved robustness. To address streaming constraints, we propose a Dynamic Context-Aware mechanism for training and inference, enabling adaptive chunk-size adjustment and the integration of right-context information. Experimental results demonstrate that our method achieves accuracy comparable to non-streaming ITN and surpasses existing streaming ITN models on a Vietnamese dataset, all while maintaining low latency, ensuring seamless integration into ASR systems.
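To make the streaming setup concrete, the sketch below shows chunked inference with a right-context look-ahead and a simple backlog-based chunk-size heuristic. It is a minimal, hypothetical illustration: the names (stream_itn, normalize_chunk) and the adaptation rule are assumptions of this sketch, not the paper's actual algorithm, and the ITN model itself is abstracted behind a callable.

from typing import Callable, Iterator, List

def stream_itn(
    tokens: Iterator[str],
    normalize_chunk: Callable[[List[str], List[str]], List[str]],
    min_chunk: int = 8,
    max_chunk: int = 32,
    right_context: int = 4,
) -> Iterator[str]:
    # Buffer incoming spoken-form tokens and emit written-form output
    # one chunk at a time, always peeking `right_context` future tokens
    # so the model sees what follows the chunk boundary.
    buffer: List[str] = []
    chunk_size = min_chunk
    for tok in tokens:
        buffer.append(tok)
        while len(buffer) >= chunk_size + right_context:
            chunk = buffer[:chunk_size]
            future = buffer[chunk_size:chunk_size + right_context]
            yield from normalize_chunk(chunk, future)
            buffer = buffer[chunk_size:]
            # Illustrative heuristic only: grow the chunk when tokens
            # back up (less per-call overhead), shrink it when the
            # stream is slow (lower latency).
            if len(buffer) > chunk_size:
                chunk_size = min(max_chunk, chunk_size * 2)
            else:
                chunk_size = max(min_chunk, chunk_size // 2)
    if buffer:
        # Flush the tail once the stream ends; no right context remains.
        yield from normalize_chunk(buffer, [])

In the paper's setting, normalize_chunk would wrap the streaming pretrained language model; it is left abstract here so the sketch stays self-contained.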

@article{ho2025_2505.24229,
  title={Dynamic Context-Aware Streaming Pretrained Language Model For Inverse Text Normalization},
  author={Luong Ho and Khanh Le and Vinh Pham and Bao Nguyen and Tan Tran and Duc Chau},
  journal={arXiv preprint arXiv:2505.24229},
  year={2025}
}