
Effective Context in Neural Speech Models

Abstract

Modern neural speech models benefit from having longer context, and many approaches have been proposed to increase the maximum context a model can use. However, few have attempted to measure how much context these models actually use, i.e., the effective context. Here, we propose two approaches to measuring the effective context, and use them to analyze different speech Transformers. For supervised models, we find that the effective context correlates well with the nature of the task, with fundamental frequency tracking, phone classification, and word classification requiring increasing amounts of effective context. For self-supervised models, we find that effective context increases mainly in the early layers, and remains relatively short -- similar to the supervised phone model. Given that these models do not use a long context during prediction, we show that HuBERT can be run in streaming mode without modification to the architecture and without further fine-tuning.
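To make the streaming claim concrete, the following is a minimal, hypothetical sketch (not the authors' code): it runs a pretrained HuBERT over short independent chunks, so each frame only sees the context inside its own chunk, and compares the resulting features against full-context features. The model choice, chunk length, and file name are illustrative assumptions.

    # Hypothetical sketch: chunked ("streaming") HuBERT inference vs. full context.
    import torch
    import torchaudio

    bundle = torchaudio.pipelines.HUBERT_BASE        # pretrained 16 kHz HuBERT (assumption)
    model = bundle.get_model().eval()

    waveform, sr = torchaudio.load("utterance.wav")  # hypothetical mono utterance
    waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

    chunk_samples = int(2.0 * bundle.sample_rate)    # 2-second chunks (illustrative choice)

    with torch.inference_mode():
        # Full-context features from the final Transformer layer.
        full = model.extract_features(waveform)[0][-1].squeeze(0)      # (frames, dim)

        # "Streaming": each chunk is encoded with only its own context.
        chunks = [c for c in waveform.split(chunk_samples, dim=1) if c.size(1) >= 400]
        streamed = torch.cat(
            [model.extract_features(c)[0][-1].squeeze(0) for c in chunks], dim=0)

    # Frame counts can differ slightly at chunk boundaries; compare the overlap.
    n = min(full.size(0), streamed.size(0))
    sim = torch.nn.functional.cosine_similarity(full[:n], streamed[:n], dim=-1)
    print(f"mean cosine similarity (full vs. chunked): {sim.mean():.3f}")

If the model's effective context is short, the chunked features should stay close to the full-context ones except near chunk boundaries; this is only an illustration of the idea, not the paper's evaluation protocol.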

@article{meng2025_2505.22487,
  title={Effective Context in Neural Speech Models},
  author={Yen Meng and Sharon Goldwater and Hao Tang},
  journal={arXiv preprint arXiv:2505.22487},
  year={2025}
}