Keep Decoding Parallel with Effective Knowledge Distillation from Language Models to End-to-end Speech Recognisers
22 January 2024
Michael Hentschel, Yuta Nishikawa, Tatsuya Komatsu, Yusuke Fujita
Papers citing "Keep Decoding Parallel with Effective Knowledge Distillation from Language Models to End-to-end Speech Recognisers" (5 papers)
CR-CTC: Consistency regularization on CTC for improved speech recognition
Zengwei Yao, Wei Kang, Xiaoyu Yang, Fangjun Kuang, Liyong Guo, Han Zhu, Zengrui Jin, Zhaoqing Li, Long Lin, Daniel Povey
17 Feb 2025
BERT Meets CTC: New Formulation of End-to-End Speech Recognition with Pre-trained Masked Language Model
Yosuke Higuchi, Brian Yan, Siddhant Arora, Tetsuji Ogawa, Tetsunori Kobayashi, Shinji Watanabe
29 Oct 2022
Intermediate Loss Regularization for CTC-based Speech Recognition
Jaesong Lee, Shinji Watanabe
05 Feb 2021
Internal Language Model Training for Domain-Adaptive End-to-End Speech Recognition
Zhong Meng, Naoyuki Kanda, Yashesh Gaur, S. Parthasarathy, Eric Sun, Liang Lu, Xie Chen, Jinyu Li, Jiawei Liu
02 Feb 2021
NeMo: a toolkit for building AI applications using Neural Modules
Oleksii Kuchaiev, Jason Chun Lok Li, Huyen Nguyen, Oleksii Hrinchuk, Ryan Leary, ..., Jack Cook, P. Castonguay, Mariya Popova, Jocelyn Huang, Jonathan M. Cohen
14 Sep 2019