
Improving Streaming Transformer Based ASR Under a Framework of Self-supervised Learning (arXiv:2109.07327)

15 September 2021
Songjun Cao, Yueteng Kang, Yanzhe Fu, Xiaoshuo Xu, Sining Sun, Yike Zhang, Long Ma

Papers citing "Improving Streaming Transformer Based ASR Under a Framework of Self-supervised Learning" (13 of 13 papers shown)

1. Self-training and Pre-training are Complementary for Speech Recognition
   Qiantong Xu, Alexei Baevski, Tatiana Likhomanenko, Paden Tomasello, Alexis Conneau, R. Collobert, Gabriel Synnaeve, Michael Auli
   SSL, VLM | 119 / 171 / 0 | 22 Oct 2020

2. Developing Real-time Streaming Transformer Transducer for Speech Recognition on Large-scale Dataset
   Xie Chen, Yu-Huan Wu, Zhenghao Wang, Shujie Liu, Jinyu Li
   106 / 174 / 0 | 22 Oct 2020

3. Emformer: Efficient Memory Transformer Based Acoustic Model For Low Latency Streaming Speech Recognition
   Yangyang Shi, Yongqiang Wang, Chunyang Wu, Ching-Feng Yeh, Julian Chan, Frank Zhang, Duc Le, M. Seltzer
   102 / 171 / 0 | 21 Oct 2020

4. Pushing the Limits of Semi-Supervised Learning for Automatic Speech Recognition
   Yu Zhang, James Qin, Daniel S. Park, Wei Han, Chung-Cheng Chiu, Ruoming Pang, Quoc V. Le, Yonghui Wu
   VLM, SSL | 179 / 309 / 0 | 20 Oct 2020

5. wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations
   Alexei Baevski, Henry Zhou, Abdel-rahman Mohamed, Michael Auli
   SSL | 245 / 5,774 / 0 | 20 Jun 2020

6. Self-Supervised Learning of Pretext-Invariant Representations
   Ishan Misra, Laurens van der Maaten
   SSL, VLM | 95 / 1,453 / 0 | 04 Dec 2019

7. End-to-end ASR: from Supervised to Semi-Supervised Learning with Modern Architectures
   Gabriel Synnaeve, Qiantong Xu, Jacob Kahn, Tatiana Likhomanenko, Edouard Grave, Vineel Pratap, Anuroop Sriram, Vitaliy Liptchinsky, R. Collobert
   SSL, AI4TS | 107 / 247 / 0 | 19 Nov 2019

8. TinyBERT: Distilling BERT for Natural Language Understanding
   Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, F. Wang, Qun Liu
   VLM | 94 / 1,857 / 0 | 23 Sep 2019

9. Self-Training for End-to-End Speech Recognition
   Jacob Kahn, Ann Lee, Awni Y. Hannun
   SSL | 58 / 235 / 0 | 19 Sep 2019

10. Guiding CTC Posterior Spike Timings for Improved Posterior Fusion and Knowledge Distillation
    Gakuto Kurata, Kartik Audhkhasi
    37 / 48 / 0 | 17 Apr 2019

11. Self-Attention Aligner: A Latency-Control End-to-End Model for ASR Using Self-Attention Network and Chunk-Hopping
    Linhao Dong, Feng Wang, Bo Xu
    47 / 91 / 0 | 18 Feb 2019

12. Deep contextualized word representations
    Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer
    NAI | 200 / 11,542 / 0 | 15 Feb 2018

13. FitNets: Hints for Thin Deep Nets
    Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, C. Gatta, Yoshua Bengio
    FedML | 290 / 3,882 / 0 | 19 Dec 2014