arXiv:2110.09245 (Cited By)
Efficient Sequence Training of Attention Models using Approximative Recombination
18 October 2021
Nils-Philipp Wynands, Wilfried Michel, Jan Rosendahl, Ralf Schlüter, Hermann Ney
Papers citing "Efficient Sequence Training of Attention Models using Approximative Recombination" (8 papers):
- Investigating Methods to Improve Language Model Integration for Attention-based Encoder-Decoder ASR Models. Mohammad Zeineldeen, Aleksandr Glushko, Wilfried Michel, Albert Zeyer, Ralf Schlüter, Hermann Ney. 12 Apr 2021.
- On Minimum Word Error Rate Training of the Hybrid Autoregressive Transducer. Liang Lu, Zhong Meng, Naoyuki Kanda, Jinyu Li, Jiawei Liu. 23 Oct 2020.
- wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations. Alexei Baevski, Henry Zhou, Abdel-rahman Mohamed, Michael Auli. 20 Jun 2020.
- Early Stage LM Integration Using Local and Global Log-Linear Combination. Wilfried Michel, Ralf Schlüter, Hermann Ney. 20 May 2020.
- Minimum Bayes Risk Training of RNN-Transducer for End-to-End Speech Recognition. Chao Weng, Chengzhu Yu, Jia Cui, Chunlei Zhang, Dong Yu. 28 Nov 2019.
- SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition. Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, E. D. Cubuk, Quoc V. Le. 18 Apr 2019.
- Minimum Word Error Rate Training for Attention-based Sequence-to-Sequence Models. Rohit Prabhavalkar, Tara N. Sainath, Yonghui Wu, Patrick Nguyen, Zhifeng Chen, Chung-Cheng Chiu, Anjuli Kannan. 5 Dec 2017.
- On Using Monolingual Corpora in Neural Machine Translation. Çağlar Gülçehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loïc Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, Yoshua Bengio. 11 Mar 2015.