Listen and Fill in the Missing Letters: Non-Autoregressive Transformer for Speech Recognition

10 November 2019 · arXiv: 1911.04908
Nanxin Chen, Shinji Watanabe, Jesús Villalba, Najim Dehak
Papers citing "Listen and Fill in the Missing Letters: Non-Autoregressive Transformer for Speech Recognition" (5 papers shown):
LegoNN: Building Modular Encoder-Decoder Models
Siddharth Dalmia, Dmytro Okhonko, M. Lewis, Sergey Edunov, Shinji Watanabe, Florian Metze, Luke Zettlemoyer, Abdel-rahman Mohamed
Communities: AuLLM, MoE
07 Jun 2022

FastLR: Non-Autoregressive Lipreading Model with Integrate-and-Fire
Jinglin Liu, Yi Ren, Zhou Zhao, Chen Zhang, Baoxing Huai, Jing Yuan
06 Aug 2020

Mask CTC: Non-Autoregressive End-to-End ASR with CTC and Mask Predict
Yosuke Higuchi, Shinji Watanabe, Nanxin Chen, Tetsuji Ogawa, Tetsunori Kobayashi
18 May 2020

Spike-Triggered Non-Autoregressive Transformer for End-to-End Speech Recognition
Zhengkun Tian, Jiangyan Yi, J. Tao, Ye Bai, Shuai Zhang, Zhengqi Wen
16 May 2020

Imputer: Sequence Modelling via Imputation and Dynamic Programming
William Chan, Chitwan Saharia, Geoffrey E. Hinton, Mohammad Norouzi, Navdeep Jaitly
Communities: BDL, AI4TS
20 Feb 2020