
Distilling Knowledge from Ensembles of Acoustic Models for Joint CTC-Attention End-to-End Speech Recognition

19 May 2020 · Yan Gao, Titouan Parcollet, Nicholas D. Lane
arXiv: 2005.09310
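As a rough illustration of the technique named in the title, below is a minimal PyTorch sketch of distilling an ensemble of acoustic-model teachers into a single joint CTC-attention student: the teachers' softened posteriors are averaged into one distillation target, and the resulting KD loss is mixed with the usual CTC and attention cross-entropy terms. All tensor shapes, the weights lambda_ctc and alpha_kd, and the temperature are illustrative assumptions, not the paper's reported recipe.

import torch
import torch.nn.functional as F

def ensemble_kd_joint_loss(
    student_att_logits,    # (B, L, V): student attention-decoder logits
    teacher_att_logits,    # list of (B, L, V): one logits tensor per teacher
    student_ctc_logits,    # (T, B, V): student encoder logits for CTC
    targets,               # (B, L): gold token ids
    input_lengths,         # (B,): encoder frame lengths
    target_lengths,        # (B,): target sequence lengths
    lambda_ctc=0.3,        # CTC/attention interpolation weight (assumed)
    alpha_kd=0.5,          # mix of distilled vs. hard-label loss (assumed)
    temperature=2.0,       # softening temperature (assumed)
):
    # Average the teachers' softened posteriors into one ensemble target.
    with torch.no_grad():
        ensemble_probs = torch.stack(
            [F.softmax(t / temperature, dim=-1) for t in teacher_att_logits]
        ).mean(dim=0)

    # Distillation term: KL divergence from the ensemble posterior to the
    # student's softened posterior, scaled by T^2 as is standard for KD.
    log_p_student = F.log_softmax(student_att_logits / temperature, dim=-1)
    kd = F.kl_div(log_p_student, ensemble_probs,
                  reduction="batchmean") * temperature ** 2

    # Hard-label attention (cross-entropy) and CTC terms of joint training.
    ce = F.cross_entropy(student_att_logits.transpose(1, 2), targets)
    ctc = F.ctc_loss(F.log_softmax(student_ctc_logits, dim=-1),
                     targets, input_lengths, target_lengths)

    att_loss = alpha_kd * kd + (1.0 - alpha_kd) * ce
    return lambda_ctc * ctc + (1.0 - lambda_ctc) * att_loss

In joint CTC-attention training the interpolation weight lambda_ctc is commonly set around 0.3; alpha_kd and the temperature would need tuning under the assumptions above.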

Papers citing "Distilling Knowledge from Ensembles of Acoustic Models for Joint CTC-Attention End-to-End Speech Recognition"

7 citing papers:
Noisy Node Classification by Bi-level Optimization based Multi-teacher Distillation
27 Apr 2024 · Yujing Liu, Zongqian Wu, Zhengyu Lu, Ci Nie, Guoqiu Wen, Ping Hu, Xiaofeng Zhu

Learning When to Trust Which Teacher for Weakly Supervised ASR (Interspeech, 2023)
21 Jun 2023 · Aakriti Agrawal, Milind Rao, Anit Kumar Sahu, Gopinath Chennupati, A. Stolcke

Towards domain generalisation in ASR with elitist sampling and ensemble knowledge distillation (ICASSP, 2023)
01 Mar 2023 · Rehan Ahmad, Md. Asif Jalal, Muhammad Umar Farooq, A. Ollerenshaw, Thomas Hain

Ensemble knowledge distillation of self-supervised speech models (ICASSP, 2023)
24 Feb 2023 · Kuan-Po Huang, Tzu-hsun Feng, Yu-Kuan Fu, Tsung-Yuan Hsu, Po-Chieh Yen, Wei-Cheng Tseng, Kai-Wei Chang, Hung-yi Lee

An Experimental Study on Private Aggregation of Teacher Ensemble Learning for End-to-End Speech Recognition (SLT, 2022)
11 Oct 2022 · Chao-Han Huck Yang, I-Fan Chen, A. Stolcke, Sabato Marco Siniscalchi, Chin-Hui Lee

Shrinking Bigfoot: Reducing wav2vec 2.0 footprint
29 Mar 2021 · Zilun Peng, Akshay Budhkar, Ilana Tuil, J. Levy, Parinaz Sobhani, Raphael Cohen, J. Nassour

Knowledge Distillation in Iterative Generative Models for Improved Sampling Speed
07 Jan 2021 · Eric Luhman, Troy Luhman