ResearchTrend.AI

An Effective Training Framework for Light-Weight Automatic Speech Recognition Models (arXiv:2505.16991)
22 May 2025
Abdul Hannan, Alessio Brutti, Shah Nawaz, Mubashir Noman

Papers citing "An Effective Training Framework for Light-Weight Automatic Speech Recognition Models"

14 / 14 papers shown
Dynamic Data Pruning for Automatic Speech Recognition
Q. Xiao, Pingchuan Ma, Adriana Fernandez-Lopez, Boqian Wu, Lu Yin, Stavros Petridis, Mykola Pechenizkiy, Maja Pantic, Decebal Constantin Mocanu, Shiwei Liu
26 Jun 2024
Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery
Mubashir Noman, Muzammal Naseer, Hisham Cholakkal, Rao Muhammad Anwar, Salman Khan, Fahad Shahbaz Khan
08 Mar 2024
Fine-tuning Strategies for Faster Inference using Speech Self-Supervised Models: A Comparative Study
Salah Zaiem, Robin Algayres, Titouan Parcollet, S. Essid, Mirco Ravanelli
12 Mar 2023
Bottleneck Low-rank Transformers for Low-resource Spoken Language Understanding
Pu Wang, Hugo Van hamme
28 Jun 2022
Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
11 Nov 2021
Learned Token Pruning for Transformers
Sehoon Kim, Sheng Shen, D. Thorsley, A. Gholami, Woosuk Kwon, Joseph Hassoun, Kurt Keutzer
02 Jul 2021
Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation
Taehyeon Kim, Jaehoon Oh, Nakyil Kim, Sangwook Cho, Se-Young Yun
19 May 2021
Sparsification via Compressed Sensing for Automatic Speech Recognition
Kai Zhen, Hieu Duy Nguyen, Feng-Ju Chang, Athanasios Mouchtaris, Ariya Rastrow
09 Feb 2021
Lightweight and Efficient End-to-End Speech Recognition Using Low-Rank Transformer
Genta Indra Winata, Samuel Cahyawijaya, Zhaojiang Lin, Zihan Liu, Pascale Fung
30 Oct 2019
SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition
Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, E. D. Cubuk, Quoc V. Le
18 Apr 2019
Lessons from Building Acoustic Models with a Million Hours of Speech
S. Parthasarathi, N. Strom
02 Apr 2019
SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing
Taku Kudo, John Richardson
19 Aug 2018
TED-LIUM 3: twice as much data and corpus repartition for experiments on speaker adaptation
François Hernandez, Vincent Nguyen, Sahar Ghannay, N. Tomashenko, Yannick Esteve
12 May 2018
Distilling the Knowledge in a Neural Network
Geoffrey E. Hinton, Oriol Vinyals, J. Dean
09 Mar 2015