TODM: Train Once Deploy Many Efficient Supernet-Based RNN-T Compression For On-device ASR Models (arXiv:2309.01947)

5 September 2023
Yuan Shangguan, Haichuan Yang, Danni Li, Chunyang Wu, Yassir Fathullah, Dilin Wang, Ayushi Dalmia, Raghuraman Krishnamoorthi, Ozlem Kalinli, J. Jia, Jay Mahadeokar, Xin Lei, Michael Seltzer, Vikas Chandra

Papers citing "TODM: Train Once Deploy Many Efficient Supernet-Based RNN-T Compression For On-device ASR Models"

3 / 3 papers shown

Dynamic Encoder Size Based on Data-Driven Layer-wise Pruning for Speech Recognition
Jingjing Xu, Wei Zhou, Zijian Yang, Eugen Beck, Ralf Schlueter
10 Jul 2024

Omni-sparsity DNN: Fast Sparsity Optimization for On-Device Streaming E2E ASR via Supernet
Haichuan Yang, Yuan Shangguan, Dilin Wang, Meng Li, P. Chuang, Xiaohui Zhang, Ganesh Venkatesh, Ozlem Kalinli, Vikas Chandra
15 Oct 2021

AlphaNet: Improved Training of Supernets with Alpha-Divergence
Dilin Wang, Chengyue Gong, Meng Li, Qiang Liu, Vikas Chandra
16 Feb 2021