ResearchTrend.AI
SAML: Speaker Adaptive Mixture of LoRA Experts for End-to-End ASR


28 June 2024
Qiuming Zhao
Guangzhi Sun
Chao Zhang
Mingxing Xu
Thomas Fang Zheng
    MoE

Papers citing "SAML: Speaker Adaptive Mixture of LoRA Experts for End-to-End ASR"

4 papers shown
From Sparse to Soft Mixtures of Experts
J. Puigcerver
C. Riquelme
Basil Mustafa
N. Houlsby
MoE
02 Aug 2023
Adapter-Based Extension of Multi-Speaker Text-to-Speech Model for New Speakers
Cheng-Ping Hsieh
Subhankar Ghosh
Boris Ginsburg
01 Nov 2022
Scalable and Efficient MoE Training for Multitask Multilingual Models
Young Jin Kim
A. A. Awan
Alexandre Muzio
Andres Felipe Cruz Salinas
Liyang Lu
Amr Hendy
Samyam Rajbhandari
Yuxiong He
Hany Awadalla
MoE
22 Sep 2021
Aphasic Speech Recognition using a Mixture of Speech Intelligibility Experts
M. Perez
Zakaria Aldeneh
E. Provost
MoE
25 Aug 2020