SAML: Speaker Adaptive Mixture of LoRA Experts for End-to-End ASR
arXiv:2406.19706 · 28 June 2024
Qiuming Zhao, Guangzhi Sun, Chao Zhang, Mingxing Xu, Thomas Fang Zheng
Topic: MoE
Papers citing "SAML: Speaker Adaptive Mixture of LoRA Experts for End-to-End ASR" (4 papers):
- From Sparse to Soft Mixtures of Experts. J. Puigcerver, C. Riquelme, Basil Mustafa, N. Houlsby. MoE. 02 Aug 2023.
- Adapter-Based Extension of Multi-Speaker Text-to-Speech Model for New Speakers. Cheng-Ping Hsieh, Subhankar Ghosh, Boris Ginsburg. 01 Nov 2022.
- Scalable and Efficient MoE Training for Multitask Multilingual Models. Young Jin Kim, A. A. Awan, Alexandre Muzio, Andres Felipe Cruz Salinas, Liyang Lu, Amr Hendy, Samyam Rajbhandari, Yuxiong He, Hany Awadalla. MoE. 22 Sep 2021.
- Aphasic Speech Recognition using a Mixture of Speech Intelligibility Experts. M. Perez, Zakaria Aldeneh, E. Provost. MoE. 25 Aug 2020.