Investigating the potential of Sparse Mixtures-of-Experts for multi-domain neural machine translation

1 July 2024
Nadezhda Chirkova, Vassilina Nikoulina, Jean-Luc Meunier, Alexandre Berard
MoE

Papers citing "Investigating the potential of Sparse Mixtures-of-Experts for multi-domain neural machine translation"

4 / 4 papers shown
Tutel: Adaptive Mixture-of-Experts at Scale
Changho Hwang, Wei Cui, Yifan Xiong, Ziyue Yang, Ze Liu, ..., Joe Chau, Peng Cheng, Fan Yang, Mao Yang, Y. Xiong
MoE
07 Jun 2022
Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference
Sneha Kudugunta, Yanping Huang, Ankur Bapna, M. Krikun, Dmitry Lepikhin, Minh-Thang Luong, Orhan Firat
MoE
24 Sep 2021
Scalable and Efficient MoE Training for Multitask Multilingual Models
Young Jin Kim, A. A. Awan, Alexandre Muzio, Andres Felipe Cruz Salinas, Liyang Lu, Amr Hendy, Samyam Rajbhandari, Yuxiong He, Hany Awadalla
MoE
22 Sep 2021
Efficient Inference for Multilingual Neural Machine Translation
Alexandre Berard, Dain Lee, S. Clinchant, K. Jung, Vassilina Nikoulina
14 Sep 2021