Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks

7 June 2023
Mohammed Nowaz Rabbani Chowdhury, Shuai Zhang, Ming Wang, Sijia Liu, Pin-Yu Chen
MoE · arXiv:2306.04073 (abs / PDF / HTML)
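The paper's title points to patch-level routing in a Mixture-of-Experts (pMoE) layer for CNNs, where a gating network sends only a few image patches to each expert. The sketch below is a minimal, hypothetical illustration of that idea; the class name, the linear experts (standing in for the paper's convolutional experts), and the top-k-per-expert softmax weighting are all assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch of a patch-level MoE (pMoE) layer: each expert receives
# only its top-k scoring patches per image. Names and details are illustrative.
import torch
import torch.nn as nn


class PatchLevelMoE(nn.Module):
    def __init__(self, patch_dim: int, num_experts: int, patches_per_expert: int):
        super().__init__()
        self.k = patches_per_expert
        # Gating network: scores every patch for every expert.
        self.gate = nn.Linear(patch_dim, num_experts, bias=False)
        # Experts are simple feed-forward blocks here; in the paper's setting
        # they would be (two-layer) convolutional experts.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(patch_dim, patch_dim), nn.ReLU())
             for _ in range(num_experts)]
        )

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, num_patches, patch_dim)
        scores = self.gate(patches)                    # (B, P, E)
        out = torch.zeros_like(patches)
        for e, expert in enumerate(self.experts):
            # Route the k highest-scoring patches (per image) to expert e.
            topk = scores[..., e].topk(self.k, dim=1)  # values/indices: (B, k)
            idx = topk.indices.unsqueeze(-1).expand(-1, -1, patches.size(-1))
            routed = patches.gather(1, idx)            # (B, k, patch_dim)
            weight = torch.softmax(topk.values, dim=1).unsqueeze(-1)
            # Scatter the weighted expert output back to the patch positions.
            out.scatter_add_(1, idx, weight * expert(routed))
        return out


if __name__ == "__main__":
    layer = PatchLevelMoE(patch_dim=64, num_experts=4, patches_per_expert=2)
    x = torch.randn(8, 16, 64)   # 8 images, 16 patches each
    print(layer(x).shape)        # torch.Size([8, 16, 64])
```

Because each expert processes only k of the P patches, the per-expert workload (and, per the paper's claim, the sample complexity) shrinks relative to a dense model that feeds every patch to every expert.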

Papers citing "Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks"

5 of 5 citing papers shown:

Learning Soft Sparse Shapes for Efficient Time-Series Classification
Zhen Liu, Yicheng Luo, Yangqiu Song, Emadeldeen Eldele, Min-man Wu, Qianli Ma
AI4TS · 11 May 2025

Backdoor Attacks Against Patch-based Mixture of Experts
Cedric Chan, Jona te Lintelo, S. Picek
AAML, MoE · 03 May 2025

A Comprehensive Survey of Mixture-of-Experts: Algorithms, Theory, and Applications
Siyuan Mu, Sen Lin
MoE · 10 Mar 2025

Filtered not Mixed: Stochastic Filtering-Based Online Gating for Mixture of Large Language Models
Raeid Saqur, Anastasis Kratsios, Florian Krach, Yannick Limmer, Jacob-Junqi Tian, John Willes, Blanka Horvath, Frank Rudzicz
MoE · 24 Feb 2025

LocMoE: A Low-Overhead MoE for Large Language Model Training
Jing Li, Zhijie Sun, Xuan He, Li Zeng, Yi Lin, Entong Li, Binfan Zheng, Rongqian Zhao, Xin Chen
MoE · 25 Jan 2024