QuantMoE-Bench: Examining Post-Training Quantization for Mixture-of-Experts
arXiv: 2406.08155 · 12 June 2024
Authors: Pingzhi Li, Xiaolong Jin, Yu Cheng, Tianlong Chen
Tags: MQ, MoE
Links: ArXiv · PDF · HTML
Papers citing "QuantMoE-Bench: Examining Post-Training Quantization for Mixture-of-Experts" (5 / 5 papers shown)

Title | Authors | Tags | Metrics | Date
MoQa: Rethinking MoE Quantization with Multi-stage Data-model Distribution Awareness | Zihao Zheng, Xiuping Cui, Size Zheng, Maoliang Li, Jiayu Chen, Yun Liang, Xiang Chen | MQ, MoE | 69 · 0 · 0 | 27 Mar 2025
Foot-In-The-Door: A Multi-turn Jailbreak for LLMs | Zixuan Weng, Xiaolong Jin, Jinyuan Jia, Xinsong Zhang | AAML | 169 · 0 · 0 | 27 Feb 2025
Scaling Laws for Fine-Grained Mixture of Experts | Jakub Krajewski, Jan Ludziejewski, Kamil Adamczewski, Maciej Pióro, Michal Krutul, ..., Krystian Król, Tomasz Odrzygóźdź, Piotr Sankowski, Marek Cygan, Sebastian Jaszczur | MoE | 51 · 54 · 0 | 12 Feb 2024
Scaling Laws for Neural Language Models | Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei | — | 264 · 4,489 · 0 | 23 Jan 2020
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism | M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro | MoE | 245 · 1,826 · 0 | 17 Sep 2019