arXiv:1802.08021
SparCML: High-Performance Sparse Communication for Machine Learning
Cédric Renggli, Saleh Ashkboos, Mehdi Aghagolzadeh, Dan Alistarh, Torsten Hoefler
22 February 2018
Papers citing "SparCML: High-Performance Sparse Communication for Machine Learning" (19 papers)
Compressed and Sparse Models for Non-Convex Decentralized Learning
Andrew Campbell, Hang Liu, Leah Woldemariam, Anna Scaglione. 09 Nov 2023.

STen: Productive and Efficient Sparsity in PyTorch
Andrei Ivanov, Nikoli Dryden, Tal Ben-Nun, Saleh Ashkboos, Torsten Hoefler. 15 Apr 2023.

A Theory of I/O-Efficient Sparse Neural Network Inference
Niels Gleinig, Tal Ben-Nun, Torsten Hoefler. 03 Jan 2023.

L-GreCo: Layerwise-Adaptive Gradient Compression for Efficient and Accurate Deep Learning
Mohammadreza Alimohammadi, I. Markov, Elias Frantar, Dan Alistarh. 31 Oct 2022.

HammingMesh: A Network Topology for Large-Scale Deep Learning
Torsten Hoefler, Tommaso Bonato, Daniele De Sensi, Salvatore Di Girolamo, Shigang Li, Marco Heddes, Jon Belk, Deepak Goel, Miguel Castro, Steve Scott. Tags: 3DH, GNN, AI4CE. 03 Sep 2022.

Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks
Seyyedali Hosseinalipour, Su Wang, Nicolò Michelusi, Vaneet Aggarwal, Christopher G. Brinton, David J. Love, M. Chiang. 07 Feb 2022.

Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines
Shigang Li, Torsten Hoefler. Tags: GNN, AI4CE, LRM. 14 Jul 2021.

Flare: Flexible In-Network Allreduce
Daniele De Sensi, Salvatore Di Girolamo, Saleh Ashkboos, Shigang Li, Torsten Hoefler. 29 Jun 2021.

An Oracle for Guiding Large-Scale Model/Hybrid Parallel Training of Convolutional Neural Networks
A. Kahira, Truong Thao Nguyen, L. Bautista-Gomez, Ryousei Takano, Rosa M. Badia, M. Wahib. 19 Apr 2021.

EventGraD: Event-Triggered Communication in Parallel Machine Learning
Soumyadip Ghosh, B. Aquino, V. Gupta. Tags: FedML. 12 Mar 2021.

Sparse Communication for Training Deep Networks
Negar Foroutan, Martin Jaggi. Tags: FedML. 19 Sep 2020.

Reducing Communication in Graph Neural Network Training
Alok Tripathy, Katherine Yelick, A. Buluç. Tags: GNN. 07 May 2020.

Communication optimization strategies for distributed deep neural network training: A survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao. 06 Mar 2020.

Communication-Efficient Decentralized Learning with Sparsification and Adaptive Peer Selection
Zhenheng Tang, S. Shi, X. Chu. Tags: FedML. 22 Feb 2020.

Layer-wise Adaptive Gradient Sparsification for Distributed Deep Learning with Convergence Guarantees
S. Shi, Zhenheng Tang, Qiang-qiang Wang, Kaiyong Zhao, X. Chu. 20 Nov 2019.

HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks
Zhen Dong, Z. Yao, Yaohui Cai, Daiyaan Arfeen, A. Gholami, Michael W. Mahoney, Kurt Keutzer. Tags: MQ. 10 Nov 2019.

Federated Learning over Wireless Fading Channels
M. Amiri, Deniz Gunduz. 23 Jul 2019.

A Distributed Synchronous SGD Algorithm with Global Top-k Sparsification for Low Bandwidth Networks
S. Shi, Qiang-qiang Wang, Kaiyong Zhao, Zhenheng Tang, Yuxin Wang, Xiang Huang, Xiaowen Chu. 14 Jan 2019.

Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
Tal Ben-Nun, Torsten Hoefler. Tags: GNN. 26 Feb 2018.