Parallax: Sparsity-aware Data Parallel Training of Deep Neural Networks
arXiv:1808.02621, 8 August 2018
Soojeong Kim, Gyeong-In Yu, Hojin Park, Sungwoo Cho, Eunji Jeong, Hyeonmin Ha, Sanha Lee, Joo Seong Jeong, Byung-Gon Chun

Papers citing "Parallax: Sparsity-aware Data Parallel Training of Deep Neural Networks"

10 of 10 papers shown

1. Kimad: Adaptive Gradient Compression with Bandwidth Awareness
   Jihao Xin, Ivan Ilin, Shunkang Zhang, Marco Canini, Peter Richtárik
   13 Dec 2023

2. A Survey From Distributed Machine Learning to Distributed Deep Learning
   Mohammad Dehghani, Zahra Yazdanparast
   11 Jul 2023

3. Expediting Distributed DNN Training with Device Topology-Aware Graph Deployment
   Shiwei Zhang, Xiaodong Yi, Lansong Diao, Chuan Wu, Siyu Wang, W. Lin
   13 Feb 2023

4. Optimizing DNN Compilation for Distributed Training with Joint OP and Tensor Fusion
   Xiaodong Yi, Shiwei Zhang, Lansong Diao, Chuan Wu, Zhen Zheng, Shiqing Fan, Siyu Wang, Jun Yang, W. Lin
   26 Sep 2022

5. PICASSO: Unleashing the Potential of GPU-centric Training for Wide-and-deep Recommender Systems
   Yuanxing Zhang, Langshi Chen, Siran Yang, Man Yuan, Hui-juan Yi, ..., Yong Li, Dingyang Zhang, Wei Lin, Lin Qu, Bo Zheng
   11 Apr 2022

6. HeterPS: Distributed Deep Learning With Reinforcement Learning Based Scheduling in Heterogeneous Environments
   Ji Liu, Zhihua Wu, Dianhai Yu, Yanjun Ma, Danlei Feng, Minxu Zhang, Xinxuan Wu, Xuefeng Yao, Dejing Dou
   20 Nov 2021

7. ScaleFreeCTR: MixCache-based Distributed Training System for CTR Models with Huge Embedding Table
   Huifeng Guo, Wei Guo, Yong Gao, Ruiming Tang, Xiuqiang He, Wenzhi Liu
   17 Apr 2021

8. Workload-aware Automatic Parallelization for Multi-GPU DNN Training
   Sungho Shin, Y. Jo, Jungwook Choi, Swagath Venkataramani, Vijayalakshmi Srinivasan, Wonyong Sung
   05 Nov 2018

9. Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
   Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
   10 Feb 2017

10. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
    Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
    26 Sep 2016