Decoupled Parallel Backpropagation with Convergence Guarantee
Zhouyuan Huo, Bin Gu, Qian Yang, Heng Huang
27 April 2018
arXiv: 1804.10574
Papers citing "Decoupled Parallel Backpropagation with Convergence Guarantee" (21 papers shown):
Asynchronous Stochastic Gradient Descent with Decoupled Backpropagation and Layer-Wise Updates
Cabrel Teguemne Fokam, Khaleelulla Khan Nazeer, Lukas König, David Kappel, Anand Subramoney (08 Oct 2024)
HPFF: Hierarchical Locally Supervised Learning with Patch Feature Fusion
Junhao Su, Chenghao He, Feiyu Zhu, Xiaojie Xu, Dongzhi Guan, Chenyang Si (08 Jul 2024)
PETRA: Parallel End-to-end Training with Reversible Architectures
Stéphane Rivaud, Louis Fournier, Thomas Pumir, Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon (04 Jun 2024)
Forward Direct Feedback Alignment for Online Gradient Estimates of Spiking Neural Networks
Florian Bacho, Dominique Chu (06 Feb 2024)
Go beyond End-to-End Training: Boosting Greedy Local Learning with Context Supply
Chengting Yu, Fengzhao Zhang, Hanzhi Ma, Aili Wang, Er-ping Li (12 Dec 2023)
A Survey From Distributed Machine Learning to Distributed Deep Learning
Mohammad Dehghani, Zahra Yazdanparast (11 Jul 2023)
On Efficient Training of Large-Scale Deep Learning Models: A Literature Review
Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, Dacheng Tao (07 Apr 2023) [VLM]
Deep Incubation: Training Large Models by Divide-and-Conquering
Zanlin Ni, Yulin Wang, Jiangwei Yu, Haojun Jiang, Yu Cao, Gao Huang (08 Dec 2022) [VLM]
Dataloader Parameter Tuner: An Automated Dataloader Parameter Tuner for Deep Learning Models
Jooyoung Park, DoangJoo Synn, XinYu Piao, Jong-Kook Kim (11 Oct 2022)
Block-wise Training of Residual Networks via the Minimizing Movement Scheme
Skander Karkar, Ibrahim Ayed, Emmanuel de Bézenac, Patrick Gallinari (03 Oct 2022)
Layer-Wise Partitioning and Merging for Efficient and Scalable Deep Learning
S. Akintoye, Liangxiu Han, H. Lloyd, Xin Zhang, Darren Dancey, Haoming Chen, Daoqiang Zhang (22 Jul 2022) [FedML]
Efficient Attribute Unlearning: Towards Selective Removal of Input Attributes from Feature Representations
Tao Guo, Song Guo, Jiewei Zhang, Wenchao Xu, Junxiao Wang (27 Feb 2022) [MU]
Harmony: Overcoming the Hurdles of GPU Memory Capacity to Train Massive DNN Models on Commodity Servers
Youjie Li, Amar Phanishayee, D. Murray, Jakub Tarnawski, N. Kim (02 Feb 2022)
Privacy-Preserving Asynchronous Federated Learning Algorithms for Multi-Party Vertically Collaborative Learning
Bin Gu, An Xu, Zhouyuan Huo, Cheng Deng, Heng Huang (14 Aug 2020) [FedML]
DAPPLE: A Pipelined Data Parallel Approach for Training Large Models
Shiqing Fan, Yi Rong, Chen Meng, Zongyan Cao, Siyu Wang, ..., Jun Yang, Lixue Xia, Lansong Diao, Xiaoyong Liu, Wei Lin (02 Jul 2020)
Pipelined Backpropagation at Scale: Training Large Models without Batches
Atli Kosson, Vitaliy Chiley, Abhinav Venigalla, Joel Hestness, Urs Koster (25 Mar 2020)
Pipelined Training with Stale Weights of Deep Convolutional Neural Networks
Lifu Zhang, T. Abdelrahman (29 Dec 2019)
Fully Decoupled Neural Network Learning Using Delayed Gradients
Huiping Zhuang, Yi Wang, Qinglai Liu, Shuai Zhang, Zhiping Lin (21 Jun 2019) [FedML]
Associated Learning: Decomposing End-to-end Backpropagation based on Auto-encoders and Target Propagation
Yu-Wei Kao, Hung-Hsuan Chen (13 Jun 2019) [BDL]
Decoupled Greedy Learning of CNNs
Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon (23 Jan 2019)
Benefits of depth in neural networks
Matus Telgarsky (14 Feb 2016)