ResearchTrend.AI
Decoupled Greedy Learning of CNNs (arXiv:1901.08164)

23 January 2019
Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon

Papers citing "Decoupled Greedy Learning of CNNs"

22 / 72 papers shown
  • Gradient Forward-Propagation for Large-Scale Temporal Video Modelling
    Mateusz Malinowski, Dimitrios Vytiniotis, G. Swirszcz, Viorica Patraucean, João Carreira (15 Jun 2021)
  • Decoupled Greedy Learning of CNNs for Synchronous and Asynchronous Distributed Learning
    Eugene Belilovsky, Louis Leconte, Lucas Caccia, Michael Eickenberg, Edouard Oyallon (11 Jun 2021)
  • Revisiting Locally Supervised Learning: an Alternative to End-to-end Training
    Yulin Wang, Zanlin Ni, Shiji Song, Le Yang, Gao Huang (26 Jan 2021)
  • Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods
    Shiyu Duan, José C. Príncipe (09 Jan 2021)
  • Parallel Training of Deep Networks with Local Updates
    Michael Laskin, Luke Metz, Seth Nabarrao, Mark Saroufim, Badreddine Noune, Carlo Luschi, Jascha Narain Sohl-Dickstein, Pieter Abbeel (07 Dec 2020)
  • Accumulated Decoupled Learning: Mitigating Gradient Staleness in Inter-Layer Model Parallelization
    Huiping Zhuang, Zhiping Lin, Kar-Ann Toh (03 Dec 2020)
  • Interlocking Backpropagation: Improving depthwise model-parallelism
    Aidan Gomez, Oscar Key, Kuba Perlin, Stephen Gou, Nick Frosst, J. Dean, Y. Gal (08 Oct 2020)
  • Interferometric Graph Transform: a Deep Unsupervised Graph Representation
    Edouard Oyallon (10 Jun 2020)
  • Why should we add early exits to neural networks?
    Simone Scardapane, M. Scarpiniti, E. Baccarelli, A. Uncini (27 Apr 2020)
  • Pipelined Backpropagation at Scale: Training Large Models without Batches
    Atli Kosson, Vitaliy Chiley, Abhinav Venigalla, Joel Hestness, Urs Koster (25 Mar 2020)
  • Contrastive Similarity Matching for Supervised Learning
    Shanshan Qin, N. Mudur, Cengiz Pehlevan (24 Feb 2020)
  • Identifying Critical Neurons in ANN Architectures using Mixed Integer Programming
    M. Elaraby, Guy Wolf, Margarida Carvalho (17 Feb 2020)
  • Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment
    Alexander Ororbia, A. Mali, Daniel Kifer, C. Lee Giles (10 Feb 2020)
  • Sideways: Depth-Parallel Training of Video Models
    Mateusz Malinowski, G. Swirszcz, João Carreira, Viorica Patraucean (17 Jan 2020)
  • Online Learned Continual Compression with Adaptive Quantization Modules
    Lucas Caccia, Eugene Belilovsky, Massimo Caccia, Joelle Pineau (19 Nov 2019)
  • Learning Boolean Circuits with Neural Networks
    Eran Malach, Shai Shalev-Shwartz (25 Oct 2019)
  • Gated Linear Networks
    William H. Guss, Tor Lattimore, David Budden, Avishkar Bhoopchand, Christopher Mattern, ..., Ruslan Salakhutdinov, Jianan Wang, Peter Toth, Simon Schmitt, Marcus Hutter (30 Sep 2019)
  • On the Acceleration of Deep Learning Model Parallelism with Staleness
    An Xu, Zhouyuan Huo, Heng-Chiao Huang (05 Sep 2019)
  • Fully Decoupled Neural Network Learning Using Delayed Gradients
    Huiping Zhuang, Yi Wang, Qinglai Liu, Shuai Zhang, Zhiping Lin (21 Jun 2019)
  • Associated Learning: Decomposing End-to-end Backpropagation based on Auto-encoders and Target Propagation
    Yu-Wei Kao, Hung-Hsuan Chen (13 Jun 2019)
  • Improving Discrete Latent Representations With Differentiable Approximation Bridges
    Jason Ramapuram, Russ Webb (09 May 2019)
  • On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016)