ResearchTrend.AI
arXiv:1812.11446
Greedy Layerwise Learning Can Scale to ImageNet
Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon
29 December 2018

Papers citing "Greedy Layerwise Learning Can Scale to ImageNet"

39 of 39 papers shown
HPFF: Hierarchical Locally Supervised Learning with Patch Feature Fusion
Junhao Su, Chenghao He, Feiyu Zhu, Xiaojie Xu, Dongzhi Guan, Chenyang Si
08 Jul 2024

PETRA: Parallel End-to-end Training with Reversible Architectures
Stéphane Rivaud, Louis Fournier, Thomas Pumir, Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon
04 Jun 2024

Forward Learning of Graph Neural Networks
Namyong Park, Xing Wang, Antoine Simoulin, Shuai Yang, Grey Yang, Ryan Rossi, Puja Trivedi, Nesreen K. Ahmed
GNN
16 Mar 2024

Go beyond End-to-End Training: Boosting Greedy Local Learning with Context Supply
Chengting Yu, Fengzhao Zhang, Hanzhi Ma, Aili Wang, Er-ping Li
12 Dec 2023

Can Forward Gradient Match Backpropagation?
Louis Fournier, Stéphane Rivaud, Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon
12 Jun 2023

Block-local learning with probabilistic latent representations
David Kappel, Khaleelulla Khan Nazeer, Cabrel Teguemne Fokam, Christian Mayr, Anand Subramoney
24 May 2023

Local Learning with Neuron Groups
Adeetya Patel, Michael Eickenberg, Eugene Belilovsky
18 Jan 2023

Local Learning on Transformers via Feature Reconstruction
P. Pathak, Jingwei Zhang, Dimitris Samaras
ViT
29 Dec 2022

Deep Incubation: Training Large Models by Divide-and-Conquering
Zanlin Ni, Yulin Wang, Jiangwei Yu, Haojun Jiang, Yu Cao, Gao Huang
VLM
08 Dec 2022

Scaling Forward Gradient With Local Losses
Mengye Ren, Simon Kornblith, Renjie Liao, Geoffrey E. Hinton
07 Oct 2022

Block-wise Training of Residual Networks via the Minimizing Movement Scheme
Skander Karkar, Ibrahim Ayed, Emmanuel de Bézenac, Patrick Gallinari
03 Oct 2022

Seeking Interpretability and Explainability in Binary Activated Neural Networks
Benjamin Leblanc, Pascal Germain
FAtt
07 Sep 2022

Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit
Boaz Barak, Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, Cyril Zhang
18 Jul 2022

Gigapixel Whole-Slide Images Classification using Locally Supervised Learning
Jingwei Zhang, Xin Zhang, Ke Ma, Rajarsi R. Gupta, Joel H. Saltz, Maria Vakalopoulou, Dimitris Samaras
17 Jul 2022

Combinatorial optimization for low bit-width neural networks
Hanxu Zhou, Aida Ashrafi, Matthew B. Blaschko
MQ
04 Jun 2022

Dual Convexified Convolutional Neural Networks
Site Bai, Chuyang Ke, Jean Honorio
27 May 2022

Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions
Aaron Mishkin, Arda Sahiner, Mert Pilanci
OffRL
02 Feb 2022

Deep Layer-wise Networks Have Closed-Form Weights
Chieh-Tsai Wu, A. Masoomi, Arthur Gretton, Jennifer Dy
01 Feb 2022

PhotoWCT$^2$: Compact Autoencoder for Photorealistic Style Transfer Resulting from Blockwise Training and Skip Connections of High-Frequency Residuals
Tai-Yin Chiu, Danna Gurari
22 Oct 2021

Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks
Tolga Ergen, Mert Pilanci
18 Oct 2021

Autonomous Deep Quality Monitoring in Streaming Environments
Andri Ashfahani, Mahardhika Pratama, E. Lughofer, E. Yapp
26 Jun 2021

Progressive Stage-wise Learning for Unsupervised Feature Representation Enhancement
Zefan Li, Chenxi Liu, Alan Yuille, Bingbing Ni, Wenjun Zhang, Wen Gao
SSL
10 Jun 2021

Greedy Hierarchical Variational Autoencoders for Large-Scale Video Prediction
Bohan Wu, Suraj Nair, Roberto Martin-Martin, Li Fei-Fei, Chelsea Finn
DRL
06 Mar 2021

Train your classifier first: Cascade Neural Networks Training from upper layers to lower layers
Shucong Zhang, Cong-Thanh Do, R. Doddipatla, Erfan Loweimi, P. Bell, Steve Renals
09 Feb 2021

Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods
Shiyu Duan, José C. Príncipe
MQ
09 Jan 2021

LOss-Based SensiTivity rEgulaRization: towards deep sparse neural networks
Enzo Tartaglione, Andrea Bragagnolo, Attilio Fiandrotti, Marco Grangetto
ODL, UQCV
16 Nov 2020

Why Layer-Wise Learning is Hard to Scale-up and a Possible Solution via Accelerated Downsampling
Wenchi Ma, Miao Yu, Kaidong Li, Guanghui Wang
15 Oct 2020

Nonparametric Learning of Two-Layer ReLU Residual Units
Zhunxuan Wang, Linyun He, Chunchuan Lyu, Shay B. Cohen
MLT, OffRL
17 Aug 2020

Direct Feedback Alignment Scales to Modern Deep Learning Tasks and Architectures
Julien Launay, Iacopo Poli, François Boniface, Florent Krzakala
23 Jun 2020

Kernelized information bottleneck leads to biologically plausible 3-factor Hebbian learning in deep networks
Roman Pogodin, P. Latham
12 Jun 2020

Contrastive Similarity Matching for Supervised Learning
Shanshan Qin, N. Mudur, Cengiz Pehlevan
SSL, DRL
24 Feb 2020

Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment
Alexander Ororbia, A. Mali, Daniel Kifer, C. Lee Giles
10 Feb 2020

Gated Linear Networks
William H. Guss, Tor Lattimore, David Budden, Avishkar Bhoopchand, Christopher Mattern, ..., Ruslan Salakhutdinov, Jianan Wang, Peter Toth, Simon Schmitt, Marcus Hutter
AI4CE
30 Sep 2019

AdaGCN: Adaboosting Graph Convolutional Networks into Deep Models
Ke Sun, Zhanxing Zhu, Zhouchen Lin
GNN
14 Aug 2019

Associated Learning: Decomposing End-to-end Backpropagation based on Auto-encoders and Target Propagation
Yu-Wei Kao, Hung-Hsuan Chen
BDL
13 Jun 2019

Putting An End to End-to-End: Gradient-Isolated Learning of Representations
Sindy Löwe, Peter O'Connor, Bastiaan S. Veeling
SSL
28 May 2019

Is Deeper Better only when Shallow is Good?
Eran Malach, Shai Shalev-Shwartz
08 Mar 2019

Decoupled Greedy Learning of CNNs
Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon
23 Jan 2019

Training Neural Networks with Local Error Signals
Arild Nøkland, L. Eidnes
20 Jan 2019