Learning Deep ResNet Blocks Sequentially using Boosting Theory

arXiv:1706.04964

15 June 2017
Furong Huang, Jordan T. Ash, John Langford, Robert Schapire

Papers citing "Learning Deep ResNet Blocks Sequentially using Boosting Theory"

23 citing papers shown
ColA: Collaborative Adaptation with Gradient Learning
Enmao Diao, Qi Le, Suya Wu, Xinran Wang, Ali Anwar, Jie Ding, Vahid Tarokh
22 Apr 2024

G-EvoNAS: Evolutionary Neural Architecture Search Based on Network Growth
Juan Zou, Weiwei Jiang, Yizhang Xia, Yuan Liu, Zhanglu Hou
05 Mar 2024

Go beyond End-to-End Training: Boosting Greedy Local Learning with Context Supply
Chengting Yu, Fengzhao Zhang, Hanzhi Ma, Aili Wang, Er-ping Li
12 Dec 2023

AudioFormer: Audio Transformer learns audio feature representations from discrete acoustic codes
Zhaohui Li, Haitao Wang, Xinghua Jiang
14 Aug 2023

A Gradient Boosting Approach for Training Convolutional and Deep Neural Networks
S. Emami, Gonzalo Martínez-Muñoz
22 Feb 2023

Local Learning on Transformers via Feature Reconstruction
P. Pathak, Jingwei Zhang, Dimitris Samaras
29 Dec 2022

Boosted Dynamic Neural Networks
Haichao Yu, Haoxiang Li, G. Hua, Gao Huang, Humphrey Shi
30 Nov 2022

Block-wise Training of Residual Networks via the Minimizing Movement Scheme
Skander Karkar, Ibrahim Ayed, Emmanuel de Bézenac, Patrick Gallinari
03 Oct 2022

Building Robust Ensembles via Margin Boosting
Dinghuai Zhang, Hongyang R. Zhang, Aaron Courville, Yoshua Bengio, Pradeep Ravikumar, A. Suggala
07 Jun 2022

Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods
Shiyu Duan, José C. Príncipe
09 Jan 2021

Generalized Negative Correlation Learning for Deep Ensembling
Sebastian Buschjäger, Lukas Pfahler, K. Morik
05 Nov 2020

Why Layer-Wise Learning is Hard to Scale-up and a Possible Solution via Accelerated Downsampling
Wenchi Ma, Miao Yu, Kaidong Li, Guanghui Wang
15 Oct 2020

Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks
Kenta Oono, Taiji Suzuki
15 Jun 2020

Gradient Boosting Neural Networks: GrowNet
Sarkhan Badirli, Xuanqing Liu, Zhengming Xing, Avradeep Bhowmik, Khoa D. Doan, S. Keerthi
19 Feb 2020

Sub-Architecture Ensemble Pruning in Neural Architecture Search
Yijun Bian, Qingquan Song, Mengnan Du, Jun Yao, Huanhuan Chen, Xia Hu
01 Oct 2019

Sequential Training of Neural Networks with Gradient Boosting
S. Emami, Gonzalo Martínez-Muñoz
26 Sep 2019

Residual Networks Behave Like Boosting Algorithms
Chapman Siu
25 Sep 2019

AdaGCN: Adaboosting Graph Convolutional Networks into Deep Models
Ke Sun, Zhanxing Zhu, Zhouchen Lin
14 Aug 2019

Learning From Noisy Labels By Regularized Estimation Of Annotator Confusion
Ryutaro Tanno, A. Saeedi, S. Sankaranarayanan, Daniel C. Alexander, N. Silberman
10 Feb 2019

Decoupled Greedy Learning of CNNs
Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon
23 Jan 2019

Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?
Shilin Zhu, Xin Dong, Hao Su
20 Jun 2018

Auto-Meta: Automated Gradient Based Meta Learner Search
Jaehong Kim, Sangyeul Lee, Sungwan Kim, Moonsu Cha, Jung Kwon Lee, Youngduck Choi, Yongseok Choi, Dong-Yeon Cho, Jiwon Kim
11 Jun 2018

Functional Gradient Boosting based on Residual Network Perception
Atsushi Nitanda, Taiji Suzuki
25 Feb 2018