Gradual DropIn of Layers to Train Very Deep Neural Networks
arXiv:1511.06951 · 22 November 2015
L. Smith, Emily M. Hand, T. Doster
AI4CE

Papers citing "Gradual DropIn of Layers to Train Very Deep Neural Networks" (12 papers)
 1. On Efficient Training of Large-Scale Deep Learning Models: A Literature Review
    Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, Dacheng Tao · VLM · 07 Apr 2023

 2. A Survey on Dropout Methods and Experimental Verification in Recommendation
    Yong Li, Weizhi Ma, C. L. Philip Chen, Hao Fei, Yiqun Liu, Shaoping Ma, Yue Yang · 05 Apr 2022

 3. Automated Progressive Learning for Efficient Training of Vision Transformers
    Changlin Li, Bohan Zhuang, Guangrun Wang, Xiaodan Liang, Xiaojun Chang, Yi Yang · 28 Mar 2022

 4. Dynamic Hierarchical Mimicking Towards Consistent Optimization Objectives
    Duo Li, Qifeng Chen · 24 Mar 2020

 5. Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations
    Bohan Zhuang, Jing Liu, Mingkui Tan, Lingqiao Liu, Ian Reid, Chunhua Shen · MQ · 10 Aug 2019

 6. Effective and Efficient Dropout for Deep Convolutional Neural Networks
    Shaofeng Cai, Jinyang Gao, Gang Chen, Beng Chin Ooi, Wei Wang, Meihui Zhang · BDL · 06 Apr 2019

 7. AccUDNN: A GPU Memory Efficient Accelerator for Training Ultra-deep Neural Networks
    Jinrong Guo, Wantao Liu, Wang Wang, Q. Lu, Songlin Hu, Jizhong Han, Ruixuan Li · 21 Jan 2019

 8. Gradual Learning of Recurrent Neural Networks
    Ziv Aharoni, Gal Rattner, Haim Permuter · AI4CE · 29 Aug 2017

 9. Mollifying Networks
    Çağlar Gülçehre, Marcin Moczulski, Francesco Visin, Yoshua Bengio · 17 Aug 2016

10. Regularization for Unsupervised Deep Neural Nets
    Baiyang Wang, Diego Klabjan · BDL · 15 Aug 2016

11. Deep Networks with Stochastic Depth
    Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, Kilian Q. Weinberger · 30 Mar 2016

12. Improving neural networks by preventing co-adaptation of feature detectors
    Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov · VLM · 03 Jul 2012