ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

NodeDrop: A Condition for Reducing Network Size without Effect on Output

arXiv: 1906.01026 (v2, latest)
3 June 2019
Louis Jensen, Jacob A. Harer, S. Chin

Papers citing "NodeDrop: A Condition for Reducing Network Size without Effect on Output"

13 / 13 papers shown

 1. Learning Efficient Convolutional Networks through Network Slimming
    Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, Changshui Zhang
    22 Aug 2017 · 2,426 citations

 2. Variational Dropout Sparsifies Deep Neural Networks
    Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov
    19 Jan 2017 · 831 citations

 3. Pruning Filters for Efficient ConvNets
    Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf
    31 Aug 2016 · 3,705 citations

 4. Learning Structured Sparsity in Deep Neural Networks
    W. Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
    12 Aug 2016 · 2,341 citations

 5. Layer Normalization
    Jimmy Lei Ba, J. Kiros, Geoffrey E. Hinton
    21 Jul 2016 · 10,531 citations

 6. XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
    Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi
    16 Mar 2016 · 4,369 citations

 7. Sparsifying Neural Network Connections for Face Recognition
    Yi Sun, Xiaogang Wang, Xiaoou Tang
    07 Dec 2015 · 141 citations

 8. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding
    Song Han, Huizi Mao, W. Dally
    01 Oct 2015 · 8,859 citations

 9. Learning both Weights and Connections for Efficient Neural Networks
    Song Han, Jeff Pool, J. Tran, W. Dally
    08 Jun 2015 · 6,700 citations

10. Fast ConvNets Using Group-wise Brain Damage
    V. Lebedev, Victor Lempitsky
    08 Jun 2015 · 449 citations

11. Deep Learning with Limited Numerical Precision
    Suyog Gupta, A. Agrawal, K. Gopalakrishnan, P. Narayanan
    09 Feb 2015 · 2,049 citations

12. Compressing Deep Convolutional Networks using Vector Quantization
    Yunchao Gong, Liu Liu, Ming Yang, Lubomir D. Bourdev
    18 Dec 2014 · 1,171 citations

13. Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
    Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, Rob Fergus
    02 Apr 2014 · 1,693 citations