Learning the Number of Neurons in Deep Networks
J. Álvarez, Mathieu Salzmann
19 November 2016 · arXiv:1611.06321
Papers citing "Learning the Number of Neurons in Deep Networks" (50 of 204 shown)
Distilling Image Classifiers in Object Detectors · Shuxuan Guo, J. Álvarez, Mathieu Salzmann · VLM · 09 Jun 2021
Feature Flow Regularization: Improving Structured Sparsity in Deep Neural Networks · Yue Wu, Yuan Lan, Luchan Zhang, Yang Xiang · 05 Jun 2021
Regularization and Reparameterization Avoid Vanishing Gradients in Sigmoid-Type Networks · Leni Ven, Johannes Lederer · ODL · 04 Jun 2021
Neural Network Training Using ℓ1-Regularization and Bi-fidelity Data · Subhayan De, Alireza Doostan · 27 May 2021
Schematic Memory Persistence and Transience for Efficient and Robust Continual Learning · Yuyang Gao, Giorgio Ascoli, Liang Zhao · 05 May 2021
Alternate Model Growth and Pruning for Efficient Training of Recommendation Systems · Xiaocong Du, Bhargav Bhushanam, Jiecao Yu, Dhruv Choudhary, Tianxiang Gao, Sherman Wong, Louis Feng, Jongsoo Park, Yu Cao, A. Kejariwal · 04 May 2021
Stealthy Backdoors as Compression Artifacts · Yulong Tian, Fnu Suya, Fengyuan Xu, David Evans · 30 Apr 2021
Modeling Ideological Salience and Framing in Polarized Online Groups with Graph Neural Networks and Structured Sparsity · Valentin Hofmann, Xiaowen Dong, J. Pierrehumbert, Hinrich Schütze · 18 Apr 2021
Distilling Object Detectors via Decoupled Features · Jianyuan Guo, Kai Han, Yunhe Wang, Han Wu, Xinghao Chen, Chunjing Xu, Chang Xu · 26 Mar 2021
Reframing Neural Networks: Deep Structure in Overcomplete Representations · Calvin Murdock, George Cazenavette, Simon Lucey · BDL · 10 Mar 2021
Consistent Sparse Deep Learning: Theory and Computation · Y. Sun, Qifan Song, F. Liang · BDL · 25 Feb 2021
ChipNet: Budget-Aware Pruning with Heaviside Continuous Approximations · Rishabh Tiwari, Udbhav Bamba, Arnav Chavan, D. K. Gupta · 14 Feb 2021
AACP: Model Compression by Accurate and Automatic Channel Pruning · Lanbo Lin, Yujiu Yang, Zhenhua Guo · MQ · 31 Jan 2021
Deep Model Compression based on the Training History · S. H. Shabbeer Basha, M. Farazuddin, Viswanath Pulabaigari, S. Dubey, Snehasis Mukherjee · VLM · 30 Jan 2021
Machine Learning for the Detection and Identification of Internet of Things (IoT) Devices: A Survey · Yongxin Liu, Jian Wang, Jianqiang Li, Shuteng Niu, Haoze Song · 25 Jan 2021
On tuning deep learning models: a data mining perspective · M. Öztürk · 19 Nov 2020
Channel Planting for Deep Neural Networks using Knowledge Distillation · Kakeru Mitsuno, Yuichiro Nomura, Takio Kurita · 04 Nov 2020
Filter Pruning using Hierarchical Group Sparse Regularization for Deep Convolutional Neural Networks · Kakeru Mitsuno, Takio Kurita · 04 Nov 2020
Adaptive Dense-to-Sparse Paradigm for Pruning Online Recommendation System with Non-Stationary Data · Mao Ye, Dhruv Choudhary, Jiecao Yu, Ellie Wen, Zeliang Chen, Jiyan Yang, Jongsoo Park, Qiang Liu, A. Kejariwal · 16 Oct 2020
Improving Network Slimming with Nonconvex Regularization · Kevin Bui, Fredrick Park, Shuai Zhang, Y. Qi, Jack Xin · 03 Oct 2020
Self-grouping Convolutional Neural Networks · Qingbei Guo, Xiaojun Wu, J. Kittler, Zhiquan Feng · 29 Sep 2020
Pruning Convolutional Filters using Batch Bridgeout · Najeeb Khan, Ian Stavness · 23 Sep 2020
Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot · Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Liwei Wang, J. Lee · 22 Sep 2020
3D_DEN: Open-ended 3D Object Recognition using Dynamically Expandable Networks · Sudhakaran Jain, H. Kasaei · 3DPC · 15 Sep 2020
A Progressive Sub-Network Searching Framework for Dynamic Inference · Li Yang, Zhezhi He, Yu Cao, Deliang Fan · AI4CE · 11 Sep 2020
A Partial Regularization Method for Network Compression · E. Zhenqian, Weiguo Gao · 03 Sep 2020
Training Sparse Neural Networks using Compressed Sensing · Jonathan W. Siegel, Jianhong Chen, Pengchuan Zhang, Jinchao Xu · 21 Aug 2020
Neural Architecture Search as Sparse Supernet · Yunsheng Wu, Aoming Liu, Zhiwu Huang, Siwei Zhang, Luc Van Gool · 31 Jul 2020
Deep Learning Methods for Solving Linear Inverse Problems: Research Directions and Paradigms · Yanna Bai, Wei Chen, Jie Chen, Weisi Guo · 27 Jul 2020
T-Basis: a Compact Representation for Neural Networks · Anton Obukhov, M. Rakhuba, Stamatios Georgoulis, Menelaos Kanakis, Dengxin Dai, Luc Van Gool · 13 Jul 2020
ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting · Xiaohan Ding, Tianxiang Hao, Jianchao Tan, Ji Liu, Jungong Han, Yuchen Guo, Guiguang Ding · 07 Jul 2020
DessiLBI: Exploring Structural Sparsity of Deep Networks via Differential Inclusion Paths · Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, Yuan Yao · 04 Jul 2020
Efficient Proximal Mapping of the 1-path-norm of Shallow Networks · Fabian Latorre, Paul Rolland, Nadav Hallak, V. Cevher · AAML · 02 Jul 2020
Layer Sparsity in Neural Networks · Mohamed Hebiri, Johannes Lederer · 28 Jun 2020
Embedding Differentiable Sparsity into Deep Neural Network · Yongjin Lee · 23 Jun 2020
Dynamic Model Pruning with Feedback · Tao R. Lin, Sebastian U. Stich, Luis Barba, Daniil Dmitriev, Martin Jaggi · 12 Jun 2020
ADMP: An Adversarial Double Masks Based Pruning Framework For Unsupervised Cross-Domain Compression · Xiaoyu Feng, Zhuqing Yuan, Guijin Wang, Yongpan Liu · 07 Jun 2020
Pruning via Iterative Ranking of Sensitivity Statistics · Stijn Verdenius, M. Stol, Patrick Forré · AAML · 01 Jun 2020
PruneNet: Channel Pruning via Global Importance · A. Khetan, Zohar Karnin · 22 May 2020
Generalized Bayesian Posterior Expectation Distillation for Deep Neural Networks · Meet P. Vadera, B. Jalaeian, Benjamin M. Marlin · BDL, FedML, UQCV · 16 May 2020
Out-of-the-box channel pruned networks · Ragav Venkatesan, Gurumurthy Swaminathan, Xiong Zhou, Anna Luo · 30 Apr 2020
Do We Need Fully Connected Output Layers in Convolutional Networks? · Zhongchao Qian, Tyler L. Hayes, Kushal Kafle, Christopher Kanan · 28 Apr 2020
torchgpipe: On-the-fly Pipeline Parallelism for Training Giant Models · Chiheon Kim, Heungsub Lee, Myungryong Jeong, Woonhyuk Baek, Boogeon Yoon, Ildoo Kim, Sungbin Lim, Sungwoong Kim · MoE, AI4CE · 21 Apr 2020
Hierarchical Group Sparse Regularization for Deep Convolutional Neural Networks · Kakeru Mitsuno, J. Miyao, Takio Kurita · 09 Apr 2020
Dataless Model Selection with the Deep Frame Potential · Calvin Murdock, Simon Lucey · 30 Mar 2020
Continual Learning with Node-Importance based Adaptive Group Sparse Regularization · Sangwon Jung, Hongjoon Ahn, Sungmin Cha, Taesup Moon · CLL · 30 Mar 2020
Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression · Yawei Li, Shuhang Gu, Christoph Mayer, Luc Van Gool, Radu Timofte · 19 Mar 2020
Sparsity Meets Robustness: Channel Pruning for the Feynman-Kac Formalism Principled Robust Deep Neural Nets · Thu Dinh, Bao Wang, Andrea L. Bertozzi, Stanley J. Osher · AAML · 02 Mar 2020
An Equivalence between Bayesian Priors and Penalties in Variational Inference · Pierre Wolinski, Guillaume Charpiat, Yann Ollivier · BDL · 01 Feb 2020
Filter Sketch for Network Pruning · Mingbao Lin, Liujuan Cao, Shaojie Li, QiXiang Ye, Yonghong Tian, Jianzhuang Liu, Q. Tian, Rongrong Ji · CLIP, 3DPC · 23 Jan 2020