arXiv: 2102.03773
SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks
7 February 2021
Enzo Tartaglione, Andrea Bragagnolo, Francesco Odierna, Attilio Fiandrotti, Marco Grangetto
Papers citing "SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks" (26 papers shown)
Playing the Lottery With Concave Regularizers for Sparse Trainable Neural Networks
Giulia Fracastoro, Sophie M. Fosson, Andrea Migliorati, G. Calafiore
19 Jan 2025
Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima
Enzo Tartaglione, Andrea Bragagnolo, Marco Grangetto
30 Apr 2020
Channel Pruning via Automatic Structure Search
Mingbao Lin, Rongrong Ji, Yuxin Zhang, Baochang Zhang, Yongjian Wu, Yonghong Tian
23 Jan 2020
Pruning from Scratch
Yulong Wang, Xiaolu Zhang, Lingxi Xie, Jun Zhou, Hang Su, Bo Zhang, Xiaolin Hu
27 Sep 2019
Post-synaptic potential regularization has potential
Enzo Tartaglione, Daniele Perlo, Marco Grangetto
19 Jul 2019
Learning Sparse Networks Using Targeted Dropout
Aidan Gomez, Ivan Zhang, Siddhartha Rao Kamalakara, Divyam Madaan, Kevin Swersky, Y. Gal, Geoffrey E. Hinton
31 May 2019
The State of Sparsity in Deep Neural Networks
Trevor Gale, Erich Elsen, Sara Hooker
25 Feb 2019
Learning Sparse Neural Networks via Sensitivity-Driven Regularization
Enzo Tartaglione, S. Lepsøy, Attilio Fiandrotti, Gianluca Francini
28 Oct 2018
SNIP: Single-shot Network Pruning based on Connection Sensitivity
Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr
04 Oct 2018
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
09 Mar 2018
Learning Sparse Neural Networks through L_0 Regularization
Christos Louizos, Max Welling, Diederik P. Kingma
04 Dec 2017
To prune, or not to prune: exploring the efficacy of pruning for model compression
Michael Zhu, Suyog Gupta
05 Oct 2017
Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
Han Xiao, Kashif Rasul, Roland Vollgraf
25 Aug 2017
Empirical Analysis of the Hessian of Over-Parametrized Neural Networks
Levent Sagun, Utku Evci, V. U. Güney, Yann N. Dauphin, Léon Bottou
14 Jun 2017
Soft Weight-Sharing for Neural Network Compression
Karen Ullrich, Edward Meeds, Max Welling
13 Feb 2017
Variational Dropout Sparsifies Deep Neural Networks
Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov
19 Jan 2017
Dynamic Network Surgery for Efficient DNNs
Yiwen Guo, Anbang Yao, Yurong Chen
16 Aug 2016
Learning Structured Sparsity in Deep Neural Networks
W. Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
12 Aug 2016
Quantized Convolutional Neural Networks for Mobile Devices
Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, Jian Cheng
21 Dec 2015
Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
10 Dec 2015
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
Nicolas Papernot, Patrick McDaniel, Xi Wu, S. Jha, A. Swami
14 Nov 2015
Learning both Weights and Connections for Efficient Neural Networks
Song Han, Jeff Pool, J. Tran, W. Dally
08 Jun 2015
Variational Dropout and the Local Reparameterization Trick
Diederik P. Kingma, Tim Salimans, Max Welling
08 Jun 2015
Fast ConvNets Using Group-wise Brain Damage
V. Lebedev, Victor Lempitsky
08 Jun 2015
Distilling the Knowledge in a Neural Network
Geoffrey E. Hinton, Oriol Vinyals, J. Dean
09 Mar 2015
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
04 Sep 2014