arXiv:1905.01067 (v4, latest)
Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask
Hattie Zhou, Janice Lan, Rosanne Liu, J. Yosinski
3 May 2019
Papers citing "Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask" (50 of 250 shown)
- Pruning Randomly Initialized Neural Networks with Iterative Randomization. Daiki Chijiwa, Shin'ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, T. Inoue. 17 Jun 2021.
- Towards Understanding Iterative Magnitude Pruning: Why Lottery Tickets Win. Jaron Maene, Mingxiao Li, Marie-Francine Moens. 13 Jun 2021.
- Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks. Melih Barsbey, Romain Chor, Murat A. Erdogdu, Gaël Richard, Umut Simsekli. 07 Jun 2021.
- Top-KAST: Top-K Always Sparse Training. Siddhant M. Jayakumar, Razvan Pascanu, Jack W. Rae, Simon Osindero, Erich Elsen. 07 Jun 2021.
- Efficient Lottery Ticket Finding: Less Data is More. Zhenyu Zhang, Xuxi Chen, Tianlong Chen, Zhangyang Wang. 06 Jun 2021.
- Can Subnetwork Structure be the Key to Out-of-Distribution Generalization? Dinghuai Zhang, Kartik Ahuja, Yilun Xu, Yisen Wang, Aaron Courville. 05 Jun 2021.
- Super Tickets in Pre-Trained Language Models: From Model Compression to Improving Generalization. Chen Liang, Simiao Zuo, Minshuo Chen, Haoming Jiang, Xiaodong Liu, Pengcheng He, T. Zhao, Weizhu Chen. 25 May 2021.
- A Probabilistic Approach to Neural Network Pruning. Xin-Yao Qian, Diego Klabjan. 20 May 2021.
- Adapting by Pruning: A Case Study on BERT. Yang Gao, Nicolo Colombo, Wen Wang. 07 May 2021.
- Effective Sparsification of Neural Networks with Global Sparsity Constraint. Xiao Zhou, Weizhong Zhang, Hang Xu, Tong Zhang. 03 May 2021.
- Sifting out the features by pruning: Are convolutional networks the winning lottery ticket of fully connected ones? Franco Pellegrini, Giulio Biroli. 27 Apr 2021.
- Communication-Efficient and Personalized Federated Lottery Ticket Learning. Sejin Seo, Seung-Woo Ko, Jihong Park, Seong-Lyun Kim, M. Bennis. 26 Apr 2021.
- Lottery Jackpots Exist in Pre-trained Models. Yuxin Zhang, Mingbao Lin, Yan Wang, Chia-Wen Lin, Rongrong Ji. 18 Apr 2021.
- The Impact of Activation Sparsity on Overfitting in Convolutional Neural Networks. Karim Huesmann, Luis Garcia Rodriguez, Lars Linsen, Benjamin Risse. 13 Apr 2021.
- Charged particle tracking via edge-classifying interaction networks. G. Dezoort, S. Thais, Javier Mauricio Duarte, Vesal Razavimaleki, M. Atkinson, I. Ojalvo, Mark S. Neubauer, P. Elmer. 30 Mar 2021.
- The Elastic Lottery Ticket Hypothesis. Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Jingjing Liu, Zhangyang Wang. 30 Mar 2021.
- Self-Constructing Neural Networks Through Random Mutation. Samuel Schmidgall. 29 Mar 2021.
- Cascade Weight Shedding in Deep Neural Networks: Benefits and Pitfalls for Network Pruning. K. Azarian, Fatih Porikli. 19 Mar 2021.
- Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network. James Diffenderfer, B. Kailkhura. 17 Mar 2021.
- Efficient Sparse Artificial Neural Networks. Seyed Majid Naji, Azra Abtahi, F. Marvasti. 13 Mar 2021.
- Recent Advances on Neural Network Pruning at Initialization. Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu. 11 Mar 2021.
- Trainless Model Performance Estimation for Neural Architecture Search. Ekaterina Gracheva. 10 Mar 2021.
- Knowledge Evolution in Neural Networks. Ahmed Taha, Abhinav Shrivastava, L. Davis. 09 Mar 2021.
- Artificial Neural Networks generated by Low Discrepancy Sequences. A. Keller, Matthijs Van Keirsbilck. 05 Mar 2021.
- Clusterability in Neural Networks. Daniel Filan, Stephen Casper, Shlomi Hod, Cody Wild, Andrew Critch, Stuart J. Russell. 04 Mar 2021.
- Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference. B. Hawks, Javier Mauricio Duarte, Nicholas J. Fraser, Alessandro Pappalardo, N. Tran, Yaman Umuroglu. 22 Feb 2021.
- Truly Sparse Neural Networks at Scale. Selima Curci, Decebal Constantin Mocanu, Mykola Pechenizkiy. 02 Feb 2021.
- Slot Machines: Discovering Winning Combinations of Random Weights in Neural Networks. Maxwell Mbabilla Aladago, Lorenzo Torresani. 16 Jan 2021.
- EarlyBERT: Efficient BERT Training via Early-bird Lottery Tickets. Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Zhangyang Wang, Jingjing Liu. 31 Dec 2020.
- Reservoir Transformers. Sheng Shen, Alexei Baevski, Ari S. Morcos, Kurt Keutzer, Michael Auli, Douwe Kiela. 30 Dec 2020.
- Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks. Xiangyu Chang, Yingcong Li, Samet Oymak, Christos Thrampoulidis. 16 Dec 2020.
- The Lottery Ticket Hypothesis for Object Recognition. Sharath Girish, Shishira R. Maiya, Kamal Gupta, Hao Chen, L. Davis, Abhinav Shrivastava. 08 Dec 2020.
- Effect of the initial configuration of weights on the training and function of artificial neural networks. Ricardo J. Jesus, Mário Antunes, R. A. D. Costa, S. Dorogovtsev, J. F. F. Mendes, R. Aguiar. 04 Dec 2020.
- An Once-for-All Budgeted Pruning Framework for ConvNets Considering Input Resolution. Wenyu Sun, Jian Cao, Pengtao Xu, Xiangcheng Liu, Pu Li. 02 Dec 2020.
- Deconstructing the Structure of Sparse Neural Networks. M. V. Gelder, Mitchell Wortsman, Kiana Ehsani. 30 Nov 2020.
- FreezeNet: Full Performance by Reduced Storage Costs. Paul Wimmer, Jens Mehnert, Alexandru Paul Condurache. 28 Nov 2020.
- Rethinking Weight Decay For Efficient Neural Network Pruning. Hugo Tessier, Vincent Gripon, Mathieu Léonardon, M. Arzel, T. Hannagan, David Bertrand. 20 Nov 2020.
- Observation Space Matters: Benchmark and Optimization Algorithm. J. Kim, Sehoon Ha. 02 Nov 2020.
- Methods for Pruning Deep Neural Networks. S. Vadera, Salem Ameen. 31 Oct 2020.
- Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win. Utku Evci, Yani Andrew Ioannou, Cem Keskin, Yann N. Dauphin. 07 Oct 2020.
- Winning Lottery Tickets in Deep Generative Models. Neha Kalibhat, Yogesh Balaji, Soheil Feizi. 05 Oct 2020.
- Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks. Róbert Csordás, Sjoerd van Steenkiste, Jürgen Schmidhuber. 05 Oct 2020.
- Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot. Jingtong Su, Yihang Chen, Tianle Cai, Tianhao Wu, Ruiqi Gao, Liwei Wang, Jason D. Lee. 22 Sep 2020.
- Disentangling Neural Architectures and Weights: A Case Study in Supervised Classification. Nicolo Colombo, Yang Gao. 11 Sep 2020.
- Prune Responsibly. Michela Paganini. 10 Sep 2020.
- It's Hard for Neural Networks To Learn the Game of Life. Jacob Mitchell Springer, Garrett Kenyon. 03 Sep 2020.
- Against Membership Inference Attack: Pruning is All You Need. Yijue Wang, Chenghong Wang, Zigeng Wang, Shangli Zhou, Hang Liu, J. Bi, Caiwen Ding, Sanguthevar Rajasekaran. 28 Aug 2020.
- HALO: Learning to Prune Neural Networks with Shrinkage. Skyler Seto, M. Wells, Wenyu Zhang. 24 Aug 2020.
- Training Sparse Neural Networks using Compressed Sensing. Jonathan W. Siegel, Jianhong Chen, Pengchuan Zhang, Jinchao Xu. 21 Aug 2020.
- Lottery Tickets in Linear Models: An Analysis of Iterative Magnitude Pruning. Bryn Elesedy, Varun Kanade, Yee Whye Teh. 16 Jul 2020.