Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask
arXiv:1905.01067 (v4), 3 May 2019
Hattie Zhou, Janice Lan, Rosanne Liu, Jason Yosinski [UQCV]

Papers citing "Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask" (50 of 250 papers shown)

Sparse Weight Averaging with Multiple Particles for Iterative Magnitude Pruning
Moonseok Choi, Hyungi Lee, G. Nam, Juho Lee (24 May 2023)

Probabilistic Modeling: Proving the Lottery Ticket Hypothesis in Spiking Neural Network
Man Yao, Yu-Liang Chou, Guangshe Zhao, Xiawu Zheng, Yonghong Tian, Boxing Xu, Guoqi Li (20 May 2023)

PDP: Parameter-free Differentiable Pruning is All You Need
Minsik Cho, Saurabh N. Adya, Devang Naik (18 May 2023) [VLM]

Sharing Lifelong Reinforcement Learning Knowledge via Modulating Masks
Saptarshi Nath, Christos Peridis, Eseoghene Ben-Iwhiwhu, Xinran Liu, Shirin Dora, Cong Liu, Soheil Kolouri, Andrea Soltoggio (18 May 2023) [CLL]

Learning Activation Functions for Sparse Neural Networks
Mohammad Loni, Aditya Mohan, Mehdi Asadi, Marius Lindauer (18 May 2023)

Rethinking Graph Lottery Tickets: Graph Sparsity Matters
Bo Hui, Jocelyn M Mora, Adrian Dalca, I. Aganj (03 May 2023)

Concept-Monitor: Understanding DNN training through individual neurons
Mohammad Ali Khan, Tuomas P. Oikarinen, Tsui-Wei Weng (26 Apr 2023)

Simulated Annealing in Early Layers Leads to Better Generalization
Amirm. Sarfi, Zahra Karimpour, Muawiz Chaudhary, N. Khalid, Mirco Ravanelli, Sudhir Mudur, Eugene Belilovsky (10 Apr 2023) [AI4CE, CLL]

Polarity is all you need to learn and transfer faster
Qingyang Wang, Michael A. Powell, Ali Geisa, Eric W. Bridgeford, Joshua T. Vogelstein (29 Mar 2023)

Forget-free Continual Learning with Soft-Winning SubNetworks
Haeyong Kang, Jaehong Yoon, Sultan Rizky Hikmawan Madjid, Sung Ju Hwang, Chang D. Yoo (27 Mar 2023) [CLL]

Sparse-IFT: Sparse Iso-FLOP Transformations for Maximizing Training Efficiency
Vithursan Thangarasa, Shreyas Saxena, Abhay Gupta, Sean Lie (21 Mar 2023)

Modular Deep Learning
Jonas Pfeiffer, Sebastian Ruder, Ivan Vulić, Edoardo Ponti (22 Feb 2023) [MoMe, OOD]

Considering Layerwise Importance in the Lottery Ticket Hypothesis
Benjamin Vandersmissen, José Oramas (22 Feb 2023)

Quantum Neuron Selection: Finding High Performing Subnetworks With Quantum Algorithms
Tim Whitaker (12 Feb 2023)

Exploiting Sparsity in Pruned Neural Networks to Optimize Large Model Training
Siddharth Singh, A. Bhatele (10 Feb 2023)

Quantum Ridgelet Transform: Winning Lottery Ticket of Neural Networks with Quantum Computation
H. Yamasaki, Sathyawageeswar Subramanian, Satoshi Hayakawa, Sho Sonoda (27 Jan 2023) [MLT]

The Power of Linear Combinations: Learning with Random Convolutions
Paul Gavrikov, J. Keuper (26 Jan 2023)

Break It Down: Evidence for Structural Compositionality in Neural Networks
Michael A. Lepori, Thomas Serre, Ellie Pavlick (26 Jan 2023)

Pruning Before Training May Improve Generalization, Provably
Hongru Yang, Yingbin Liang, Xiaojie Guo, Lingfei Wu, Zhangyang Wang (01 Jan 2023) [MLT]

COLT: Cyclic Overlapping Lottery Tickets for Faster Pruning of Convolutional Neural Networks
Md. Ismail Hossain, Mohammed Rakib, M. M. L. Elahi, Nabeel Mohammed, Shafin Rahman (24 Dec 2022)

Hyperspherical Quantization: Toward Smaller and More Accurate Models
Dan Liu, X. Chen, Chen Ma, Xue Liu (24 Dec 2022) [MQ]

Pruning On-the-Fly: A Recoverable Pruning Method without Fine-tuning
Danyang Liu, Xue Liu (24 Dec 2022)

Lifelong Reinforcement Learning with Modulating Masks
Eseoghene Ben-Iwhiwhu, Saptarshi Nath, Praveen K. Pilly, Soheil Kolouri, Andrea Soltoggio (21 Dec 2022) [CLL, OffRL]

AP: Selective Activation for De-sparsifying Pruned Neural Networks
Shiyu Liu, Rohan Ghosh, Dylan Tan, Mehul Motani (09 Dec 2022) [AAML]

Optimizing Learning Rate Schedules for Iterative Pruning of Deep Neural Networks
Shiyu Liu, Rohan Ghosh, John Tan Chong Min, Mehul Motani (09 Dec 2022)

Are Straight-Through gradients and Soft-Thresholding all you need for Sparse Training?
A. Vanderschueren, Christophe De Vleeschouwer (02 Dec 2022) [MQ]

You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets
Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, ..., Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu (28 Nov 2022) [GNN]

Exploiting the Partly Scratch-off Lottery Ticket for Quantization-Aware Training
Mingliang Xu, Gongrui Nan, Yuxin Zhang, Chia-Wen Lin, Rongrong Ji (12 Nov 2022) [MQ]

An Adversarial Robustness Perspective on the Topology of Neural Networks
Morgane Goibert, Thomas Ricatte, Elvis Dohmatob (04 Nov 2022) [AAML]

Data Level Lottery Ticket Hypothesis for Vision Transformers
Xuan Shen, Zhenglun Kong, Minghai Qin, Peiyan Dong, Geng Yuan, Xin Meng, Hao Tang, Xiaolong Ma, Yanzhi Wang (02 Nov 2022)

Learning Neural Implicit Representations with Surface Signal Parameterizations
Yanran Guan, Andrei Chubarau, Ruby Rao, Derek Nowrouzezahrai (01 Nov 2022) [AI4CE]

Strong Lottery Ticket Hypothesis with ε-perturbation
Zheyang Xiong, Fangshuo Liao, Anastasios Kyrillidis (29 Oct 2022)

LOFT: Finding Lottery Tickets through Filter-wise Training
Qihan Wang, Chen Dun, Fangshuo Liao, C. Jermaine, Anastasios Kyrillidis (28 Oct 2022)

Gradient-based Weight Density Balancing for Robust Dynamic Sparse Training
Mathias Parger, Alexander Ertl, Paul Eibensteiner, J. H. Mueller, Martin Winter, M. Steinberger (25 Oct 2022)

Exclusive Supermask Subnetwork Training for Continual Learning
Prateek Yadav, Joey Tianyi Zhou (18 Oct 2022) [CLL]

AttTrack: Online Deep Attention Transfer for Multi-object Tracking
Keivan Nalaie, Rong Zheng (16 Oct 2022) [VOT]

Parameter-Efficient Masking Networks
Yue Bai, Huan Wang, Xu Ma, Yitian Zhang, Zhiqiang Tao, Yun Fu (13 Oct 2022)

Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask?
Mansheej Paul, F. Chen, Brett W. Larsen, Jonathan Frankle, Surya Ganguli, Gintare Karolina Dziugaite (06 Oct 2022) [UQCV]

Why Random Pruning Is All We Need to Start Sparse
Advait Gadhikar, Sohom Mukherjee, R. Burkholz (05 Oct 2022)

ImpressLearn: Continual Learning via Combined Task Impressions
Dhrupad Bhardwaj, Julia Kempe, Artem Vysogorets, Angel Teng, Evaristus C. Ezekwem (05 Oct 2022) [VLM, CLL]

Sparse Random Networks for Communication-Efficient Federated Learning
Berivan Isik, Francesco Pase, Deniz Gunduz, Tsachy Weissman, M. Zorzi (30 Sep 2022) [FedML]

On the Soft-Subnetwork for Few-shot Class Incremental Learning
Haeyong Kang, Jaehong Yoon, Sultan Rizky Hikmawan Madjid, Sung Ju Hwang, Chang D. Yoo (15 Sep 2022) [CLL]

One-shot Network Pruning at Initialization with Discriminative Image Patches
Yinan Yang, Yu Wang, Yi Ji, Heng Qi, Jien Kato (13 Sep 2022) [VLM]

Learning sparse auto-encoders for green AI image coding
Cyprien Gille, F. Guyard, Marc Antonini, Michel Barlaud (09 Sep 2022)

The Role Of Biology In Deep Learning
Robert Bain (07 Sep 2022)

Improving the Cross-Lingual Generalisation in Visual Question Answering
Farhad Nooralahzadeh, Rico Sennrich (07 Sep 2022)

Lottery Pools: Winning More by Interpolating Tickets without Increasing Training or Inference Cost
Lu Yin, Shiwei Liu, Meng Fang, Tianjin Huang, Vlado Menkovski, Mykola Pechenizkiy (23 Aug 2022)

Semi-supervised classification using a supervised autoencoder for biomedical applications
Cyprien Gille, F. Guyard, Michel Barlaud (22 Aug 2022)

Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks
Chuang Liu, Xueqi Ma, Yinbing Zhan, Liang Ding, Dapeng Tao, Di Lin, Wenbin Hu, Danilo Mandic (18 Jul 2022)

Improving Deep Neural Network Random Initialization Through Neuronal Rewiring
Leonardo F. S. Scabini, B. De Baets, Odemir M. Bruno (17 Jul 2022) [AI4CE]