ResearchTrend.AI
Proving the Lottery Ticket Hypothesis: Pruning is All You Need (arXiv 2002.00585)

3 February 2020
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir

Papers citing "Proving the Lottery Ticket Hypothesis: Pruning is All You Need"

50 / 182 papers shown
Towards efficient feature sharing in MIMO architectures
Rémy Sun, Alexandre Ramé, Clément Masson, Nicolas Thome, Matthieu Cord
20 May 2022

Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey
Paul Wimmer, Jens Mehnert, Alexandru Paul Condurache
17 May 2022

Analyzing Lottery Ticket Hypothesis from PAC-Bayesian Theory Perspective
Keitaro Sakamoto, Issei Sato
15 May 2022

Convolutional and Residual Networks Provably Contain Lottery Tickets
R. Burkholz
04 May 2022

Most Activation Functions Can Win the Lottery Without Excessive Depth
R. Burkholz
04 May 2022

MIME: Adapting a Single Neural Network for Multi-task Inference with Memory-efficient Dynamic Pruning
Abhiroop Bhattacharjee, Yeshwanth Venkatesha, Abhishek Moitra, Priyadarshini Panda
11 Apr 2022

LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification
Sharath Girish, Kamal Gupta, Saurabh Singh, Abhinav Shrivastava
06 Apr 2022
On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks
Hongru Yang, Zhangyang Wang
27 Mar 2022

Playing Lottery Tickets in Style Transfer Models
Meihao Kong, Jing Huo, Wenbin Li, Jing Wu, Yu-kun Lai, Yang Gao
25 Mar 2022

Interspace Pruning: Using Adaptive Filter Representations to Improve Training of Sparse CNNs
Paul Wimmer, Jens Mehnert, Alexandru Paul Condurache
15 Mar 2022

Exploiting Low-Rank Tensor-Train Deep Neural Networks Based on Riemannian Gradient Descent With Illustrations of Speech Processing
Jun Qi, Chao-Han Huck Yang, Pin-Yu Chen, Javier Tejedor
11 Mar 2022

The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks
Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe
09 Mar 2022

Provable and Efficient Continual Representation Learning
Yingcong Li, Mingchen Li, M. Salman Asif, Samet Oymak
03 Mar 2022

Extracting Effective Subnetworks with Gumbel-Softmax
Robin Dupont, M. Alaoui, H. Sahbi, A. Lebois
25 Feb 2022
The rise of the lottery heroes: why zero-shot pruning is hard
Enzo Tartaglione
24 Feb 2022

Rare Gems: Finding Lottery Tickets at Initialization
Kartik K. Sreenivasan, Jy-yong Sohn, Liu Yang, Matthew Grinde, Alliot Nagle, Hongyi Wang, Eric P. Xing, Kangwook Lee, Dimitris Papailiopoulos
24 Feb 2022

Bit-wise Training of Neural Network Weights
Cristian Ivan
19 Feb 2022

DataMUX: Data Multiplexing for Neural Networks
Vishvak Murahari, Carlos E. Jimenez, Runzhe Yang, Karthik Narasimhan
18 Feb 2022

A Study of Designing Compact Audio-Visual Wake Word Spotting System Based on Iterative Fine-Tuning in Neural Network Pruning
Hengshun Zhou, Jun Du, Chao-Han Huck Yang, Shifu Xiong, Chin-Hui Lee
17 Feb 2022

Evolving Neural Networks with Optimal Balance between Information Flow and Connections Cost
A. Khalili, A. Bouchachia
12 Feb 2022

On The Energy Statistics of Feature Maps in Pruning of Neural Networks with Skip-Connections
Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh
26 Jan 2022
Examining and Mitigating the Impact of Crossbar Non-idealities for Accurate Implementation of Sparse Deep Neural Networks
Abhiroop Bhattacharjee, Lakshya Bhatnagar, Priyadarshini Panda
13 Jan 2022

Exploiting Hybrid Models of Tensor-Train Networks for Spoken Command Recognition
Jun Qi, Javier Tejedor
11 Jan 2022

SHRIMP: Sparser Random Feature Models via Iterative Magnitude Pruning
Yuege Xie, Bobby Shi, Hayden Schaeffer, Rachel A. Ward
07 Dec 2021

i-SpaSP: Structured Neural Pruning via Sparse Signal Recovery
Cameron R. Wolfe, Anastasios Kyrillidis
07 Dec 2021

Equal Bits: Enforcing Equally Distributed Binary Network Weights
Yun-qiang Li, S. Pintea, Jan van Gemert
02 Dec 2021

Pixelated Butterfly: Simple and Efficient Sparse Training for Neural Network Models
Tri Dao, Beidi Chen, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, Christopher Ré
30 Nov 2021

Plant 'n' Seek: Can You Find the Winning Ticket?
Jonas Fischer, R. Burkholz
22 Nov 2021
On the Existence of Universal Lottery Tickets
R. Burkholz, Nilanjana Laha, Rajarshi Mukherjee, Alkis Gotovos
22 Nov 2021

Learning Pruned Structure and Weights Simultaneously from Scratch: an Attention based Approach
Qisheng He, Weisong Shi, Ming Dong
01 Nov 2021

RGP: Neural Network Pruning through Its Regular Graph Structure
Zhuangzhi Chen, Jingyang Xiang, Yao Lu, Qi Xuan, Xiaoniu Yang
28 Oct 2021

Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks
Yonggan Fu, Qixuan Yu, Yang Zhang, Shan-Hung Wu, Ouyang Xu, David D. Cox, Yingyan Lin
26 Oct 2021

Lottery Tickets with Nonzero Biases
Jonas Fischer, Advait Gadhikar, R. Burkholz
21 Oct 2021

Finding Everything within Random Binary Networks
Kartik K. Sreenivasan, Shashank Rajput, Jy-yong Sohn, Dimitris Papailiopoulos
18 Oct 2021

S-Cyc: A Learning Rate Schedule for Iterative Pruning of ReLU-based Networks
Shiyu Liu, Chong Min John Tan, Mehul Motani
17 Oct 2021
Composable Sparse Fine-Tuning for Cross-Lingual Transfer
Alan Ansell, Edoardo Ponti, Anna Korhonen, Ivan Vulić
14 Oct 2021

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong
12 Oct 2021

Efficient Visual Recognition with Deep Neural Networks: A Survey on Recent Advances and New Directions
Yang Wu, Dingheng Wang, Xiaotong Lu, Fan Yang, Guoqi Li, W. Dong, Jianbo Shi
30 Aug 2021

Membership Inference Attacks on Lottery Ticket Networks
Aadesh Bagmar, Shishira R. Maiya, Shruti Bidwalka, Amol Deshpande
07 Aug 2021

Spartus: A 9.4 TOp/s FPGA-based LSTM Accelerator Exploiting Spatio-Temporal Sparsity
Chang Gao, T. Delbruck, Shih-Chii Liu
04 Aug 2021
How much pre-training is enough to discover a good subnetwork?
Cameron R. Wolfe, Fangshuo Liao, Qihan Wang, Junhyung Lyle Kim, Anastasios Kyrillidis
31 Jul 2021

A Lottery Ticket Hypothesis Framework for Low-Complexity Device-Robust Neural Acoustic Scene Classification
Hao Yen, Chao-Han Huck Yang, Hu Hu, Sabato Marco Siniscalchi, Qing Wang, ..., Yuanjun Zhao, Yuzhong Wu, Yannan Wang, Jun Du, Chin-Hui Lee
03 Jul 2021

Pruning Randomly Initialized Neural Networks with Iterative Randomization
Daiki Chijiwa, Shin'ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, T. Inoue
17 Jun 2021

A Random CNN Sees Objects: One Inductive Bias of CNN and Its Applications
Yun Cao, Jianxin Wu
17 Jun 2021

PARP: Prune, Adjust and Re-Prune for Self-Supervised Speech Recognition
Cheng-I Jeff Lai, Yang Zhang, Alexander H. Liu, Shiyu Chang, Yi-Lun Liao, Yung-Sung Chuang, Kaizhi Qian, Sameer Khurana, David D. Cox, James R. Glass
10 Jun 2021
GANs Can Play Lottery Tickets Too
Xuxi Chen, Zhenyu Zhang, Yongduo Sui, Tianlong Chen
31 May 2021

A Probabilistic Approach to Neural Network Pruning
Xin-Yao Qian, Diego Klabjan
20 May 2021

Model Pruning Based on Quantified Similarity of Feature Maps
Zidu Wang, Xue-jun Liu, Long Huang, Yuxiang Chen, Yufei Zhang, Zhikang Lin, Rui Wang
13 May 2021

Playing Lottery Tickets with Vision and Language
Zhe Gan, Yen-Chun Chen, Linjie Li, Tianlong Chen, Yu Cheng, Shuohang Wang, Jingjing Liu, Lijuan Wang, Zicheng Liu
23 Apr 2021

Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network
James Diffenderfer, B. Kailkhura
17 Mar 2021
Page 1 of 4