Proving the Lottery Ticket Hypothesis: Pruning is All You Need

3 February 2020
Eran Malach
Gilad Yehudai
Shai Shalev-Shwartz
Ohad Shamir
ArXiv (abs) · PDF · HTML

Papers citing "Proving the Lottery Ticket Hypothesis: Pruning is All You Need"

50 / 182 papers shown
FAM: fast adaptive federated meta-learning
Indrajeet Kumar Sinha
Shekhar Verma
Krishna Pratap Singh
FedML
70
0
0
26 Aug 2023
An Intentional Forgetting-Driven Self-Healing Method For Deep Reinforcement Learning Systems
Ahmed Haj Yahmed
Rached Bouchoucha
Houssem Ben Braiek
Foutse Khomh
CLL, AI4CE
59
0
0
23 Aug 2023
The Snowflake Hypothesis: Training Deep GNN with One Node One Receptive field
Kun Wang
Guohao Li
Shilong Wang
Guibin Zhang
Kaidi Wang
Yang You
Xiaojiang Peng
Yuxuan Liang
Yang Wang
68
9
0
19 Aug 2023
Sparse Binary Transformers for Multivariate Time Series Modeling
Matt Gorbett
Hossein Shirazi
I. Ray
AI4TS
88
14
0
09 Aug 2023
Quantifying lottery tickets under label noise: accuracy, calibration, and complexity
V. Arora
Daniele Irto
Sebastian Goldt
G. Sanguinetti
88
2
0
21 Jun 2023
Representation and decomposition of functions in DAG-DNNs and structural network pruning
Wonjun Hwang
52
1
0
16 Jun 2023
Implicit Compressibility of Overparametrized Neural Networks Trained with Heavy-Tailed SGD
Yijun Wan
Melih Barsbey
Milad Sefidgaran
Umut Simsekli
68
1
0
13 Jun 2023
Biologically-Motivated Learning Model for Instructed Visual Processing
R. Abel
S. Ullman
60
0
0
04 Jun 2023
Generalization Bounds for Magnitude-Based Pruning via Sparse Matrix Sketching
E. Guha
Prasanjit Dubey
X. Huo
MLT
56
1
0
30 May 2023
Evolving Connectivity for Recurrent Spiking Neural Networks
Guan-Bo Wang
Yuhao Sun
Sijie Cheng
Sen Song
51
5
0
28 May 2023
Pruning at Initialization -- A Sketching Perspective
Noga Bar
Raja Giryes
112
1
0
27 May 2023
Understanding Sparse Neural Networks from their Topology via Multipartite Graph Representations
Elia Cunegatti
Matteo Farina
Doina Bucur
Giovanni Iacca
85
1
0
26 May 2023
Learning to Act through Evolution of Neural Diversity in Random Neural Networks
J. Pedersen
S. Risi
53
2
0
25 May 2023
Probabilistic Modeling: Proving the Lottery Ticket Hypothesis in Spiking Neural Network
Man Yao
Yu-Liang Chou
Guangshe Zhao
Xiawu Zheng
Yonghong Tian
Boxing Xu
Guoqi Li
68
4
0
20 May 2023
Rethinking Graph Lottery Tickets: Graph Sparsity Matters
Bo Hui
Jocelyn M Mora
Adrian Dalca
I. Aganj
110
24
0
03 May 2023
Randomly Initialized Subnetworks with Iterative Weight Recycling
Matt Gorbett
L. D. Whitley
68
4
0
28 Mar 2023
ExplainFix: Explainable Spatially Fixed Deep Networks
Alex Gaudio
Christos Faloutsos
A. Smailagic
P. Costa
A. Campilho
FAtt
65
3
0
18 Mar 2023
DSD$^2$: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?
Victor Quétu
Enzo Tartaglione
85
7
0
02 Mar 2023
Considering Layerwise Importance in the Lottery Ticket Hypothesis
Benjamin Vandersmissen
José Oramas
62
1
0
22 Feb 2023
Workload-Balanced Pruning for Sparse Spiking Neural Networks
Ruokai Yin
Youngeun Kim
Yuhang Li
Abhishek Moitra
Nitin Satpute
Anna Hambitzer
Priyadarshini Panda
93
21
0
13 Feb 2023
Quantum Neuron Selection: Finding High Performing Subnetworks With Quantum Algorithms
Tim Whitaker
62
1
0
12 Feb 2023
Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks
Shuai Zhang
Ming Wang
Pin-Yu Chen
Sijia Liu
Songtao Lu
Miaoyuan Liu
MLT
108
17
0
06 Feb 2023
Quantum Ridgelet Transform: Winning Lottery Ticket of Neural Networks with Quantum Computation
H. Yamasaki
Sathyawageeswar Subramanian
Satoshi Hayakawa
Sho Sonoda
MLT
70
4
0
27 Jan 2023
Voting from Nearest Tasks: Meta-Vote Pruning of Pre-trained Models for Downstream Tasks
Haiyan Zhao
Tianyi Zhou
Guodong Long
Jing Jiang
Chengqi Zhang
63
0
0
27 Jan 2023
Pruning Before Training May Improve Generalization, Provably
Hongru Yang
Yingbin Liang
Xiaojie Guo
Lingfei Wu
Zhangyang Wang
MLT
58
2
0
01 Jan 2023
Publishing Efficient On-device Models Increases Adversarial Vulnerability
Sanghyun Hong
Nicholas Carlini
Alexey Kurakin
AAML
70
3
0
28 Dec 2022
AP: Selective Activation for De-sparsifying Pruned Neural Networks
Shiyu Liu
Rohan Ghosh
Dylan Tan
Mehul Motani
AAML
55
0
0
09 Dec 2022
Optimizing Learning Rate Schedules for Iterative Pruning of Deep Neural Networks
Shiyu Liu
Rohan Ghosh
John Tan Chong Min
Mehul Motani
77
0
0
09 Dec 2022
LU decomposition and Toeplitz decomposition of a neural network
Yucong Liu
Simiao Jiao
Lek-Heng Lim
44
7
0
25 Nov 2022
Finding Skill Neurons in Pre-trained Transformer-based Language Models
Xiaozhi Wang
Kaiyue Wen
Zhengyan Zhang
Lei Hou
Zhiyuan Liu
Juanzi Li
MILM, MoE
88
52
0
14 Nov 2022
Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing
Yonggan Fu
Yang Zhang
Kaizhi Qian
Zhifan Ye
Zhongzhi Yu
Cheng-I Jeff Lai
Yingyan Lin
168
9
0
02 Nov 2022
Strong Lottery Ticket Hypothesis with $\varepsilon$--perturbation
Zheyang Xiong
Fangshuo Liao
Anastasios Kyrillidis
61
2
0
29 Oct 2022
LOFT: Finding Lottery Tickets through Filter-wise Training
Qihan Wang
Chen Dun
Fangshuo Liao
C. Jermaine
Anastasios Kyrillidis
69
3
0
28 Oct 2022
Approximating Continuous Convolutions for Deep Network Compression
Theo W. Costain
V. Prisacariu
66
0
0
17 Oct 2022
Parameter-Efficient Masking Networks
Yue Bai
Huan Wang
Xu Ma
Yitian Zhang
Zhiqiang Tao
Yun Fu
67
10
0
13 Oct 2022
Why Random Pruning Is All We Need to Start Sparse
Advait Gadhikar
Sohom Mukherjee
R. Burkholz
96
21
0
05 Oct 2022
Neural Network Panning: Screening the Optimal Sparse Network Before Training
Xiatao Kang
P. Li
Jiayi Yao
Chengxi Li
VLM
45
1
0
27 Sep 2022
Random Fourier Features for Asymmetric Kernels
Ming-qian He
Fan He
Fanghui Liu
Xiaolin Huang
64
3
0
18 Sep 2022
Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)
Zhenyu Zhu
Fanghui Liu
Grigorios G. Chrysos
Volkan Cevher
104
21
0
15 Sep 2022
Generalization Properties of NAS under Activation and Skip Connection Search
Zhenyu Zhu
Fanghui Liu
Grigorios G. Chrysos
Volkan Cevher
AI4CE
90
17
0
15 Sep 2022
One-shot Network Pruning at Initialization with Discriminative Image Patches
Yinan Yang
Yu Wang
Yi Ji
Heng Qi
Jien Kato
VLM
92
4
0
13 Sep 2022
Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints
Jose Gallego-Posada
Juan Ramirez
Akram Erraqabi
Yoshua Bengio
Simon Lacoste-Julien
152
22
0
08 Aug 2022
To update or not to update? Neurons at equilibrium in deep models
Andrea Bragagnolo
Enzo Tartaglione
Marco Grangetto
77
11
0
19 Jul 2022
The Lottery Ticket Hypothesis for Self-attention in Convolutional Neural Network
Zhongzhan Huang
Senwei Liang
Mingfu Liang
Wei He
Haizhao Yang
Liang Lin
72
9
0
16 Jul 2022
PRANC: Pseudo RAndom Networks for Compacting deep models
Parsa Nooralinejad
Ali Abbasi
Soroush Abbasi Koohpayegani
Kossar Pourahmadi Meibodi
Rana Muhammad Shahroz Khan
Soheil Kolouri
Hamed Pirsiavash
DD
99
0
0
16 Jun 2022
Embarrassingly Parallel Independent Training of Multi-Layer Perceptrons with Heterogeneous Architectures
F. Farias
Teresa B Ludermir
C. B. Filho
44
2
0
14 Jun 2022
PAC-Net: A Model Pruning Approach to Inductive Transfer Learning
Sanghoon Myung
I. Huh
Wonik Jang
Jae Myung Choe
Jisu Ryu
Daesin Kim
Kee-Eung Kim
C. Jeong
60
13
0
12 Jun 2022
A Theoretical Understanding of Neural Network Compression from Sparse Linear Approximation
Wenjing Yang
G. Wang
Jie Ding
Yuhong Yang
MLT
67
7
0
11 Jun 2022
A General Framework For Proving The Equivariant Strong Lottery Ticket Hypothesis
Damien Ferbach
Christos Tsirigotis
Gauthier Gidel
Avishek Bose
78
17
0
09 Jun 2022
Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks
Daiki Chijiwa
Shin'ya Yamaguchi
Atsutoshi Kumagai
Yasutoshi Ida
80
9
0
31 May 2022