Optimal Lottery Tickets via SubsetSum: Logarithmic Over-Parameterization is Sufficient

14 June 2020
Ankit Pensia, Shashank Rajput, Alliot Nagle, Harit Vishwakarma, Dimitris Papailiopoulos

Papers citing "Optimal Lottery Tickets via SubsetSum: Logarithmic Over-Parameterization is Sufficient"

Showing 45 of 95 citing papers. Each entry lists the title, authors, community tags (where assigned), citation count, and date.
  • i-SpaSP: Structured Neural Pruning via Sparse Signal Recovery · Cameron R. Wolfe, Anastasios Kyrillidis · 1 citation · 07 Dec 2021
  • Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models · Tri Dao, Beidi Chen, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, Christopher Ré · 79 citations · 30 Nov 2021
  • Plant 'n' Seek: Can You Find the Winning Ticket? · Jonas Fischer, R. Burkholz · 21 citations · 22 Nov 2021
  • On the Existence of Universal Lottery Tickets · R. Burkholz, Nilanjana Laha, Rajarshi Mukherjee, Alkis Gotovos · [UQCV] · 32 citations · 22 Nov 2021
  • Lottery Tickets with Nonzero Biases · Jonas Fischer, Advait Gadhikar, R. Burkholz · 6 citations · 21 Oct 2021
  • Finding Everything within Random Binary Networks · Kartik K. Sreenivasan, Shashank Rajput, Jy-yong Sohn, Dimitris Papailiopoulos · 10 citations · 18 Oct 2021
  • Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm · Shaoyi Huang, Dongkuan Xu, Ian En-Hsu Yen, Yijue Wang, Sung-En Chang, ..., Shiyang Chen, Mimi Xie, Sanguthevar Rajasekaran, Hang Liu, Caiwen Ding · [CLL, VLM] · 32 citations · 15 Oct 2021
  • How much pre-training is enough to discover a good subnetwork? · Cameron R. Wolfe, Fangshuo Liao, Qihan Wang, Junhyung Lyle Kim, Anastasios Kyrillidis · 3 citations · 31 Jul 2021
  • Pruning Randomly Initialized Neural Networks with Iterative Randomization · Daiki Chijiwa, Shin'ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, T. Inoue · 25 citations · 17 Jun 2021
  • GANs Can Play Lottery Tickets Too · Xuxi Chen, Zhenyu Zhang, Yongduo Sui, Tianlong Chen · [GAN] · 58 citations · 31 May 2021
  • A Probabilistic Approach to Neural Network Pruning · Xin-Yao Qian, Diego Klabjan · 17 citations · 20 May 2021
  • Playing Lottery Tickets with Vision and Language · Zhe Gan, Yen-Chun Chen, Linjie Li, Tianlong Chen, Yu Cheng, Shuohang Wang, Jingjing Liu, Lijuan Wang, Zicheng Liu · [VLM] · 55 citations · 23 Apr 2021
  • Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network · James Diffenderfer, B. Kailkhura · [MQ] · 75 citations · 17 Mar 2021
  • Recent Advances on Neural Network Pruning at Initialization · Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu · [CVBM] · 66 citations · 11 Mar 2021
  • MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks · Alexandre Ramé, Rémy Sun, Matthieu Cord · [UQCV] · 60 citations · 10 Mar 2021
  • Slot Machines: Discovering Winning Combinations of Random Weights in Neural Networks · Maxwell Mbabilla Aladago, Lorenzo Torresani · 10 citations · 16 Jan 2021
  • Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks · Xiangyu Chang, Yingcong Li, Samet Oymak, Christos Thrampoulidis · 50 citations · 16 Dec 2020
  • Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough · Mao Ye, Lemeng Wu, Qiang Liu · 17 citations · 29 Oct 2020
  • A Gradient Flow Framework For Analyzing Network Pruning · Ekdeep Singh Lubana, Robert P. Dick · 52 citations · 24 Sep 2020
  • Lottery Tickets in Linear Models: An Analysis of Iterative Magnitude Pruning · Bryn Elesedy, Varun Kanade, Yee Whye Teh · 30 citations · 16 Jul 2020
  • Logarithmic Pruning is All You Need · Laurent Orseau, Marcus Hutter, Omar Rivasplata · 88 citations · 22 Jun 2020
  • On the Transferability of Winning Tickets in Non-Natural Image Datasets · M. Sabatelli, M. Kestemont, Pierre Geurts · 15 citations · 11 May 2020
  • What is the State of Neural Network Pruning? · Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag · 1,052 citations · 06 Mar 2020
  • Proving the Lottery Ticket Hypothesis: Pruning is All You Need · Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir · 275 citations · 03 Feb 2020
  • Linear Mode Connectivity and the Lottery Ticket Hypothesis · Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin · [MoMe] · 619 citations · 11 Dec 2019
  • The Search for Sparse, Robust Neural Networks · J. Cosentino, Federico Zaiter, Dan Pei, Jun Zhu · [AAML, OOD] · 18 citations · 05 Dec 2019
  • What's Hidden in a Randomly Weighted Neural Network? · Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, Mohammad Rastegari · 357 citations · 29 Nov 2019
  • Pruning from Scratch · Yulong Wang, Xiaolu Zhang, Lingxi Xie, Jun Zhou, Hang Su, Bo Zhang, Xiaolin Hu · 194 citations · 27 Sep 2019
  • Energy and Policy Considerations for Deep Learning in NLP · Emma Strubell, Ananya Ganesh, Andrew McCallum · 2,657 citations · 05 Jun 2019
  • Exploring Structural Sparsity of Deep Networks via Inverse Scale Spaces · Yanwei Fu, Chen Liu, Donghao Li, Zuyuan Zhong, Xinwei Sun, Jinshan Zeng, Yuan Yao · 10 citations · 23 May 2019
  • Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask · Hattie Zhou, Janice Lan, Rosanne Liu, J. Yosinski · [UQCV] · 387 citations · 03 May 2019
  • The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks · Jonathan Frankle, Michael Carbin · 3,473 citations · 09 Mar 2018
  • AMC: AutoML for Model Compression and Acceleration on Mobile Devices · Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li Li, Song Han · 1,347 citations · 10 Feb 2018
  • A Survey of Model Compression and Acceleration for Deep Neural Networks · Yu Cheng, Duo Wang, Pan Zhou, Zhang Tao · 1,095 citations · 23 Oct 2017
  • To prune, or not to prune: exploring the efficacy of pruning for model compression · Michael Zhu, Suyog Gupta · 1,276 citations · 05 Oct 2017
  • Channel Pruning for Accelerating Very Deep Neural Networks · Yihui He, Xiangyu Zhang, Jian Sun · 2,525 citations · 19 Jul 2017
  • Trained Ternary Quantization · Chenzhuo Zhu, Song Han, Huizi Mao, W. Dally · [MQ] · 1,035 citations · 04 Dec 2016
  • Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations · Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio · [MQ] · 1,866 citations · 22 Sep 2016
  • Pruning Filters for Efficient ConvNets · Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf · [3DPC] · 3,697 citations · 31 Aug 2016
  • SGDR: Stochastic Gradient Descent with Warm Restarts · I. Loshchilov, Frank Hutter · [ODL] · 8,130 citations · 13 Aug 2016
  • Learning Structured Sparsity in Deep Neural Networks · W. Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li · 2,339 citations · 12 Aug 2016
  • Quantized Convolutional Neural Networks for Mobile Devices · Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, Jian Cheng · [MQ] · 1,166 citations · 21 Dec 2015
  • Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding · Song Han, Huizi Mao, W. Dally · [3DGS] · 8,842 citations · 01 Oct 2015
  • Learning both Weights and Connections for Efficient Neural Networks · Song Han, Jeff Pool, J. Tran, W. Dally · [CVBM] · 6,681 citations · 08 Jun 2015
  • Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification · Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun · [VLM] · 18,625 citations · 06 Feb 2015