Efficient Lottery Ticket Finding: Less Data is More
Zhenyu (Allen) Zhang, Xuxi Chen, Tianlong Chen, Zhangyang Wang
arXiv:2106.03225, 6 June 2021

Papers citing "Efficient Lottery Ticket Finding: Less Data is More" (21 of 21 papers shown):
- PERP: Rethinking the Prune-Retrain Paradigm in the Era of LLMs (23 Dec 2023). Max Zimmer, Megi Andoni, Christoph Spiegel, Sebastian Pokutta. [VLM]
- DSD²: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free? (02 Mar 2023). Victor Quétu, Enzo Tartaglione.
- Dynamic Sparse Network for Time Series Classification: Learning What to "see" (19 Dec 2022). Qiao Xiao, Boqian Wu, Yu Zhang, Shiwei Liu, Mykola Pechenizkiy, Elena Mocanu, Decebal Constantin Mocanu. [AI4TS]
- Speeding up NAS with Adaptive Subset Selection (02 Nov 2022). Vishak Prasad, Colin White, P. Jain, Sibasis Nayak, Ganesh Ramakrishnan. [BDL]
- Gradient-based Weight Density Balancing for Robust Dynamic Sparse Training (25 Oct 2022). Mathias Parger, Alexander Ertl, Paul Eibensteiner, J. H. Mueller, Martin Winter, M. Steinberger.
- Advancing Model Pruning via Bi-level Optimization (08 Oct 2022). Yihua Zhang, Yuguang Yao, Parikshit Ram, Pu Zhao, Tianlong Chen, Min-Fong Hong, Yanzhi Wang, Sijia Liu.
- FakeCLR: Exploring Contrastive Learning for Solving Latent Discontinuity in Data-Efficient GANs (18 Jul 2022). Ziqiang Li, Chaoyue Wang, Heliang Zheng, Jing Zhang, Bin Li.
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks (18 Jul 2022). Chuang Liu, Xueqi Ma, Yinbing Zhan, Liang Ding, Dapeng Tao, Bo Du, Wenbin Hu, Danilo Mandic.
- Exploring Lottery Ticket Hypothesis in Spiking Neural Networks (04 Jul 2022). Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, Priyadarshini Panda.
- Convolutional and Residual Networks Provably Contain Lottery Tickets (04 May 2022). R. Burkholz. [UQCV, MLT]
- Most Activation Functions Can Win the Lottery Without Excessive Depth (04 May 2022). R. Burkholz. [MLT]
- The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks (09 Mar 2022). Xin Yu, Thiago Serra, Srikumar Ramalingam, Shandian Zhe.
- The rise of the lottery heroes: why zero-shot pruning is hard (24 Feb 2022). Enzo Tartaglione.
- Sparsity Winning Twice: Better Robust Generalization from More Efficient Training (20 Feb 2022). Tianlong Chen, Zhenyu (Allen) Zhang, Pengju Wang, Santosh Balachandra, Haoyu Ma, Zehao Wang, Zhangyang Wang. [OOD, AAML]
- SHRIMP: Sparser Random Feature Models via Iterative Magnitude Pruning (07 Dec 2021). Yuege Xie, Bobby Shi, Hayden Schaeffer, Rachel A. Ward.
- Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot? (01 Jul 2021). Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, ..., Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, Yanzhi Wang.
- Playing Lottery Tickets with Vision and Language (23 Apr 2021). Zhe Gan, Yen-Chun Chen, Linjie Li, Tianlong Chen, Yu Cheng, Shuohang Wang, Jingjing Liu, Lijuan Wang, Zicheng Liu. [VLM]
- Recent Advances on Neural Network Pruning at Initialization (11 Mar 2021). Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu. [CVBM]
- SCOP: Scientific Control for Reliable Neural Network Pruning (21 Oct 2020). Yehui Tang, Yunhe Wang, Yixing Xu, Dacheng Tao, Chunjing Xu, Chao Xu, Chang Xu. [AAML]
- The Lottery Ticket Hypothesis for Pre-trained BERT Networks (23 Jul 2020). Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin.
- Comparing Rewinding and Fine-tuning in Neural Network Pruning (05 Mar 2020). Alex Renda, Jonathan Frankle, Michael Carbin.