ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

9 March 2018
Jonathan Frankle
Michael Carbin

Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"

50 / 2,031 papers shown
An Operator Theoretic View on Pruning Deep Neural Networks
William T. Redman
M. Fonoberova
Ryan Mohr
Yannis G. Kevrekidis
Igor Mezić
82
17
0
28 Oct 2021
Characterizing and Taming Resolution in Convolutional Neural Networks
Eddie Q. Yan
Liang Luo
Luis Ceze
55
0
0
28 Oct 2021
Meta-Learning Sparse Implicit Neural Representations
Jaehoon Lee
Jihoon Tack
Namhoon Lee
Jinwoo Shin
98
48
0
27 Oct 2021
Learning Graph Cellular Automata
Daniele Grattarola
L. Livi
Cesare Alippi
GNN
67
31
0
27 Oct 2021
Diversity Enhanced Active Learning with Strictly Proper Scoring Rules
Wei Tan
Lan Du
Wray Buntine
66
32
0
27 Oct 2021
Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks
Yonggan Fu
Qixuan Yu
Yang Zhang
Shan-Hung Wu
Ouyang Xu
David D. Cox
Yingyan Lin
AAML OOD
123
30
0
26 Oct 2021
MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
Geng Yuan
Xiaolong Ma
Wei Niu
Zhengang Li
Zhenglun Kong
...
Minghai Qin
Bin Ren
Yanzhi Wang
Sijia Liu
Xue Lin
97
96
0
26 Oct 2021
ZerO Initialization: Initializing Neural Networks with only Zeros and Ones
Jiawei Zhao
Florian Schäfer
Anima Anandkumar
98
26
0
25 Oct 2021
ConformalLayers: A non-linear sequential neural network with associative layers
Zhen Wan
Zhuoyuan Mao
C. N. Vasconcelos
51
3
0
23 Oct 2021
When to Prune? A Policy towards Early Structural Pruning
Maying Shen
Pavlo Molchanov
Hongxu Yin
J. Álvarez
VLM
77
56
0
22 Oct 2021
Probabilistic fine-tuning of pruning masks and PAC-Bayes self-bounded learning
Soufiane Hayou
Bo He
Gintare Karolina Dziugaite
55
2
0
22 Oct 2021
Conditioning of Random Feature Matrices: Double Descent and Generalization Error
Zhijun Chen
Hayden Schaeffer
109
12
0
21 Oct 2021
Lottery Tickets with Nonzero Biases
Jonas Fischer
Advait Gadhikar
R. Burkholz
59
6
0
21 Oct 2021
HALP: Hardware-Aware Latency Pruning
Maying Shen
Hongxu Yin
Pavlo Molchanov
Lei Mao
Jianna Liu
J. Álvarez
VLM
73
14
0
20 Oct 2021
SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning
Manuel Nonnenmacher
Thomas Pfeil
Ingo Steinwart
David Reeb
57
27
0
19 Oct 2021
Finding Everything within Random Binary Networks
Kartik K. Sreenivasan
Shashank Rajput
Jy-yong Sohn
Dimitris Papailiopoulos
36
10
0
18 Oct 2021
S-Cyc: A Learning Rate Schedule for Iterative Pruning of ReLU-based Networks
Shiyu Liu
Chong Min John Tan
Mehul Motani
CLL
61
4
0
17 Oct 2021
GradSign: Model Performance Inference with Theoretical Insights
Zhihao Zhang
Zhihao Jia
80
24
0
16 Oct 2021
A Unified Speaker Adaptation Approach for ASR
Yingzhu Zhao
Chongjia Ni
C. Leung
Shafiq Joty
Chng Eng Siong
B. Ma
CLL
107
9
0
16 Oct 2021
Fire Together Wire Together: A Dynamic Pruning Approach with Self-Supervised Mask Prediction
Sara Elkerdawy
Mostafa Elhoushi
Hong Zhang
Nilanjan Ray
CVBM
84
41
0
15 Oct 2021
Sparse Progressive Distillation: Resolving Overfitting under Pretrain-and-Finetune Paradigm
Shaoyi Huang
Dongkuan Xu
Ian En-Hsu Yen
Yijue Wang
Sung-En Chang
...
Shiyang Chen
Mimi Xie
Sanguthevar Rajasekaran
Hang Liu
Caiwen Ding
CLL VLM
76
32
0
15 Oct 2021
Training Deep Neural Networks with Joint Quantization and Pruning of Weights and Activations
Xinyu Zhang
Ian Colbert
Ken Kreutz-Delgado
Srinjoy Das
MQ
98
12
0
15 Oct 2021
Joint Channel and Weight Pruning for Model Acceleration on Mobile Devices
Tianli Zhao
Xi Sheryl Zhang
Wentao Zhu
Jiaxing Wang
Sen Yang
Ji Liu
Jian Cheng
79
2
0
15 Oct 2021
Composable Sparse Fine-Tuning for Cross-Lingual Transfer
Alan Ansell
Edoardo Ponti
Anna Korhonen
Ivan Vulić
CLLMoE
154
143
0
14 Oct 2021
Plug-Tagger: A Pluggable Sequence Labeling Framework Using Language Models
Xin Zhou
Ruotian Ma
Tao Gui
Y. Tan
Qi Zhang
Xuanjing Huang
VLM
68
5
0
14 Oct 2021
The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks
R. Entezari
Hanie Sedghi
O. Saukh
Behnam Neyshabur
MoMe
102
238
0
12 Oct 2021
Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
Shuai Zhang
Meng Wang
Sijia Liu
Pin-Yu Chen
Jinjun Xiong
UQCV MLT
76
13
0
12 Oct 2021
A comprehensive review of Binary Neural Network
Chunyu Yuan
S. Agaian
MQ
127
103
0
11 Oct 2021
Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks
Ghada Sokar
Decebal Constantin Mocanu
Mykola Pechenizkiy
CLL
94
8
0
11 Oct 2021
ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training
Hui-Po Wang
Sebastian U. Stich
Yang He
Mario Fritz
FedML AI4CE
73
50
0
11 Oct 2021
Mining the Weights Knowledge for Optimizing Neural Network Structures
Mengqiao Han
Xiabi Liu
Zhaoyang Hai
Xin Duan
15
1
0
11 Oct 2021
SuperShaper: Task-Agnostic Super Pre-training of BERT Models with Variable Hidden Dimensions
Vinod Ganesan
Gowtham Ramesh
Pratyush Kumar
63
9
0
10 Oct 2021
Does Preprocessing Help Training Over-parameterized Neural Networks?
Zhao Song
Shuo Yang
Ruizhe Zhang
98
50
0
09 Oct 2021
Weight Evolution: Improving Deep Neural Networks Training through Evolving Inferior Weight Values
Zhenquan Lin
K. Guo
Xiaofen Xing
Xiangmin Xu
ODL
49
1
0
09 Oct 2021
Performance optimizations on deep noise suppression models
Jerry Chee
Sebastian Braun
Vishak Gopal
Ross Cutler
38
0
0
08 Oct 2021
FRL: Federated Rank Learning
Hamid Mozaffari
Virat Shejwalkar
Amir Houmansadr
FedML
130
11
0
08 Oct 2021
Towards Sample-efficient Apprenticeship Learning from Suboptimal Demonstration
Letian Chen
Rohan R. Paleja
Matthew C. Gombolay
38
2
0
08 Oct 2021
End-to-End Supermask Pruning: Learning to Prune Image Captioning Models
J. Tan
C. Chan
Joon Huang Chuah
VLM
124
16
0
07 Oct 2021
Universality of Winning Tickets: A Renormalization Group Perspective
William T. Redman
Tianlong Chen
Zhangyang Wang
Akshunna S. Dogra
UQCV
92
7
0
07 Oct 2021
Efficient and Private Federated Learning with Partially Trainable Networks
Hakim Sidahmed
Zheng Xu
Ankush Garg
Yuan Cao
Mingqing Chen
FedML
122
13
0
06 Oct 2021
On the Interplay Between Sparsity, Naturalness, Intelligibility, and Prosody in Speech Synthesis
Cheng-I Jeff Lai
Erica Cooper
Yang Zhang
Shiyu Chang
Kaizhi Qian
...
Yung-Sung Chuang
Alexander H. Liu
Junichi Yamagishi
David D. Cox
James R. Glass
64
6
0
04 Oct 2021
Induction, Popper, and machine learning
Bruce Nielson
Daniel C. Elton
AI4CE
18
2
0
02 Oct 2021
Learning Compact Representations of Neural Networks using DiscriminAtive Masking (DAM)
Jie Bu
Arka Daw
M. Maruf
Anuj Karpatne
107
5
0
01 Oct 2021
Powerpropagation: A sparsity inducing weight reparameterisation
Jonathan Richard Schwarz
Siddhant M. Jayakumar
Razvan Pascanu
P. Latham
Yee Whye Teh
192
55
0
01 Oct 2021
Prune Your Model Before Distill It
Jinhyuk Park
Albert No
VLM
132
27
0
30 Sep 2021
RED++ : Data-Free Pruning of Deep Neural Networks via Input Splitting and Output Merging
Edouard Yvinec
Arnaud Dapogny
Matthieu Cord
Kévin Bailly
115
17
0
30 Sep 2021
Recent Advances of Continual Learning in Computer Vision: An Overview
Haoxuan Qu
Hossein Rahmani
Li Xu
Bryan M. Williams
Jun Liu
VLM CLL
136
77
0
23 Sep 2021
Neural network relief: a pruning algorithm based on neural activity
Aleksandr Dekhovich
David Tax
M. Sluiter
Miguel A. Bessa
115
11
0
22 Sep 2021
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis
Zeyuan Yin
Ye Yuan
Panfeng Guo
Pan Zhou
FedML
65
7
0
22 Sep 2021
Reproducibility Study: Comparing Rewinding and Fine-tuning in Neural Network Pruning
Szymon Mikler
AAML
32
2
0
20 Sep 2021