ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

9 March 2018
Jonathan Frankle
Michael Carbin
arXiv: 1803.03635

Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"

50 / 2,031 papers shown
Weight Reparametrization for Budget-Aware Network Pruning
Robin Dupont, H. Sahbi, Guillaume Michel
08 Jul 2021

Collaboration of Experts: Achieving 80% Top-1 Accuracy on ImageNet with 100M FLOPs [MoE]
Yikang Zhang, Zhuo Chen, Zhaobai Zhong
08 Jul 2021

Immunization of Pruning Attack in DNN Watermarking Using Constant Weight Code [AAML]
Minoru Kuribayashi, Tatsuya Yasui, Asad U. Malik, N. Funabiki
07 Jul 2021

Universal approximation and model compression for radial neural networks
I. Ganev, Twan van Laarhoven, Robin Walters
06 Jul 2021

Generalizing Nucleus Recognition Model in Multi-source Images via Pruning
Jiatong Cai, Chenglu Zhu, C. Cui, Honglin Li, Tong Wu, Shichuan Zhang, Lin Yang
06 Jul 2021

Connectivity Matters: Neural Network Pruning Through the Lens of Effective Sparsity
Artem Vysogorets, Julia Kempe
05 Jul 2021

One-Cycle Pruning: Pruning ConvNets Under a Tight Training Budget
Nathan Hubens, M. Mancas, B. Gosselin, Marius Preda, T. Zaharia
05 Jul 2021

Partition and Code: learning how to compress graphs
Giorgos Bouritsas, Andreas Loukas, Nikolaos Karalias, M. Bronstein
05 Jul 2021

A Generalized Lottery Ticket Hypothesis
Ibrahim Alabdulmohsin, L. Markeeva, Daniel Keysers, Ilya O. Tolstikhin
03 Jul 2021

A Lottery Ticket Hypothesis Framework for Low-Complexity Device-Robust Neural Acoustic Scene Classification
Hao Yen, Chao-Han Huck Yang, Hu Hu, Sabato Marco Siniscalchi, Qing Wang, ..., Yuanjun Zhao, Yuzhong Wu, Yannan Wang, Jun Du, Chin-Hui Lee
03 Jul 2021

Learned Token Pruning for Transformers
Sehoon Kim, Sheng Shen, D. Thorsley, A. Gholami, Woosuk Kwon, Joseph Hassoun, Kurt Keutzer
02 Jul 2021

What do End-to-End Speech Models Learn about Speaker, Language and Channel Information? A Layer-wise and Neuron-level Analysis
Shammur A. Chowdhury, Nadir Durrani, Ahmed M. Ali
01 Jul 2021

Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?
Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, ..., Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, Yanzhi Wang
01 Jul 2021

Analytic Insights into Structure and Rank of Neural Network Hessian Maps [FAtt]
Sidak Pal Singh, Gregor Bachmann, Thomas Hofmann
30 Jun 2021

Tackling Catastrophic Forgetting and Background Shift in Continual Semantic Segmentation [CLL]
Arthur Douillard, Yifu Chen, Arnaud Dapogny, Matthieu Cord
29 Jun 2021

Laplace Redux -- Effortless Bayesian Deep Learning [BDL, UQCV]
Erik A. Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, Philipp Hennig
28 Jun 2021

LNS-Madam: Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update [MQ]
Jiawei Zhao, Steve Dai, Rangharajan Venkatesan, Brian Zimmer, Mustafa Ali, Xuan Li, Brucek Khailany, B. Dally, Anima Anandkumar
26 Jun 2021

Sparse Flows: Pruning Continuous-depth Models
Lucas Liebenwein, Ramin Hasani, Alexander Amini, Daniela Rus
24 Jun 2021

AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks [AI4CE]
Alexandra Peste, Eugenia Iofinova, Adrian Vladu, Dan Alistarh
23 Jun 2021

Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu
19 Jun 2021

BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models
Elad Ben-Zaken, Shauli Ravfogel, Yoav Goldberg
18 Jun 2021

Pruning Randomly Initialized Neural Networks with Iterative Randomization
Daiki Chijiwa, Shin'ya Yamaguchi, Yasutoshi Ida, Kenji Umakoshi, T. Inoue
17 Jun 2021

A Random CNN Sees Objects: One Inductive Bias of CNN and Its Applications [SSL]
Yun Cao, Jianxin Wu
17 Jun 2021

Improving DNN Fault Tolerance using Weight Pruning and Differential Crossbar Mapping for ReRAM-based Edge AI [AAML]
Geng Yuan, Zhiheng Liao, Xiaolong Ma, Yuxuan Cai, Zhenglun Kong, ..., Hongwu Peng, Ning Liu, Ao Ren, Jinhui Wang, Yanzhi Wang
16 Jun 2021

Masked Training of Neural Networks with Partial Gradients
Amirkeivan Mohtashami, Martin Jaggi, Sebastian U. Stich
16 Jun 2021

Algorithm to Compilation Co-design: An Integrated View of Neural Network Sparsity
Fu-Ming Guo, Austin Huang
16 Jun 2021

Efficient Micro-Structured Weight Unification and Pruning for Neural Network Compression
Sheng Lin, Wei Jiang, Wei Wang, Kaidi Xu, Yanzhi Wang, Shan Liu, Songnan Li
15 Jun 2021

CoDERT: Distilling Encoder Representations with Co-learning for Transducer-based Speech Recognition
Rupak Vignesh Swaminathan, Brian King, Grant P. Strimel, J. Droppo, Athanasios Mouchtaris
14 Jun 2021

Why Can You Lay Off Heads? Investigating How BERT Heads Transfer
Ting-Rui Chiang, Yun-Nung Chen
14 Jun 2021

Towards Understanding Iterative Magnitude Pruning: Why Lottery Tickets Win
Jaron Maene, Mingxiao Li, Marie-Francine Moens
13 Jun 2021

Sparse PointPillars: Maintaining and Exploiting Input Sparsity to Improve Runtime on Embedded Systems [3DPC]
Kyle Vedder, Eric Eaton
12 Jun 2021

A Low-Complexity Deep Learning Framework For Acoustic Scene Classification
L. D. Pham, H. Tang, Anahid N. Jalali, Alexander Schindler, Ross King
12 Jun 2021

DECORE: Deep Compression with Reinforcement Learning [AI4CE]
Manoj Alwani, Yang Wang, Vashisht Madhavan
11 Jun 2021

PARP: Prune, Adjust and Re-Prune for Self-Supervised Speech Recognition [VLM]
Cheng-I Jeff Lai, Yang Zhang, Alexander H. Liu, Shiyu Chang, Yi-Lun Liao, Yung-Sung Chuang, Kaizhi Qian, Sameer Khurana, David D. Cox, James R. Glass
10 Jun 2021

AKE-GNN: Effective Graph Learning with Adaptive Knowledge Exchange
Liang Zeng, Jin Xu, Zijun Yao, Yanqiao Zhu, Jian Li
10 Jun 2021

Ex uno plures: Splitting One Model into an Ensemble of Subnetworks [UQCV]
Zhilu Zhang, Vianne R. Gao, M. Sabuncu
09 Jun 2021

Handcrafted Backdoors in Deep Neural Networks
Sanghyun Hong, Nicholas Carlini, Alexey Kurakin
08 Jun 2021

XtremeDistilTransformers: Task Transfer for Task-agnostic Distillation
Subhabrata Mukherjee, Ahmed Hassan Awadallah, Jianfeng Gao
08 Jun 2021

Chasing Sparsity in Vision Transformers: An End-to-End Exploration [ViT]
Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang, Zhangyang Wang
08 Jun 2021

Dynamic Sparse Training for Deep Reinforcement Learning
Ghada Sokar, Elena Mocanu, Decebal Constantin Mocanu, Mykola Pechenizkiy, Peter Stone
08 Jun 2021

Heavy Tails in SGD and Compressibility of Overparametrized Neural Networks
Melih Barsbey, Romain Chor, Murat A. Erdogdu, Gaël Richard, Umut Simsekli
07 Jun 2021

MONCAE: Multi-Objective Neuroevolution of Convolutional Autoencoders
Daniel Dimanov, E. Balaguer-Ballester, Colin Singleton, Shahin Rostami
07 Jun 2021

Top-KAST: Top-K Always Sparse Training
Siddhant M. Jayakumar, Razvan Pascanu, Jack W. Rae, Simon Osindero, Erich Elsen
07 Jun 2021

Efficient Lottery Ticket Finding: Less Data is More
Zhenyu Zhang, Xuxi Chen, Tianlong Chen, Zhangyang Wang
06 Jun 2021

Self-Damaging Contrastive Learning [CLL]
Ziyu Jiang, Tianlong Chen, Bobak J. Mortazavi, Zhangyang Wang
06 Jun 2021

Feature Flow Regularization: Improving Structured Sparsity in Deep Neural Networks
Yue Wu, Yuan Lan, Luchan Zhang, Yang Xiang
05 Jun 2021

Can Subnetwork Structure be the Key to Out-of-Distribution Generalization? [OOD]
Dinghuai Zhang, Kartik Ahuja, Yilun Xu, Yisen Wang, Aaron Courville
05 Jun 2021

Solving hybrid machine learning tasks by traversing weight space geodesics
G. Raghavan, Matt Thomson
05 Jun 2021

Neural Architecture Search via Bregman Iterations
Leon Bungert, Tim Roith, Daniel Tenbrinck, Martin Burger
04 Jun 2021

GANs Can Play Lottery Tickets Too [GAN]
Xuxi Chen, Zhenyu Zhang, Yongduo Sui, Tianlong Chen
31 May 2021