The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
arXiv:1803.03635 · 9 March 2018

Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"

50 of 2,030 citing papers shown.

Neural Architecture Search of Deep Priors: Towards Continual Learning without Catastrophic Interference
Martin Mundt, Iuliia Pliushch, Visvanathan Ramesh · UQCV, BDL · 14 Apr 2021 · 56 / 6 / 0

The Impact of Activation Sparsity on Overfitting in Convolutional Neural Networks
Karim Huesmann, Luis Garcia Rodriguez, Lars Linsen, Benjamin Risse · 13 Apr 2021 · 45 / 4 / 0

Structural analysis of an all-purpose question answering model
Vincent Micheli, Quentin Heinrich, François Fleuret, Wacim Belblidia · 13 Apr 2021 · 43 / 3 / 0

Generalization bounds via distillation
Daniel J. Hsu, Ziwei Ji, Matus Telgarsky, Lan Wang · FedML · 12 Apr 2021 · 57 / 34 / 0

A hybrid inference system for improved curvature estimation in the level-set method using machine learning
Luis Ángel Larios-Cárdenas, Frédéric Gibou · 07 Apr 2021 · 56 / 6 / 0

Going deeper with Image Transformers
Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, Hervé Jégou · ViT · 31 Mar 2021 · 181 / 1,024 / 0

Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition
Guangrun Wang, Liang Lin, Rongcong Chen, Guangcong Wang, Jiqi Zhang · OOD · 31 Mar 2021 · 47 / 9 / 0

Fixing the Teacher-Student Knowledge Discrepancy in Distillation
Jiangfan Han, Mengya Gao, Yujie Wang, Quanquan Li, Hongsheng Li, Xiaogang Wang · 31 Mar 2021 · 41 / 3 / 0

The Elastic Lottery Ticket Hypothesis
Xiaohan Chen, Yu Cheng, Shuohang Wang, Zhe Gan, Jingjing Liu, Zhangyang Wang · OOD · 30 Mar 2021 · 81 / 34 / 0

[Reproducibility Report] Rigging the Lottery: Making All Tickets Winners
Varun Sundar, Rajat Vadiraj Dwaraknath · 29 Mar 2021 · 53 / 5 / 0

Self-Constructing Neural Networks Through Random Mutation
Samuel Schmidgall · ODL, 3DV · 29 Mar 2021 · 30 / 1 / 0

A Practical Survey on Faster and Lighter Transformers
Quentin Fournier, G. Caron, Daniel Aloise · 26 Mar 2021 · 132 / 102 / 0

RCT: Resource Constrained Training for Edge AI
Tian Huang, Yaoyu Zhang, Ming Yan, Qiufeng Wang, Rick Siow Mong Goh · 26 Mar 2021 · 82 / 8 / 0

Active multi-fidelity Bayesian online changepoint detection
Gregory W. Gundersen, Diana Cai, Chuteng Zhou, Barbara E. Engelhardt, Ryan P. Adams · 26 Mar 2021 · 54 / 10 / 0

Channel Scaling: A Scale-and-Select Approach for Transfer Learning
Ken C. L. Wong, Satyananda Kashyap, Mehdi Moradi · 22 Mar 2021 · 30 / 3 / 0

Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces
Tao Li, Lei Tan, Qinghua Tao, Yipeng Liu, Xiaolin Huang · 20 Mar 2021 · 85 / 10 / 0

Cascade Weight Shedding in Deep Neural Networks: Benefits and Pitfalls for Network Pruning
K. Azarian, Fatih Porikli · CVBM · 19 Mar 2021 · 43 / 0 / 0

Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network
James Diffenderfer, B. Kailkhura · MQ · 17 Mar 2021 · 97 / 76 / 0

Efficient Sparse Artificial Neural Networks
Seyed Majid Naji, Azra Abtahi, F. Marvasti · 13 Mar 2021 · 49 / 3 / 0

A Quadratic Actor Network for Model-Free Reinforcement Learning
Matthias Weissenbacher, Yoshinobu Kawahara · 11 Mar 2021 · 15 / 0 / 0

Recent Advances on Neural Network Pruning at Initialization
Huan Wang, Can Qin, Yue Bai, Yulun Zhang, Yun Fu · CVBM · 11 Mar 2021 · 96 / 67 / 0

Quantization-Guided Training for Compact TinyML Models
Sedigh Ghamari, Koray Ozcan, Thu Dinh, A. Melnikov, Juan Carvajal, Jan Ernst, S. Chai · MQ · 10 Mar 2021 · 60 / 17 / 0

MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks
Alexandre Ramé, Rémy Sun, Matthieu Cord · UQCV · 10 Mar 2021 · 99 / 60 / 0

Robustness to Pruning Predicts Generalization in Deep Neural Networks
Lorenz Kuhn, Clare Lyle, Aidan Gomez, Jonas Rothfuss, Y. Gal · 10 Mar 2021 · 91 / 14 / 0

Proof-of-Learning: Definitions and Practice
Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot · AAML · 09 Mar 2021 · 84 / 106 / 0

Knowledge Evolution in Neural Networks
Ahmed Taha, Abhinav Shrivastava, L. Davis · 09 Mar 2021 · 84 / 22 / 0

Pufferfish: Communication-efficient Models At No Extra Cost
Hongyi Wang, Saurabh Agarwal, Dimitris Papailiopoulos · 05 Mar 2021 · 85 / 59 / 0

Artificial Neural Networks generated by Low Discrepancy Sequences
A. Keller, Matthijs Van Keirsbilck · 05 Mar 2021 · 35 / 5 / 0

Teachers Do More Than Teach: Compressing Image-to-Image Models
Qing Jin, Jian Ren, Oliver J. Woodford, Jiazhuo Wang, Geng Yuan, Yanzhi Wang, Sergey Tulyakov · 05 Mar 2021 · 78 / 56 / 0

Clusterability in Neural Networks
Daniel Filan, Stephen Casper, Shlomi Hod, Cody Wild, Andrew Critch, Stuart J. Russell · GNN · 04 Mar 2021 · 64 / 32 / 0

Lost in Pruning: The Effects of Pruning Neural Networks beyond Test Accuracy
Lucas Liebenwein, Cenk Baykal, Brandon Carter, David K Gifford, Daniela Rus · AAML · 04 Mar 2021 · 84 / 73 / 0

Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings
Lili Chen, Kimin Lee, A. Srinivas, Pieter Abbeel · OffRL · 04 Mar 2021 · 70 / 11 / 0

The Rediscovery Hypothesis: Language Models Need to Meet Linguistics
Vassilina Nikoulina, Maxat Tezekbayev, Nuradil Kozhakhmet, Madina Babazhanova, Matthias Gallé, Z. Assylbekov · 02 Mar 2021 · 61 / 8 / 0

Sparse Training Theory for Scalable and Efficient Agents
Decebal Constantin Mocanu, Elena Mocanu, T. Pinto, Selima Curci, Phuong H. Nguyen, M. Gibescu, D. Ernst, Z. Vale · 02 Mar 2021 · 80 / 18 / 0

AdeNet: Deep learning architecture that identifies damaged electrical insulators in power lines
Ademola Okerinde, L. Shamir, W. Hsu, T. Theis · 02 Mar 2021 · 40 / 3 / 0

Early-Bird GCNs: Graph-Network Co-Optimization Towards More Efficient GCN Training and Inference via Drawing Early-Bird Lottery Tickets
Haoran You, Zhihan Lu, Zijian Zhou, Y. Fu, Yingyan Lin · GNN · 01 Mar 2021 · 107 / 33 / 0

Asymptotic Risk of Overparameterized Likelihood Models: Double Descent Theory for Deep Neural Networks
Ryumei Nakada, Masaaki Imaizumi · 28 Feb 2021 · 49 / 2 / 0

Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective
Tianlong Chen, Yu Cheng, Zhe Gan, Jingjing Liu, Zhangyang Wang · 28 Feb 2021 · 82 / 52 / 0

Consistent Sparse Deep Learning: Theory and Computation
Y. Sun, Qifan Song, F. Liang · BDL · 25 Feb 2021 · 83 / 30 / 0

Understanding Catastrophic Forgetting and Remembering in Continual Learning with Optimal Relevance Mapping
Prakhar Kaushik, Alex Gain, Adam Kortylewski, Alan Yuille · CLL · 22 Feb 2021 · 54 / 71 / 0

Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not?
Ning Liu, Geng Yuan, Zhengping Che, Xuan Shen, Xiaolong Ma, Qing Jin, Jian Ren, Jian Tang, Sijia Liu, Yanzhi Wang · 19 Feb 2021 · 92 / 32 / 0

An Information-Theoretic Justification for Model Pruning
Berivan Isik, Tsachy Weissman, Albert No · 16 Feb 2021 · 164 / 37 / 0

Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
Itay Hubara, Brian Chmiel, Moshe Island, Ron Banner, S. Naor, Daniel Soudry · 16 Feb 2021 · 121 / 119 / 0

SiMaN: Sign-to-Magnitude Network Binarization
Mingbao Lin, Rongrong Ji, Zi-Han Xu, Baochang Zhang, Chia-Wen Lin, Ling Shao · AAML, MQ · 16 Feb 2021 · 84 / 28 / 0

Scaling Up Exact Neural Network Compression by ReLU Stability
Thiago Serra, Xin Yu, Abhinav Kumar, Srikumar Ramalingam · 15 Feb 2021 · 66 / 24 / 0

Neural Network Compression for Noisy Storage Devices
Berivan Isik, Kristy Choi, Xin-Yang Zheng, Tsachy Weissman, Stefano Ermon, H. P. Wong, Armin Alaghi · 15 Feb 2021 · 63 / 13 / 0

ChipNet: Budget-Aware Pruning with Heaviside Continuous Approximations
Rishabh Tiwari, Udbhav Bamba, Arnav Chavan, D. K. Gupta · 14 Feb 2021 · 64 / 31 / 0

Neural Architecture Search as Program Transformation Exploration
Jack Turner, Elliot J. Crowley, Michael F. P. O'Boyle · 12 Feb 2021 · 62 / 14 / 0

Dense for the Price of Sparse: Improved Performance of Sparsely Initialized Networks via a Subspace Offset
Ilan Price, Jared Tanner · 12 Feb 2021 · 63 / 15 / 0

Learning from Shader Program Traces
Yuting Yang, Connelly Barnes, Adam Finkelstein · 08 Feb 2021 · 29 / 3 / 0