ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (arXiv:1803.03635)

9 March 2018
Jonathan Frankle
Michael Carbin

Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"

50 / 2,030 papers shown
Born-Again Tree Ensembles
Thibaut Vidal
Toni Pacheco
Maximilian Schiffer
138
54
0
24 Mar 2020
Steepest Descent Neural Architecture Optimization: Escaping Local Optimum with Signed Neural Splitting
Lemeng Wu
Mao Ye
Qi Lei
Jason D. Lee
Qiang Liu
83
15
0
23 Mar 2020
Convergence of Artificial Intelligence and High Performance Computing on NSF-supported Cyberinfrastructure
Eliu A. Huerta
Asad Khan
Edward Davis
Colleen Bushell
W. Gropp
...
S. Koric
William T. C. Kramer
Brendan McGinty
Kenton McHenry
Aaron Saxton
AI4CE
107
45
0
18 Mar 2020
SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration
Jun Shi
Jianfeng Xu
K. Tasaka
Zhibo Chen
74
25
0
12 Mar 2020
How Powerful Are Randomly Initialized Pointcloud Set Functions?
Aditya Sanghi
P. Jayaraman
3DPC
46
3
0
11 Mar 2020
Towards CRISP-ML(Q): A Machine Learning Process Model with Quality Assurance Methodology
Stefan Studer
T. Bui
C. Drescher
A. Hanuschkin
Ludwig Winkler
S. Peters
Klaus-Robert Müller
130
178
0
11 Mar 2020
Pruned Neural Networks are Surprisingly Modular
Daniel Filan
Shlomi Hod
Cody Wild
Andrew Critch
Stuart J. Russell
30
8
0
10 Mar 2020
Channel Pruning via Optimal Thresholding
Yun Ye
Ganmei You
Jong-Kae Fwu
Xia Zhu
Q. Yang
Yuan Zhu
47
12
0
10 Mar 2020
Π-nets: Deep Polynomial Neural Networks
Grigorios G. Chrysos
Stylianos Moschoglou
Giorgos Bouritsas
Yannis Panagakis
Jiankang Deng
Stefanos Zafeiriou
73
60
0
08 Mar 2020
FedLoc: Federated Learning Framework for Data-Driven Cooperative Localization and Location Data Processing
Feng Yin
Zhidi Lin
Yue Xu
Qinglei Kong
Deshi Li
Sergios Theodoridis
Shuguang Cui
FedML
135
4
0
08 Mar 2020
What is the State of Neural Network Pruning?
Davis W. Blalock
Jose Javier Gonzalez Ortiz
Jonathan Frankle
John Guttag
286
1,056
0
06 Mar 2020
Towards Practical Lottery Ticket Hypothesis for Adversarial Training
Bai Li
Shiqi Wang
Yunhan Jia
Yantao Lu
Zhenyu Zhong
Lawrence Carin
Suman Jana
AAML
142
14
0
06 Mar 2020
Train-by-Reconnect: Decoupling Locations of Weights from their Values
Yushi Qiu
R. Suda
20
0
0
05 Mar 2020
Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda
Jonathan Frankle
Michael Carbin
306
388
0
05 Mar 2020
Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection
Mao Ye
Chengyue Gong
Lizhen Nie
Denny Zhou
Adam R. Klivans
Qiang Liu
91
111
0
03 Mar 2020
A New MRAM-based Process In-Memory Accelerator for Efficient Neural Network Training with Floating Point Precision
Hongjie Wang
Yang Zhao
Chaojian Li
Yue Wang
Yingyan Lin
35
14
0
02 Mar 2020
MBGD-RDA Training and Rule Pruning for Concise TSK Fuzzy Regression Models
Dongrui Wu
10
1
0
01 Mar 2020
Channel Equilibrium Networks for Learning Deep Representation
Wenqi Shao
Shitao Tang
Xingang Pan
Ping Tan
Xiaogang Wang
Ping Luo
67
17
0
29 Feb 2020
Learned Threshold Pruning
K. Azarian
Yash Bhalgat
Jinwon Lee
Tijmen Blankevoort
MQ
74
38
0
28 Feb 2020
Learning in the Frequency Domain
Kai Xu
Minghai Qin
Fei Sun
Yuhao Wang
Yen-kuang Chen
Fengbo Ren
118
409
0
27 Feb 2020
A Primer in BERTology: What we know about how BERT works
Anna Rogers
Olga Kovaleva
Anna Rumshisky
OffRL
122
1,505
0
27 Feb 2020
Deep Randomized Neural Networks
Claudio Gallicchio
Simone Scardapane
OOD
90
65
0
27 Feb 2020
Compressing Large-Scale Transformer-Based Models: A Case Study on BERT
Prakhar Ganesh
Yao Chen
Xin Lou
Mohammad Ali Khan
Yifan Yang
Hassan Sajjad
Preslav Nakov
Deming Chen
Marianne Winslett
AI4CE
124
201
0
27 Feb 2020
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
Zhuohan Li
Eric Wallace
Sheng Shen
Kevin Lin
Kurt Keutzer
Dan Klein
Joseph E. Gonzalez
126
151
0
26 Feb 2020
Predicting Neural Network Accuracy from Weights
Thomas Unterthiner
Daniel Keysers
Sylvain Gelly
Olivier Bousquet
Ilya O. Tolstikhin
69
107
0
26 Feb 2020
HYDRA: Pruning Adversarially Robust Neural Networks
Vikash Sehwag
Shiqi Wang
Prateek Mittal
Suman Jana
AAML
63
25
0
24 Feb 2020
The Early Phase of Neural Network Training
Jonathan Frankle
D. Schwab
Ari S. Morcos
94
174
0
24 Feb 2020
Neuron Shapley: Discovering the Responsible Neurons
Amirata Ghorbani
James Zou
FAtt, TDI
63
113
0
23 Feb 2020
Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning
Mitchell A. Gordon
Kevin Duh
Nicholas Andrews
VLM
72
343
0
19 Feb 2020
Robust Pruning at Initialization
Soufiane Hayou
Jean-François Ton
Arnaud Doucet
Yee Whye Teh
43
47
0
19 Feb 2020
Identifying Critical Neurons in ANN Architectures using Mixed Integer Programming
M. Elaraby
Guy Wolf
Margarida Carvalho
38
5
0
17 Feb 2020
DeepLight: Deep Lightweight Feature Interactions for Accelerating CTR Predictions in Ad Serving
Wei Deng
Junwei Pan
Tian Zhou
Deguang Kong
Aaron Flores
Guang Lin
25
4
0
17 Feb 2020
The Differentially Private Lottery Ticket Mechanism
Lovedeep Gondara
Ke Wang
Ricardo Silva Carvalho
13
3
0
16 Feb 2020
Lookahead: A Far-Sighted Alternative of Magnitude-based Pruning
Sejun Park
Jaeho Lee
Sangwoo Mo
Jinwoo Shin
56
94
0
12 Feb 2020
A study of local optima for learning feature interactions using neural networks
Yangzi Guo
Adrian Barbu
115
1
0
11 Feb 2020
Deep Gated Networks: A framework to understand training and generalisation in deep learning
Chandrashekar Lakshminarayanan
Amit Singh
AI4CE
39
1
0
10 Feb 2020
Calibrate and Prune: Improving Reliability of Lottery Tickets Through Prediction Calibration
Bindya Venkatesh
Jayaraman J. Thiagarajan
Kowshik Thopalli
P. Sattigeri
67
14
0
10 Feb 2020
Convolutional Neural Network Pruning Using Filter Attenuation
Morteza Mousa Pasandi
M. Hajabdollahi
N. Karimi
S. Samavi
S. Shirani
3DPC
27
3
0
09 Feb 2020
Soft Threshold Weight Reparameterization for Learnable Sparsity
Aditya Kusupati
Vivek Ramanujan
Raghav Somani
Mitchell Wortsman
Prateek Jain
Sham Kakade
Ali Farhadi
161
247
0
08 Feb 2020
PixelHop++: A Small Successive-Subspace-Learning-Based (SSL-based) Model for Image Classification
Yueru Chen
Mozhdeh Rouhsedaghat
Suya You
Raghuveer Rao
C.-C. Jay Kuo
57
70
0
08 Feb 2020
Activation Density driven Energy-Efficient Pruning in Training
Timothy Foldy-Porto
Yeshwanth Venkatesha
Priyadarshini Panda
43
4
0
07 Feb 2020
BERT-of-Theseus: Compressing BERT by Progressive Module Replacing
Canwen Xu
Wangchunshu Zhou
Tao Ge
Furu Wei
Ming Zhou
336
201
0
07 Feb 2020
Multimodal Controller for Generative Models
Enmao Diao
Jie Ding
Vahid Tarokh
60
3
0
07 Feb 2020
BABO: Background Activation Black-Out for Efficient Object Detection
Byungseok Roh
Hankyu Cho
Myung-Ho Ju
Soon Hyung Pyo
ObjD
16
1
0
05 Feb 2020
Proving the Lottery Ticket Hypothesis: Pruning is All You Need
Eran Malach
Gilad Yehudai
Shai Shalev-Shwartz
Ohad Shamir
114
276
0
03 Feb 2020
MEMO: A Deep Network for Flexible Combination of Episodic Memories
Andrea Banino
Adria Puigdomenech Badia
Raphael Köster
Martin Chadwick
V. Zambaldi
Demis Hassabis
Caswell Barry
M. Botvinick
D. Kumaran
Charles Blundell
KELM
84
35
0
29 Jan 2020
Progressive Local Filter Pruning for Image Retrieval Acceleration
Xiaodong Wang
Zhedong Zheng
Yang He
Fei Yan
Zhi-qiang Zeng
Yi Yang
82
35
0
24 Jan 2020
Channel Pruning via Automatic Structure Search
Mingbao Lin
Rongrong Ji
Yuxin Zhang
Baochang Zhang
Yongjian Wu
Yonghong Tian
124
246
0
23 Jan 2020
Filter Sketch for Network Pruning
Mingbao Lin
Liujuan Cao
Shaojie Li
QiXiang Ye
Yonghong Tian
Jianzhuang Liu
Q. Tian
Rongrong Ji
CLIP, 3DPC
171
82
0
23 Jan 2020
Generalization Bounds for High-dimensional M-estimation under Sparsity Constraint
Xiao-Tong Yuan
Ping Li
70
2
0
20 Jan 2020