arXiv:1803.03635
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
9 March 2018
Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"
Showing 50 of 2,030 citing papers.
- Finding trainable sparse networks through Neural Tangent Transfer. Tianlin Liu, Friedemann Zenke. 15 Jun 2020. 67 · 35 · 0
- Neural gradients are near-lognormal: improved quantized and sparse training. Brian Chmiel, Liad Ben-Uri, Moran Shkolnik, Elad Hoffer, Ron Banner, Daniel Soudry. 15 Jun 2020. [MQ] 65 · 5 · 0
- Optimal Lottery Tickets via SubsetSum: Logarithmic Over-Parameterization is Sufficient. Ankit Pensia, Shashank Rajput, Alliot Nagle, Harit Vishwakarma, Dimitris Papailiopoulos. 14 Jun 2020. 62 · 104 · 0
- High-contrast "gaudy" images improve the training of deep neural network models of visual cortex. Benjamin R. Cowley, Jonathan W. Pillow. 13 Jun 2020. 41 · 10 · 0
- Dynamic Model Pruning with Feedback. Tao R. Lin, Sebastian U. Stich, Luis Barba, Daniil Dmitriev, Martin Jaggi. 12 Jun 2020. 163 · 204 · 0
- A Practical Sparse Approximation for Real Time Recurrent Learning. Jacob Menick, Erich Elsen, Utku Evci, Simon Osindero, Karen Simonyan, Alex Graves. 12 Jun 2020. 89 · 32 · 0
- How many winning tickets are there in one DNN? Kathrin Grosse, Michael Backes. 12 Jun 2020. [UQCV] 36 · 2 · 0
- Neural Path Features and Neural Path Kernel: Understanding the role of gates in deep learning. Chandrashekar Lakshminarayanan, Amit Singh. 11 Jun 2020. [AI4CE] 54 · 10 · 0
- Convolutional neural networks compression with low rank and sparse tensor decompositions. Pavel Kaloshin. 11 Jun 2020. 36 · 1 · 0
- Pruning neural networks without any data by iteratively conserving synaptic flow. Hidenori Tanaka, D. Kunin, Daniel L. K. Yamins, Surya Ganguli. 09 Jun 2020. 198 · 650 · 0
- Towards More Practical Adversarial Attacks on Graph Neural Networks. Jiaqi Ma, Shuangrui Ding, Qiaozhu Mei. 09 Jun 2020. [AAML] 73 · 122 · 0
- A Framework for Neural Network Pruning Using Gibbs Distributions. Alex Labach, S. Valaee. 08 Jun 2020. 33 · 5 · 0
- Differentiable Neural Input Search for Recommender Systems. Weiyu Cheng, Yanyan Shen, Linpeng Huang. 08 Jun 2020. 71 · 36 · 0
- Neural Sparse Representation for Image Restoration. Yuchen Fan, Jiahui Yu, Yiqun Mei, Yulun Zhang, Y. Fu, Ding Liu, Thomas S. Huang. 08 Jun 2020. 38 · 31 · 0
- An Empirical Analysis of the Impact of Data Augmentation on Knowledge Distillation. Deepan Das, Haley Massa, Abhimanyu Kulkarni, Theodoros Rekatsinas. 06 Jun 2020. 58 · 18 · 0
- Accelerating Natural Language Understanding in Task-Oriented Dialog. Ojas Ahuja, Shrey Desai. 05 Jun 2020. [VLM] 20 · 1 · 0
- An Overview of Neural Network Compression. James O'Neill. 05 Jun 2020. [AI4CE] 143 · 99 · 0
- Shapley Value as Principled Metric for Structured Network Pruning. Marco Ancona, Cengiz Öztireli, Markus Gross. 02 Jun 2020. 60 · 9 · 0
- Sparse Perturbations for Improved Convergence in Stochastic Zeroth-Order Optimization. Mayumi Ohta, Nathaniel Berger, Artem Sokolov, Stefan Riezler. 02 Jun 2020. [ODL] 38 · 9 · 0
- Pruning via Iterative Ranking of Sensitivity Statistics. Stijn Verdenius, M. Stol, Patrick Forré. 01 Jun 2020. [AAML] 80 · 38 · 0
- Transferring Inductive Biases through Knowledge Distillation. Samira Abnar, Mostafa Dehghani, Willem H. Zuidema. 31 May 2020. 90 · 60 · 0
- Geometric algorithms for predicting resilience and recovering damage in neural networks. G. Raghavan, Jiayi Li, Matt Thomson. 23 May 2020. [AAML] 22 · 0 · 0
- Feature Purification: How Adversarial Training Performs Robust Deep Learning. Zeyuan Allen-Zhu, Yuanzhi Li. 20 May 2020. [MLT, AAML] 122 · 151 · 0
- Dynamic Sparsity Neural Networks for Automatic Speech Recognition. Zhaofeng Wu, Ding Zhao, Qiao Liang, Jiahui Yu, Anmol Gulati, Ruoming Pang. 16 May 2020. 51 · 41 · 0
- Joint Progressive Knowledge Distillation and Unsupervised Domain Adaptation. Le Thanh Nguyen-Meidine, Eric Granger, M. Kiran, Jose Dolz, Louis-Antoine Blais-Morin. 16 May 2020. 78 · 23 · 0
- Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers. Junjie Liu, Zhe Xu, Runbin Shi, R. Cheung, Hayden Kwok-Hay So. 14 May 2020. 62 · 121 · 0
- RSO: A Gradient Free Sampling Based Approach For Training Deep Neural Networks. Rohun Tripathi, Bharat Singh. 12 May 2020. 38 · 6 · 0
- On the Transferability of Winning Tickets in Non-Natural Image Datasets. M. Sabatelli, M. Kestemont, Pierre Geurts. 11 May 2020. 62 · 15 · 0
- Data-Free Network Quantization With Adversarial Knowledge Distillation. Yoojin Choi, Jihwan P. Choi, Mostafa El-Khamy, Jungwon Lee. 08 May 2020. [MQ] 76 · 121 · 0
- Efficient Exact Verification of Binarized Neural Networks. Kai Jia, Martin Rinard. 07 May 2020. [AAML, MQ] 46 · 59 · 0
- Sources of Transfer in Multilingual Named Entity Recognition. David Mueller, Nicholas Andrews, Mark Dredze. 02 May 2020. 50 · 21 · 0
- When BERT Plays the Lottery, All Tickets Are Winning. Sai Prasanna, Anna Rogers, Anna Rumshisky. 01 May 2020. [MILM] 86 · 187 · 0
- Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima. Enzo Tartaglione, Andrea Bragagnolo, Marco Grangetto. 30 Apr 2020. 66 · 12 · 0
- Out-of-the-box channel pruned networks. Ragav Venkatesan, Gurumurthy Swaminathan, Xiong Zhou, Anna Luo. 30 Apr 2020. 29 · 0 · 0
- Learning to Learn to Disambiguate: Meta-Learning for Few-Shot Word Sense Disambiguation. Nithin Holla, Pushkar Mishra, H. Yannakoudakis, Ekaterina Shutova. 29 Apr 2020. 88 · 28 · 0
- WoodFisher: Efficient Second-Order Approximation for Neural Network Compression. Sidak Pal Singh, Dan Alistarh. 29 Apr 2020. 57 · 28 · 0
- Masking as an Efficient Alternative to Finetuning for Pretrained Language Models. Mengjie Zhao, Tao R. Lin, Fei Mi, Martin Jaggi, Hinrich Schütze. 26 Apr 2020. 75 · 120 · 0
- How fine can fine-tuning be? Learning efficient language models. Evani Radiya-Dixit, Xin Wang. 24 Apr 2020. 53 · 66 · 0
- Convolution-Weight-Distribution Assumption: Rethinking the Criteria of Channel Pruning. Zhongzhan Huang, Wenqi Shao, Xinjiang Wang, Liang Lin, Ping Luo. 24 Apr 2020. 75 · 55 · 0
- SIPA: A Simple Framework for Efficient Networks. Gihun Lee, Sangmin Bae, Jaehoon Oh, Seyoung Yun. 24 Apr 2020. 19 · 1 · 0
- Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond. Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens. 23 Apr 2020. [BDL] 124 · 176 · 0
- Lottery Hypothesis based Unsupervised Pre-training for Model Compression in Federated Learning. Sohei Itahara, Takayuki Nishio, M. Morikura, Koji Yamamoto. 21 Apr 2020. 49 · 12 · 0
- Neural Status Registers. Lukas Faber, Roger Wattenhofer. 15 Apr 2020. 35 · 9 · 0
- Prune2Edge: A Multi-Phase Pruning Pipelines to Deep Ensemble Learning in IIoT. Besher Alhalabi, M. Gaber, S. Basurra. 09 Apr 2020. 18 · 1 · 0
- LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression. Yihuan Mao, Yujing Wang, Chufan Wu, Chen Zhang, Yang-Feng Wang, Yaming Yang, Quanlu Zhang, Yunhai Tong, Jing Bai. 08 Apr 2020. 58 · 74 · 0
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio. Zhengsu Chen, J. Niu, Lingxi Xie, Xuefeng Liu, Longhui Wei, Qi Tian. 06 Apr 2020. 54 · 12 · 0
- Composition of Saliency Metrics for Channel Pruning with a Myopic Oracle. Kaveena Persand, Andrew Anderson, David Gregg. 03 Apr 2020. 26 · 2 · 0
- Learning Sparse & Ternary Neural Networks with Entropy-Constrained Trained Ternarization (EC2T). Arturo Marbán, Daniel Becking, Simon Wiedemann, Wojciech Samek. 02 Apr 2020. [MQ] 51 · 12 · 0
- Nonconvex sparse regularization for deep neural networks and its optimality. Ilsang Ohn, Yongdai Kim. 26 Mar 2020. 61 · 19 · 0
- CAZSL: Zero-Shot Regression for Pushing Models by Generalizing Through Context. Wenyu Zhang, Skyler Seto, Devesh K. Jha. 26 Mar 2020. 71 · 5 · 0