arXiv:1803.03635
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
9 March 2018
Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" (50 of 2,030 shown)
- Bringing AI To Edge: From Deep Learning's Perspective. Di Liu, Hao Kong, Xiangzhong Luo, Weichen Liu, Ravi Subramaniam. 25 Nov 2020.
- Two-Way Neural Machine Translation: A Proof of Concept for Bidirectional Translation Modeling using a Two-Dimensional Grid. Parnia Bahar, Christopher Brix, Hermann Ney. 24 Nov 2020.
- PLOP: Learning without Forgetting for Continual Semantic Segmentation. Arthur Douillard, Yifu Chen, Arnaud Dapogny, Matthieu Cord [CLL]. 23 Nov 2020.
- Rethinking Weight Decay For Efficient Neural Network Pruning. Hugo Tessier, Vincent Gripon, Mathieu Léonardon, M. Arzel, T. Hannagan, David Bertrand. 20 Nov 2020.
- Data-Informed Global Sparseness in Attention Mechanisms for Deep Neural Networks. Ileana Rugina, Rumen Dangovski, L. Jing, Preslav Nakov, Marin Soljacic. 20 Nov 2020.
- Dynamic Hard Pruning of Neural Networks at the Edge of the Internet. Lorenzo Valerio, F. M. Nardini, A. Passarella, R. Perego. 17 Nov 2020.
- Learning Efficient GANs for Image Translation via Differentiable Masks and co-Attention Distillation. Shaojie Li, Mingbao Lin, Yan Wang, Chia-Wen Lin, Ling Shao, Rongrong Ji. 17 Nov 2020.
- LOss-Based SensiTivity rEgulaRization: towards deep sparse neural networks. Enzo Tartaglione, Andrea Bragagnolo, Attilio Fiandrotti, Marco Grangetto [ODL, UQCV]. 16 Nov 2020.
- Efficient Variational Inference for Sparse Deep Learning with Theoretical Guarantee. Jincheng Bai, Qifan Song, Guang Cheng [BDL]. 15 Nov 2020.
- Representing Deep Neural Networks Latent Space Geometries with Graphs. Carlos Lassance, Vincent Gripon, Antonio Ortega [AI4CE]. 14 Nov 2020.
- Using noise to probe recurrent neural network structure and prune synapses. Eli Moore, Rishidev Chaudhuri. 14 Nov 2020.
- LEAN: graph-based pruning for convolutional neural networks by extracting longest chains. R. Schoonhoven, A. Hendriksen, D. Pelt, K. Batenburg [3DPC]. 13 Nov 2020.
- Testing the Genomic Bottleneck Hypothesis in Hebbian Meta-Learning. Rasmus Berg Palm, Elias Najarro, S. Risi. 13 Nov 2020.
- Distributed Sparse SGD with Majority Voting. Kerem Ozfatura, Emre Ozfatura, Deniz Gunduz [FedML]. 12 Nov 2020.
- A Variational Infinite Mixture for Probabilistic Inverse Dynamics Learning. Hany Abdulsamad, Peter Nickl, Pascal Klink, Jan Peters. 10 Nov 2020.
- Efficient and Transferable Adversarial Examples from Bayesian Neural Networks. Martin Gubri, Maxime Cordy, Mike Papadakis, Yves Le Traon, Koushik Sen [AAML]. 10 Nov 2020.
- Unwrapping The Black Box of Deep ReLU Networks: Interpretability, Diagnostics, and Simplification. Agus Sudjianto, William Knauth, Rahul Singh, Zebin Yang, Aijun Zhang [FAtt]. 08 Nov 2020.
- Gaussian Processes with Skewed Laplace Spectral Mixture Kernels for Long-term Forecasting. Kai Chen, Twan van Laarhoven, E. Marchiori [AI4TS]. 08 Nov 2020.
- Rethinking the Value of Transformer Components. Wenxuan Wang, Zhaopeng Tu. 07 Nov 2020.
- Low-Complexity Models for Acoustic Scene Classification Based on Receptive Field Regularization and Frequency Damping. Khaled Koutini, Florian Henkel, Hamid Eghbalzadeh, Gerhard Widmer. 05 Nov 2020.
- Observation Space Matters: Benchmark and Optimization Algorithm. J. Kim, Sehoon Ha [OOD, OffRL]. 02 Nov 2020.
- Sparsity-Control Ternary Weight Networks. Xiang Deng, Zhongfei Zhang [MQ]. 01 Nov 2020.
- Methods for Pruning Deep Neural Networks. S. Vadera, Salem Ameen [3DPC]. 31 Oct 2020.
- Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough. Mao Ye, Lemeng Wu, Qiang Liu. 29 Oct 2020.
- Bayesian Deep Learning via Subnetwork Inference. Erik A. Daxberger, Eric T. Nalisnick, J. Allingham, Javier Antorán, José Miguel Hernández-Lobato [UQCV, BDL]. 28 Oct 2020.
- A Bayesian Perspective on Training Speed and Model Selection. Clare Lyle, Lisa Schut, Binxin Ru, Y. Gal, Mark van der Wilk. 27 Oct 2020.
- FastFormers: Highly Efficient Transformer Models for Natural Language Understanding. Young Jin Kim, Hany Awadalla [AI4CE]. 26 Oct 2020.
- Structural Prior Driven Regularized Deep Learning for Sonar Image Classification. Isaac D. Gerg, V. Monga. 26 Oct 2020.
- ShiftAddNet: A Hardware-Inspired Deep Network. Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin [OOD, MQ]. 24 Oct 2020.
- On Convergence and Generalization of Dropout Training. Poorya Mianjy, R. Arora. 23 Oct 2020.
- Brain-Inspired Learning on Neuromorphic Substrates. Friedemann Zenke, Emre Neftci. 22 Oct 2020.
- Not all parameters are born equal: Attention is mostly what you need. Nikolay Bogoychev [MoE]. 22 Oct 2020.
- PHEW: Constructing Sparse Networks that Learn Fast and Generalize Well without Training Data. S. M. Patil, C. Dovrolis. 22 Oct 2020.
- Mixed-Precision Embedding Using a Cache. J. Yang, Jianyu Huang, Jongsoo Park, P. T. P. Tang, Andrew Tulloch. 21 Oct 2020.
- Analyzing the Source and Target Contributions to Predictions in Neural Machine Translation. Elena Voita, Rico Sennrich, Ivan Titov. 21 Oct 2020.
- Learning to Embed Categorical Features without Embedding Tables for Recommendation. Wang-Cheng Kang, D. Cheng, Tiansheng Yao, Xinyang Yi, Ting-Li Chen, Lichan Hong, Ed H. Chi [LMTD, CML, DML]. 21 Oct 2020.
- Edge Bias in Federated Learning and its Solution by Buffered Knowledge Distillation. Sang-ho Lee, Kiyoon Yoo, Nojun Kwak [FedML]. 20 Oct 2020.
- Softer Pruning, Incremental Regularization. Linhang Cai, Zhulin An, Chuanguang Yang, Yongjun Xu. 19 Oct 2020.
- Variational Capsule Encoder. Harish RaviPrakash, Syed Muhammad Anwar, Ulas Bagci [BDL, DRL]. 18 Oct 2020.
- Adaptive Dense-to-Sparse Paradigm for Pruning Online Recommendation System with Non-Stationary Data. Mao Ye, Dhruv Choudhary, Jiecao Yu, Ellie Wen, Zeliang Chen, Jiyan Yang, Jongsoo Park, Qiang Liu, A. Kejariwal. 16 Oct 2020.
- An Approximation Algorithm for Optimal Subarchitecture Extraction. Adrian de Wynter. 16 Oct 2020.
- Quantile regression with deep ReLU Networks: Estimators and minimax rates. Oscar Hernan Madrid Padilla, Wesley Tansey, Yanzhen Chen. 16 Oct 2020.
- Layer-adaptive sparsity for the Magnitude-based Pruning. Jaeho Lee, Sejun Park, Sangwoo Mo, SungSoo Ahn, Jinwoo Shin. 15 Oct 2020.
- Towards Accurate Quantization and Pruning via Data-free Knowledge Transfer. Chen Zhu, Zheng Xu, Ali Shafahi, Manli Shu, Amin Ghiasi, Tom Goldstein [MQ]. 14 Oct 2020.
- Training independent subnetworks for robust prediction. Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew M. Dai, Dustin Tran [UQCV, OOD]. 13 Oct 2020.
- Pretrained Transformers for Text Ranking: BERT and Beyond. Jimmy J. Lin, Rodrigo Nogueira, Andrew Yates [VLM]. 13 Oct 2020.
- Embedded methods for feature selection in neural networks. K. VinayVarma. 12 Oct 2020.
- Revisiting Neural Architecture Search. Anubhav Garg, Amit Kumar Saha, Debo Dutta. 12 Oct 2020.
- Glance and Focus: a Dynamic Approach to Reducing Spatial Redundancy in Image Classification. Yulin Wang, Kangchen Lv, Rui Huang, Shiji Song, Le Yang, Gao Huang [3DH]. 11 Oct 2020.
- Advanced Dropout: A Model-free Methodology for Bayesian Dropout Optimization. Jiyang Xie, Zhanyu Ma, Jianjun Lei, Guoqiang Zhang, Jing-Hao Xue, Zheng-Hua Tan, Jun Guo [BDL]. 11 Oct 2020.