ResearchTrend.AI

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
arXiv:1803.03635 · 9 March 2018

Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" (50 of 746 shown)
SInGE: Sparsity via Integrated Gradients Estimation of Neuron Relevance
Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kévin Bailly · 08 Jul 2022

Models Out of Line: A Fourier Lens on Distribution Shift Robustness
Sara Fridovich-Keil, Brian Bartoldson, James Diffenderfer, B. Kailkhura, P. Bremer · 08 Jul 2022 · OOD

Scaling Private Deep Learning with Low-Rank and Sparse Gradients
Ryuichi Ito, Seng Pei Liew, Tsubasa Takahashi, Yuya Sasaki, Makoto Onizuka · 06 Jul 2022

Exploring Lottery Ticket Hypothesis in Spiking Neural Networks
Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, Priyadarshini Panda · 04 Jul 2022

PrUE: Distilling Knowledge from Sparse Teacher Networks
Shaopu Wang, Xiaojun Chen, Mengzhen Kou, Jinqiao Shi · 03 Jul 2022

DRESS: Dynamic REal-time Sparse Subnets
Zhongnan Qu, Syed Shakib Sarwar, Xin Dong, Yuecheng Li, Huseyin Ekin Sumbul, B. D. Salvo · 01 Jul 2022 · 3DH

Cut Inner Layers: A Structured Pruning Strategy for Efficient U-Net GANs
Bo-Kyeong Kim, Shinkook Choi, Hancheol Park · 29 Jun 2022

Data Redaction from Pre-trained GANs
Zhifeng Kong, Kamalika Chaudhuri · 29 Jun 2022

Deep Neural Networks pruning via the Structured Perspective Regularization
M. Cacciola, A. Frangioni, Xinlin Li, Andrea Lodi · 28 Jun 2022 · 3DPC

PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance
Qingru Zhang, Simiao Zuo, Chen Liang, Alexander Bukharin, Pengcheng He, Weizhu Chen, T. Zhao · 25 Jun 2022

Arithmetic Circuits, Structured Matrices and (not so) Deep Learning
Atri Rudra · 24 Jun 2022

Understanding the effect of sparsity on neural networks robustness
Lukas Timpl, R. Entezari, Hanie Sedghi, Behnam Neyshabur, O. Saukh · 22 Jun 2022

Winning the Lottery Ahead of Time: Efficient Early Network Pruning
John Rachwan, Daniel Zügner, Bertrand Charpentier, Simon Geisler, Morgane Ayle, Stephan Günnemann · 21 Jun 2022

Fast Lossless Neural Compression with Integer-Only Discrete Flows
Siyu Wang, Jianfei Chen, Chongxuan Li, Jun Zhu, Bo Zhang · 17 Jun 2022 · MQ

Sparse Double Descent: Where Network Pruning Aggravates Overfitting
Zhengqi He, Zeke Xie, Quanzhi Zhu, Zengchang Qin · 17 Jun 2022

Can pruning improve certified robustness of neural networks?
Zhangheng Li, Tianlong Chen, Linyi Li, Bo Li, Zhangyang Wang · 15 Jun 2022 · AAML

Zeroth-Order Topological Insights into Iterative Magnitude Pruning
Aishwarya H. Balwani, J. Krzyston · 14 Jun 2022

LST: Ladder Side-Tuning for Parameter and Memory Efficient Transfer Learning
Yi-Lin Sung, Jaemin Cho, Joey Tianyi Zhou · 13 Jun 2022 · VLM

From Perception to Programs: Regularize, Overparameterize, and Amortize
Hao Tang, Kevin Ellis · 13 Jun 2022 · NAI

PAC-Net: A Model Pruning Approach to Inductive Transfer Learning
Sanghoon Myung, I. Huh, Wonik Jang, Jae Myung Choe, Jisu Ryu, Daesin Kim, Kee-Eung Kim, C. Jeong · 12 Jun 2022

A General Framework For Proving The Equivariant Strong Lottery Ticket Hypothesis
Damien Ferbach, Christos Tsirigotis, Gauthier Gidel, Avishek A. Bose · 09 Jun 2022

High-dimensional limit theorems for SGD: Effective dynamics and critical scaling
Gerard Ben Arous, Reza Gheissari, Aukosh Jagannath · 08 Jun 2022

Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm
Aidan Good, Jia-Huei Lin, Hannah Sieg, Mikey Ferguson, Xin Yu, Shandian Zhe, J. Wieczorek, Thiago Serra · 07 Jun 2022

Canonical convolutional neural networks
Lokesh Veeramacheneni, Moritz Wolter, Reinhard Klein, Jochen Garcke · 03 Jun 2022

ViNNPruner: Visual Interactive Pruning for Deep Learning
U. Schlegel, Samuel Schiegg, Daniel A. Keim · 31 May 2022 · VLM

Gator: Customizable Channel Pruning of Neural Networks with Gating
E. Passov, E. David, N. Netanyahu · 30 May 2022 · AAML

FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher Ré · 27 May 2022 · VLM

Diverse Lottery Tickets Boost Ensemble from a Single Pretrained Model
Sosuke Kobayashi, Shun Kiyono, Jun Suzuki, Kentaro Inui · 24 May 2022 · MoMe

Semi-Parametric Inducing Point Networks and Neural Processes
R. Rastogi, Yair Schiff, Alon Hacohen, Zhaozhi Li, I-Hsiang Lee, Yuntian Deng, M. Sabuncu, Volodymyr Kuleshov · 24 May 2022 · 3DPC

OPQ: Compressing Deep Neural Networks with One-shot Pruning-Quantization
Peng Hu, Xi Peng, Erik Cambria, M. Aly, Jie Lin · 23 May 2022 · MQ

What Do Compressed Multilingual Machine Translation Models Forget?
Alireza Mohammadshahi, Vassilina Nikoulina, Alexandre Berard, Caroline Brun, James Henderson, Laurent Besacier · 22 May 2022 · AI4CE

Sharp asymptotics on the compression of two-layer neural networks
Mohammad Hossein Amani, Simone Bombari, Marco Mondelli, Rattana Pukdee, Stefano Rini · 17 May 2022 · MLT

Perspectives on Incorporating Expert Feedback into Model Updates
Valerie Chen, Umang Bhatt, Hoda Heidari, Adrian Weller, Ameet Talwalkar · 13 May 2022

Revisiting Random Channel Pruning for Neural Network Compression
Yawei Li, Kamil Adamczewski, Wen Li, Shuhang Gu, Radu Timofte, Luc Van Gool · 11 May 2022

Neural Architecture Search using Property Guided Synthesis
Charles Jin, P. Phothilimthana, Sudip Roy · 08 May 2022

Convolutional and Residual Networks Provably Contain Lottery Tickets
R. Burkholz · 04 May 2022 · UQCV, MLT

Most Activation Functions Can Win the Lottery Without Excessive Depth
R. Burkholz · 04 May 2022 · MLT

Resource-efficient domain adaptive pre-training for medical images
Y. Mehmood, U. I. Bajwa, Xianfang Sun · 28 Apr 2022

Machines of finite depth: towards a formalization of neural networks
Pietro Vertechi, M. Bergomi · 27 Apr 2022 · PINN

Federated Progressive Sparsification (Purge, Merge, Tune)+
Dimitris Stripelis, Umang Gupta, Greg Ver Steeg, J. Ambite · 26 Apr 2022 · FedML

Attentive Fine-Grained Structured Sparsity for Image Restoration
Junghun Oh, Heewon Kim, Seungjun Nah, Chee Hong, Jonghyun Choi, Kyoung Mu Lee · 26 Apr 2022

Receding Neuron Importances for Structured Pruning
Mihai Suteu, Yike Guo · 13 Apr 2022

SRMD: Sparse Random Mode Decomposition
Nicholas Richardson, Hayden Schaeffer, Giang Tran · 12 Apr 2022

Machine Learning and Deep Learning -- A review for Ecologists
Maximilian Pichler, F. Hartig · 11 Apr 2022

LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification
Sharath Girish, Kamal Gupta, Saurabh Singh, Abhinav Shrivastava · 06 Apr 2022

Adversarial Robustness through the Lens of Convolutional Filters
Paul Gavrikov, J. Keuper · 05 Apr 2022

Bimodal Distributed Binarized Neural Networks
T. Rozen, Moshe Kimhi, Brian Chmiel, A. Mendelson, Chaim Baskin · 05 Apr 2022 · MQ

Dynamic Focus-aware Positional Queries for Semantic Segmentation
Haoyu He, Jianfei Cai, Zizheng Pan, Jing Liu, Jing Zhang, Dacheng Tao, Bohan Zhuang · 04 Apr 2022

Monarch: Expressive Structured Matrices for Efficient and Accurate Training
Tri Dao, Beidi Chen, N. Sohoni, Arjun D Desai, Michael Poli, Jessica Grogan, Alexander Liu, Aniruddh Rao, Atri Rudra, Christopher Ré · 01 Apr 2022

Structured Pruning Learns Compact and Accurate Models
Mengzhou Xia, Zexuan Zhong, Danqi Chen · 01 Apr 2022 · VLM