The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

9 March 2018
Jonathan Frankle, Michael Carbin
ArXiv (abs) · PDF · HTML
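Many of the citing works below build on the paper's core procedure, iterative magnitude pruning with rewinding: train the network, prune the smallest-magnitude weights, reset the surviving weights to their original initialization, and repeat. The following is a minimal sketch of that loop on a toy logistic-regression model in NumPy; the data, model size, and hyperparameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data (illustrative only, not from the paper).
X = rng.normal(size=(512, 64))
true_w = rng.normal(size=64)
y = (X @ true_w > 0).astype(float)

def train(w_init, mask, steps=300, lr=0.1):
    """Train a logistic-regression weight vector under a fixed binary mask."""
    w = w_init * mask
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        grad = X.T @ (p - y) / len(y)        # logistic-loss gradient
        w -= lr * grad * mask                # masked update keeps pruned weights at zero
    return w

# Iterative magnitude pruning with rewinding to the original initialization.
w0 = rng.normal(scale=0.1, size=64)   # saved initialization (the candidate "ticket" weights)
mask = np.ones(64)
prune_rate = 0.2                      # remove 20% of the surviving weights each round

for rnd in range(5):
    w = train(w0, mask)               # rewind: every round restarts from w0 under the current mask
    acc = np.mean(((X @ w) > 0) == y.astype(bool))
    print(f"round {rnd}: sparsity {1 - mask.mean():.2f}, train accuracy {acc:.3f}")
    # Prune the smallest-magnitude surviving weights before the next round.
    surviving = np.flatnonzero(mask)
    k = int(len(surviving) * prune_rate)
    drop = surviving[np.argsort(np.abs(w[surviving]))[:k]]
    mask[drop] = 0.0
```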

Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"

50 / 2,031 papers shown
Neural Network Module Decomposition and Recomposition
Hiroaki Kingetsu, Kenichi Kobayashi, Taiji Suzuki
85 · 11 · 0 · 25 Dec 2021

GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design
Sung Une Lee, Boming Xia, Yongan Zhang, Ang Li, Yingyan Lin
GNN · 139 · 52 · 0 · 22 Dec 2021

Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better
Sameer Bibikar, H. Vikalo, Zhangyang Wang, Xiaohan Chen
FedML · 86 · 104 · 0 · 18 Dec 2021

Automated Deep Learning: Neural Architecture Search Is Not the End
Xuanyi Dong, D. Kedziora, Katarzyna Musial, Bogdan Gabrys
126 · 27 · 0 · 16 Dec 2021

Visualizing the Loss Landscape of Winning Lottery Tickets
Robert Bain
UQCV · 70 · 3 · 0 · 16 Dec 2021

Pruning Coherent Integrated Photonic Neural Networks Using the Lottery Ticket Hypothesis
Sanmitra Banerjee, Mahdi Nikdast, S. Pasricha, Krishnendu Chakrabarty
65 · 10 · 0 · 14 Dec 2021
SNF: Filter Pruning via Searching the Proper Number of Filters
Pengkun Liu, Yaru Yue, Yanjun Guo, Xingxiang Tao, Xiaoguang Zhou
3DPC · 49 · 0 · 0 · 14 Dec 2021

From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression
Runxin Xu, Fuli Luo, Chengyu Wang, Baobao Chang, Jun Huang, Songfang Huang, Fei Huang
VLM · 64 · 26 · 0 · 14 Dec 2021

Towards a Unified Foundation Model: Jointly Pre-Training Transformers on Unpaired Images and Text
Qing Li, Boqing Gong, Huayu Chen, Dan Kondratyuk, Xianzhi Du, Ming-Hsuan Yang, Matthew A. Brown
ViT · 49 · 17 · 0 · 14 Dec 2021

On the Compression of Natural Language Models
S. Damadi
34 · 0 · 0 · 13 Dec 2021

Achieving Low Complexity Neural Decoders via Iterative Pruning
Vikrant Malik, Rohan Ghosh, Mehul Motani
38 · 2 · 0 · 11 Dec 2021
Effective dimension of machine learning models
Amira Abbas, David Sutter, Alessio Figalli, Stefan Woerner
121 · 18 · 0 · 09 Dec 2021

SHRIMP: Sparser Random Feature Models via Iterative Magnitude Pruning
Yuege Xie, Bobby Shi, Hayden Schaeffer, Rachel A. Ward
125 · 10 · 0 · 07 Dec 2021

i-SpaSP: Structured Neural Pruning via Sparse Signal Recovery
Cameron R. Wolfe, Anastasios Kyrillidis
55 · 1 · 0 · 07 Dec 2021

Enhanced Exploration in Neural Feature Selection for Deep Click-Through Rate Prediction Models via Ensemble of Gating Layers
L. Guan, Xia Xiao, Ming-yue Chen, Youlong Cheng
49 · 1 · 0 · 07 Dec 2021

Equal Bits: Enforcing Equally Distributed Binary Network Weights
Yun-qiang Li, S. Pintea, Jan van Gemert
MQ · 89 · 15 · 0 · 02 Dec 2021

Putting 3D Spatially Sparse Networks on a Diet
Junha Lee, Chris Choy, Jaesik Park
3DV · 77 · 3 · 0 · 02 Dec 2021
Training BatchNorm Only in Neural Architecture Search and Beyond
Yichen Zhu, Jie Du, Yuqin Zhu, Yi Wang, Zhicai Ou, Feifei Feng, Jian Tang
84 · 1 · 0 · 01 Dec 2021

Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models
Tri Dao, Beidi Chen, Kaizhao Liang, Jiaming Yang, Zhao Song, Atri Rudra, Christopher Ré
133 · 79 · 0 · 30 Nov 2021

Leveraging The Topological Consistencies of Learning in Deep Neural Networks
Stuart Synakowski, Fabian Benitez-Quiroz, Aleix M. Martinez
21 · 0 · 0 · 30 Nov 2021

Embedding Principle: a hierarchical structure of loss landscape of deep neural networks
Yaoyu Zhang, Yuqing Li, Zhongwang Zhang, Z. Xu
82 · 23 · 0 · 30 Nov 2021
How Well Do Sparse Imagenet Models Transfer?
Eugenia Iofinova, Alexandra Peste, Mark Kurtz, Dan Alistarh
125 · 41 · 0 · 26 Nov 2021

Intrinsic Dimension, Persistent Homology and Generalization in Neural Networks
Tolga Birdal, Aaron Lou, Leonidas Guibas, Umut Şimşekli
73 · 65 · 0 · 25 Nov 2021

Predicting the success of Gradient Descent for a particular Dataset-Architecture-Initialization (DAI)
Umang Jain, H. G. Ramaswamy
AI4CE · 34 · 1 · 0 · 25 Nov 2021

Robust Object Detection with Multi-input Multi-output Faster R-CNN
Sebastian Cygert, A. Czyżewski
ObjD · 32 · 2 · 0 · 25 Nov 2021

Understanding the Dynamics of DNNs Using Graph Modularity
Yao Lu, Wen Yang, Yunzhe Zhang, Zuohui Chen, Jinyin Chen, Qi Xuan, Zhen Wang, Xiaoniu Yang
88 · 12 · 0 · 24 Nov 2021
Hidden-Fold Networks: Random Recurrent Residuals Using Sparse Supermasks
Ángel López García-Arias, Masanori Hashimoto, Masato Motomura, Jaehoon Yu
61 · 5 · 0 · 24 Nov 2021

Pruning Self-attentions into Convolutional Layers in Single Path
Haoyu He, Jianfei Cai, Jing Liu, Zizheng Pan, Jing Zhang, Dacheng Tao, Bohan Zhuang
ViT · 95 · 40 · 0 · 23 Nov 2021

Automatic Mapping of the Best-Suited DNN Pruning Schemes for Real-Time Mobile Acceleration
Yifan Gong, Geng Yuan, Zheng Zhan, Wei Niu, Zhengang Li, ..., Sijia Liu, Bin Ren, Xue Lin, Xulong Tang, Yanzhi Wang
66 · 10 · 0 · 22 Nov 2021

Neural Fields in Visual Computing and Beyond
Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, Srinath Sridhar
3DH · 228 · 637 · 0 · 22 Nov 2021
DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion
Arthur Douillard, Alexandre Ramé, Guillaume Couairon, Matthieu Cord
CLL · 149 · 313 · 0 · 22 Nov 2021

Plant 'n' Seek: Can You Find the Winning Ticket?
Jonas Fischer, R. Burkholz
81 · 21 · 0 · 22 Nov 2021

On the Existence of Universal Lottery Tickets
R. Burkholz, Nilanjana Laha, Rajarshi Mukherjee, Alkis Gotovos
UQCV · 82 · 33 · 0 · 22 Nov 2021

Can depth-adaptive BERT perform better on binary classification tasks
Jing Fan, Xin Zhang, Sheng Zhang, Yan Pan, Lixiang Guo
MQ · 38 · 0 · 0 · 22 Nov 2021

Toward Compact Parameter Representations for Architecture-Agnostic Neural Network Compression
Yuezhou Sun, Wenlong Zhao, Lijun Zhang, Xiao Liu, Hui Guan, Matei A. Zaharia
77 · 0 · 0 · 19 Nov 2021
Training Neural Networks with Fixed Sparse Masks
Yi-Lin Sung, Varun Nair, Colin Raffel
FedML · 106 · 209 · 0 · 18 Nov 2021

deepstruct -- linking deep learning and graph theory
Julian Stier, Michael Granitzer
GNN · PINN · 118 · 2 · 0 · 12 Nov 2021

Edge-Cloud Polarization and Collaboration: A Comprehensive Survey for AI
Jiangchao Yao, Shengyu Zhang, Yang Yao, Feng Wang, Jianxin Ma, ..., Kun Kuang, Chao-Xiang Wu, Leilei Gan, Jingren Zhou, Hongxia Yang
149 · 103 · 0 · 11 Nov 2021

Prune Once for All: Sparse Pre-Trained Language Models
Ofir Zafrir, Ariel Larey, Guy Boudoukh, Haihao Shen, Moshe Wasserblat
VLM · 70 · 85 · 0 · 10 Nov 2021

Can Information Flows Suggest Targets for Interventions in Neural Circuits?
Praveen Venkatesh, Sanghamitra Dutta, Neil Mehta, P. Grover
AAML · 70 · 8 · 0 · 09 Nov 2021
Next2You: Robust Copresence Detection Based on Channel State Information
Mikhail Fomichev, L. F. Abanto-Leon, Maximilian Stiegler, Alejandro Molina, Jakob Link, M. Hollick
24 · 6 · 0 · 09 Nov 2021

Revisiting Methods for Finding Influential Examples
Karthikeyan K, Anders Søgaard
TDI · 71 · 31 · 0 · 08 Nov 2021

A Survey on Green Deep Learning
Jingjing Xu, Wangchunshu Zhou, Zhiyi Fu, Hao Zhou, Lei Li
VLM · 203 · 84 · 0 · 08 Nov 2021

How I Learned to Stop Worrying and Love Retraining
Max Zimmer, Christoph Spiegel, Sebastian Pokutta
CLL · 88 · 9 · 0 · 01 Nov 2021

Learning Pruned Structure and Weights Simultaneously from Scratch: an Attention based Approach
Qisheng He, Weisong Shi, Ming Dong
56 · 3 · 0 · 01 Nov 2021
You are caught stealing my winning lottery ticket! Making a lottery ticket claim its ownership
Xuxi Chen, Tianlong Chen, Zhenyu Zhang, Zhangyang Wang
WIGM · 77 · 23 · 0 · 30 Oct 2021

Gabor filter incorporated CNN for compression
Akihiro Imamura, N. Arizumi
CVBM · 52 · 2 · 0 · 29 Oct 2021

ADDS: Adaptive Differentiable Sampling for Robust Multi-Party Learning
Maoguo Gong, Yuan Gao, Yue Wu, A. K. Qin
FedML · OOD · 16 · 1 · 0 · 29 Oct 2021

NxMTransformer: Semi-Structured Sparsification for Natural Language Understanding via ADMM
Connor Holmes, Minjia Zhang, Yuxiong He, Bo Wu
60 · 20 · 0 · 28 Oct 2021

RGP: Neural Network Pruning through Its Regular Graph Structure
Zhuangzhi Chen, Jingyang Xiang, Yao Lu, Qi Xuan, Xiaoniu Yang
55 · 1 · 0 · 28 Oct 2021