ResearchTrend.AI

COLT: Cyclic Overlapping Lottery Tickets for Faster Pruning of Convolutional Neural Networks
arXiv:2212.12770 · 24 December 2022
Md. Ismail Hossain, Mohammed Rakib, M. M. L. Elahi, Nabeel Mohammed, Shafin Rahman

Papers citing "COLT: Cyclic Overlapping Lottery Tickets for Faster Pruning of Convolutional Neural Networks"

40 papers
• 3D Point Cloud Network Pruning: When Some Weights Do not Matter (3DPC)
  Amrijit Biswas, M. Hossain, M. M. L. Elahi, A. Cheraghian, Fuad Rahman, Nabeel Mohammed, Shafin Rahman. 26 Aug 2024.
• FairGRAPE: Fairness-aware GRAdient Pruning mEthod for Face Attribute Classification (CVBM)
  Xiao-Ze Lin, Seungbae Kim, Jungseock Joo. 22 Jul 2022.
• Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot?
  Xiaolong Ma, Geng Yuan, Xuan Shen, Tianlong Chen, Xuxi Chen, ..., Ning Liu, Minghai Qin, Sijia Liu, Zhangyang Wang, Yanzhi Wang. 01 Jul 2021.
• Anchor Pruning for Object Detection (ObjD, 3DPC)
  Maxim Bonnaerens, Matthias Anton Freiberger, J. Dambre. 01 Apr 2021.
• Lottery Ticket Preserves Weight Correlation: Is It Desirable or Not?
  Ning Liu, Geng Yuan, Zhengping Che, Xuan Shen, Xiaolong Ma, Qing Jin, Jian Ren, Jian Tang, Sijia Liu, Yanzhi Wang. 19 Feb 2021.
• Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks (MQ)
  Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste. 31 Jan 2021.
• Pruning neural networks without any data by iteratively conserving synaptic flow
  Hidenori Tanaka, D. Kunin, Daniel L. K. Yamins, Surya Ganguli. 09 Jun 2020.
• Linear Mode Connectivity and the Lottery Ticket Hypothesis (MoMe)
  Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael Carbin. 11 Dec 2019.
• Quantization Networks (MQ)
  Jiwei Yang, Xu Shen, Jun Xing, Xinmei Tian, Houqiang Li, Bing Deng, Jianqiang Huang, Xiansheng Hua. 21 Nov 2019.
• What Do Compressed Deep Neural Networks Forget?
  Sara Hooker, Aaron Courville, Gregory Clark, Yann N. Dauphin, Andrea Frome. 13 Nov 2019.
• Implicit Regularization for Optimal Sparse Recovery
  Tomas Vaskevicius, Varun Kanade, Patrick Rebeschini. 11 Sep 2019.
• The Generalization-Stability Tradeoff In Neural Network Pruning
  Brian Bartoldson, Ari S. Morcos, Adrian Barbu, G. Erlebacher. 09 Jun 2019.
• One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers
  Ari S. Morcos, Haonan Yu, Michela Paganini, Yuandong Tian. 06 Jun 2019.
• Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask (UQCV)
  Hattie Zhou, Janice Lan, Rosanne Liu, J. Yosinski. 03 May 2019.
• The State of Sparsity in Deep Neural Networks
  Trevor Gale, Erich Elsen, Sara Hooker. 25 Feb 2019.
• Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers (MLT)
  Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang. 12 Nov 2018.
• A Convergence Theory for Deep Learning via Over-Parameterization (AI4CE, ODL)
  Zeyuan Allen-Zhu, Yuanzhi Li, Zhao Song. 09 Nov 2018.
• Functionality-Oriented Convolutional Filter Pruning
  Zhuwei Qin, Fuxun Yu, Chenchen Liu, Xiang Chen. 12 Oct 2018.
• SNIP: Single-shot Network Pruning based on Connection Sensitivity (VLM)
  Namhoon Lee, Thalaiyasingam Ajanthan, Philip Torr. 04 Oct 2018.
• Gradient Descent Provably Optimizes Over-parameterized Neural Networks (MLT, ODL)
  S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh. 04 Oct 2018.
• A Survey on Deep Transfer Learning
  Chuanqi Tan, F. Sun, Tao Kong, Wenchang Zhang, Chao Yang, Chunfang Liu. 06 Aug 2018.
• Do Better ImageNet Models Transfer Better? (OOD, MLT)
  Simon Kornblith, Jonathon Shlens, Quoc V. Le. 23 May 2018.
• The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
  Jonathan Frankle, Michael Carbin. 09 Mar 2018.
• On the Power of Over-parametrization in Neural Networks with Quadratic Activation
  S. Du, Jason D. Lee. 03 Mar 2018.
• MobileNetV2: Inverted Residuals and Linear Bottlenecks
  Mark Sandler, Andrew G. Howard, Menglong Zhu, A. Zhmoginov, Liang-Chieh Chen. 13 Jan 2018.
• To prune, or not to prune: exploring the efficacy of pruning for model compression
  Michael Zhu, Suyog Gupta. 05 Oct 2017.
• Learning Transferable Architectures for Scalable Image Recognition
  Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le. 21 Jul 2017.
• ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression
  Jian-Hao Luo, Jianxin Wu, Weiyao Lin. 20 Jul 2017.
• Variational Dropout Sparsifies Deep Neural Networks (BDL)
  Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov. 19 Jan 2017.
• Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning (3DV)
  Tien-Ju Yang, Yu-hsin Chen, Vivienne Sze. 16 Nov 2016.
• Pruning Filters for Efficient ConvNets (3DPC)
  Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf. 31 Aug 2016.
• Dynamic Network Surgery for Efficient DNNs
  Yiwen Guo, Anbang Yao, Yurong Chen. 16 Aug 2016.
• Deep Residual Learning for Image Recognition (MedIm)
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. 10 Dec 2015.
• Learning both Weights and Connections for Efficient Neural Networks (CVBM)
  Song Han, Jeff Pool, J. Tran, W. Dally. 08 Jun 2015.
• Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks (AIMat, ObjD)
  Shaoqing Ren, Kaiming He, Ross B. Girshick, Jian Sun. 04 Jun 2015.
• Distilling the Knowledge in a Neural Network (FedML)
  Geoffrey E. Hinton, Oriol Vinyals, J. Dean. 09 Mar 2015.
• In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning (AI4CE)
  Behnam Neyshabur, Ryota Tomioka, Nathan Srebro. 20 Dec 2014.
• How transferable are features in deep neural networks? (OOD)
  J. Yosinski, Jeff Clune, Yoshua Bengio, Hod Lipson. 06 Nov 2014.
• Very Deep Convolutional Networks for Large-Scale Image Recognition (FAtt, MDE)
  Karen Simonyan, Andrew Zisserman. 04 Sep 2014.
• ImageNet Large Scale Visual Recognition Challenge (VLM, ObjD)
  Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei. 01 Sep 2014.