ResearchTrend.AI
LayerCollapse: Adaptive compression of neural networks
arXiv:2311.17943, v2 (latest)
29 November 2023
Soheil Zibakhsh Shabgahi, Mohammad Soheil Shariff, F. Koushanfar
Topic: AI4CE
ArXiv (abs) | PDF | HTML
Papers citing "LayerCollapse: Adaptive compression of neural networks"
31 / 31 papers shown

  1. DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks
     Y. Fu, Haichuan Yang, Jiayi Yuan, Meng Li, Cheng Wan, Raghuraman Krishnamoorthi, Vikas Chandra, Yingyan Lin
     02 Jun 2022

  2. Selective Network Linearization for Efficient Private Inference
     Minsu Cho, Ameya Joshi, S. Garg, Brandon Reagen, Chinmay Hegde
     04 Feb 2022

  3. Pruning-aware Sparse Regularization for Network Pruning
     Nanfei Jiang, Xu Zhao, Chaoyang Zhao, Yongqi An, Ming Tang, Jinqiao Wang
     18 Jan 2022 | Tags: 3DPC

  4. Avoiding Overfitting: A Survey on Regularization Methods for Convolutional Neural Networks
     C. F. G. Santos, João Paulo Papa
     10 Jan 2022

  5. MLP-Mixer: An all-MLP Architecture for Vision
     Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
     04 May 2021

  6. DeepReDuce: ReLU Reduction for Fast Private Inference
     N. Jha, Zahra Ghodsi, S. Garg, Brandon Reagen
     02 Mar 2021

  7. MaxDropout: Deep Neural Network Regularization Based on Maximum Output Values
     C. F. G. Santos, Danilo Colombo, Mateus Roder, João Paulo Papa
     27 Jul 2020

  8. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
     Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu
     23 Oct 2019 | Tags: AIMat

  9. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
     Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut
     26 Sep 2019 | Tags: SSL, AIMat

 10. Reducing Transformer Depth on Demand with Structured Dropout
     Angela Fan, Edouard Grave, Armand Joulin
     25 Sep 2019

 11. Fixing the train-test resolution discrepancy
     Hugo Touvron, Andrea Vedaldi, Matthijs Douze, Hervé Jégou
     14 Jun 2019

 12. CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features
     Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Y. Yoo
     13 May 2019 | Tags: OOD

 13. DropBlock: A regularization method for convolutional networks
     Golnaz Ghiasi, Tsung-Yi Lin, Quoc V. Le
     30 Oct 2018

 14. Quantizing deep convolutional networks for efficient inference: A whitepaper
     Raghuraman Krishnamoorthi
     21 Jun 2018 | Tags: MQ

 15. Neural Network Acceptability Judgments
     Alex Warstadt, Amanpreet Singh, Samuel R. Bowman
     31 May 2018

 16. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
     Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
     20 Apr 2018 | Tags: ELM

 17. Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
     Benoit Jacob, S. Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, Dmitry Kalenichenko
     15 Dec 2017 | Tags: MQ

 18. Improved Regularization of Convolutional Neural Networks with Cutout
     Terrance Devries, Graham W. Taylor
     15 Aug 2017

 19. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference
     Adina Williams, Nikita Nangia, Samuel R. Bowman
     18 Apr 2017

 20. Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning
     Stefan Elfwing, E. Uchibe, Kenji Doya
     10 Feb 2017

 21. Pointer Sentinel Mixture Models
     Stephen Merity, Caiming Xiong, James Bradbury, R. Socher
     26 Sep 2016 | Tags: RALM

 22. Pruning Filters for Efficient ConvNets
     Hao Li, Asim Kadav, Igor Durdanovic, H. Samet, H. Graf
     31 Aug 2016 | Tags: 3DPC

 23. Learning Structured Sparsity in Deep Neural Networks
     W. Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li
     12 Aug 2016

 24. Gaussian Error Linear Units (GELUs)
     Dan Hendrycks, Kevin Gimpel
     27 Jun 2016

 25. SQuAD: 100,000+ Questions for Machine Comprehension of Text
     Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
     16 Jun 2016 | Tags: RALM

 26. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
     Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
     23 Nov 2015

 27. Learning both Weights and Connections for Efficient Neural Networks
     Song Han, Jeff Pool, J. Tran, W. Dally
     08 Jun 2015 | Tags: CVBM

 28. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
     Sergey Ioffe, Christian Szegedy
     11 Feb 2015 | Tags: OOD

 29. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
     Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
     06 Feb 2015 | Tags: VLM

 30. Compressing Deep Convolutional Networks using Vector Quantization
     Yunchao Gong, Liu Liu, Ming Yang, Lubomir D. Bourdev
     18 Dec 2014 | Tags: MQ

 31. ImageNet Large Scale Visual Recognition Challenge
     Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
     01 Sep 2014 | Tags: VLM, ObjD