ResearchTrend.AI
Learning both Weights and Connections for Efficient Neural Networks
arXiv:1506.02626 · 8 June 2015
Song Han, Jeff Pool, J. Tran, W. Dally · [CVBM]

Papers citing "Learning both Weights and Connections for Efficient Neural Networks"

Showing 50 of 1,147 citing papers.
- SR-init: An interpretable layer pruning method · Hui Tang, Yao Lu, Qi Xuan · 14 Mar 2023
- Automatic Attention Pruning: Improving and Automating Model Pruning using Attentions · Kaiqi Zhao, Animesh Jain, Ming Zhao · 14 Mar 2023
- Can Adversarial Examples Be Parsed to Reveal Victim Model Information? · Yuguang Yao, Jiancheng Liu, Yifan Gong, Xiaoming Liu, Yanzhi Wang, Xinyu Lin, Sijia Liu · [AAML, MLAU] · 13 Mar 2023
- On Model Compression for Neural Networks: Framework, Algorithm, and Convergence Guarantee · Chenyang Li, Jihoon Chung, Mengnan Du, Haimin Wang, Xianlian Zhou, Bohao Shen · 13 Mar 2023
- DSD$^2$: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free? · Victor Quétu, Enzo Tartaglione · 02 Mar 2023
- Average of Pruning: Improving Performance and Stability of Out-of-Distribution Detection · Zhen Cheng, Fei Zhu, Xu-Yao Zhang, Cheng-Lin Liu · [MoMe, OODD] · 02 Mar 2023
- Balanced Training for Sparse GANs · Yite Wang, Jing Wu, N. Hovakimyan, Ruoyu Sun · 28 Feb 2023
- Fast as CHITA: Neural Network Pruning with Combinatorial Optimization · Riade Benbaki, Wenyu Chen, X. Meng, Hussein Hazimeh, Natalia Ponomareva, Zhe Zhao, Rahul Mazumder · 28 Feb 2023
- Structured Pruning of Self-Supervised Pre-trained Models for Speech Recognition and Understanding · Yifan Peng, Kwangyoun Kim, Felix Wu, Prashant Sridhar, Shinji Watanabe · 27 Feb 2023
- Can we avoid Double Descent in Deep Neural Networks? · Victor Quétu, Enzo Tartaglione · [AI4CE] · 26 Feb 2023
- A Unified Framework for Soft Threshold Pruning · Yanqing Chen, Zhengyu Ma, Wei Fang, Xiawu Zheng, Zhaofei Yu, Yonghong Tian · 25 Feb 2023
- LightCTS: A Lightweight Framework for Correlated Time Series Forecasting · Zhichen Lai, Dalin Zhang, Huan Li, Christian S. Jensen, Hua Lu, Yan Zhao · [AI4TS] · 23 Feb 2023
- Modular Deep Learning · Jonas Pfeiffer, Sebastian Ruder, Ivan Vulić, Edoardo Ponti · [MoMe, OOD] · 22 Feb 2023
- Considering Layerwise Importance in the Lottery Ticket Hypothesis · Benjamin Vandersmissen, José Oramas · 22 Feb 2023
- Oriented Object Detection in Optical Remote Sensing Images using Deep Learning: A Survey · Kunlin Wang, Zi Wang, Zhang Li, Ang Su, Xichao Teng, Minhao Liu, Qifeng Yu · [ObjD] · 21 Feb 2023
- Multiobjective Evolutionary Pruning of Deep Neural Networks with Transfer Learning for improving their Performance and Robustness · Javier Poyatos, Daniel Molina, Aitor Martínez, Javier Del Ser, Francisco Herrera · 20 Feb 2023
- HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers · Chen Liang, Haoming Jiang, Zheng Li, Xianfeng Tang, Bin Yin, Tuo Zhao · [VLM] · 19 Feb 2023
- On Handling Catastrophic Forgetting for Incremental Learning of Human Physical Activity on the Edge · Jingwei Zuo, George Arvanitakis, Hakim Hacid · 18 Feb 2023
- Less is More: The Influence of Pruning on the Explainability of CNNs · David Weber, F. Merkle, Pascal Schöttle, Stephan Schlögl, Martin Nocker · [FAtt] · 17 Feb 2023
- Learning a Consensus Sub-Network with Polarization Regularization and One Pass Training · Xiaoying Zhi, Varun Babbar, P. Sun, Fran Silavong, Ruibo Shi, Sean J. Moran · 17 Feb 2023
- VEGETA: Vertically-Integrated Extensions for Sparse/Dense GEMM Tile Acceleration on CPUs · Geonhwa Jeong, S. Damani, Abhimanyu Bambhaniya, Eric Qin, C. Hughes, S. Subramoney, Hyesoon Kim, T. Krishna · [MoE] · 17 Feb 2023
- DP-BART for Privatized Text Rewriting under Local Differential Privacy · Timour Igamberdiev, Ivan Habernal · 15 Feb 2023
- Simple Hardware-Efficient Long Convolutions for Sequence Modeling · Daniel Y. Fu, Elliot L. Epstein, Eric N. D. Nguyen, A. Thomas, Michael Zhang, Tri Dao, Atri Rudra, Christopher Ré · 13 Feb 2023
- Bi-directional Masks for Efficient N:M Sparse Training · Yuxin Zhang, Yiting Luo, Mingbao Lin, Mingliang Xu, Jingjing Xie, Rongrong Ji · 13 Feb 2023
- Revisiting Offline Compression: Going Beyond Factorization-based Methods for Transformer Language Models · Mohammadreza Banaei, Klaudia Bałazy, Artur Kasymov, R. Lebret, Jacek Tabor, Karl Aberer · [OffRL] · 08 Feb 2023
- What Matters In The Structured Pruning of Generative Language Models? · Michael Santacroce, Zixin Wen, Yelong Shen, Yuan-Fang Li · 07 Feb 2023
- Fast, Differentiable and Sparse Top-k: a Convex Analysis Perspective · Michael E. Sander, J. Puigcerver, Josip Djolonga, Gabriel Peyré, Mathieu Blondel · 02 Feb 2023
- An Empirical Study on the Transferability of Transformer Modules in Parameter-Efficient Fine-Tuning · Mohammad AkbarTajari, S. Rajaee, Mohammad Taher Pilehvar · 01 Feb 2023
- Towards Inference Efficient Deep Ensemble Learning · Ziyue Li, Kan Ren, Yifan Yang, Xinyang Jiang, Yuqing Yang, Dongsheng Li · [BDL] · 29 Jan 2023
- Open Problems in Applied Deep Learning · M. Raissi · [AI4CE] · 26 Jan 2023
- BiBench: Benchmarking and Analyzing Network Binarization · Haotong Qin, Mingyuan Zhang, Yifu Ding, Aoyu Li, Zhongang Cai, Ziwei Liu, Feng Yu, Xianglong Liu · [MQ, AAML] · 26 Jan 2023
- Rewarded meta-pruning: Meta Learning with Rewards for Channel Pruning · Athul Shibu, Abhishek Kumar, Heechul Jung, Dong-Gyu Lee · 26 Jan 2023
- Accelerating and Compressing Deep Neural Networks for Massive MIMO CSI Feedback · O. Erak, H. Abou-zeid · 20 Jan 2023
- HALOC: Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks · Jinqi Xiao, Chengming Zhang, Yu Gong, Miao Yin, Yang Sui, Lizhi Xiang, Dingwen Tao, Bo Yuan · 20 Jan 2023
- Getting Away with More Network Pruning: From Sparsity to Geometry and Linear Regions · Junyang Cai, Khai-Nguyen Nguyen, Nishant Shrestha, Aidan Good, Ruisen Tu, Xin Yu, Shandian Zhe, Thiago Serra · [MLT] · 19 Jan 2023
- RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of Quantized CNNs · A. M. Ribeiro-dos-Santos, João Dinis Ferreira, O. Mutlu, G. Falcão · [MQ] · 15 Jan 2023
- Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning · Huan Wang, Can Qin, Yue Bai, Yun Fu · 12 Jan 2023
- Pruning Compact ConvNets for Efficient Inference · Sayan Ghosh, Karthik Prasad, Xiaoliang Dai, Peizhao Zhang, Bichen Wu, Graham Cormode, Peter Vajda · [VLM] · 11 Jan 2023
- Explainability and Robustness of Deep Visual Classification Models · Jindong Gu · [AAML] · 03 Jan 2023
- SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot · Elias Frantar, Dan Alistarh · [VLM] · 02 Jan 2023
- Thermal Heating in ReRAM Crossbar Arrays: Challenges and Solutions · Kamilya Smagulova, M. Fouda, Ahmed M. Eltawil · 28 Dec 2022
- COLT: Cyclic Overlapping Lottery Tickets for Faster Pruning of Convolutional Neural Networks · Md. Ismail Hossain, Mohammed Rakib, M. M. L. Elahi, Nabeel Mohammed, Shafin Rahman · 24 Dec 2022
- Pruning On-the-Fly: A Recoverable Pruning Method without Fine-tuning · Danyang Liu, Xue Liu · 24 Dec 2022
- Mind Your Heart: Stealthy Backdoor Attack on Dynamic Deep Neural Network in Edge Computing · Tian Dong, Ziyuan Zhang, Han Qiu, Tianwei Zhang, Hewu Li, T. Wang · [AAML] · 22 Dec 2022
- Exploring Optimal Substructure for Out-of-distribution Generalization via Feature-targeted Model Pruning · Yingchun Wang, Jingcai Guo, Song Guo, Weizhan Zhang, Jiewei Zhang · [OODD] · 19 Dec 2022
- Training Lightweight Graph Convolutional Networks with Phase-field Models · H. Sahbi · 19 Dec 2022
- FSCNN: A Fast Sparse Convolution Neural Network Inference System · Bo Ji, Tianyi Chen · 17 Dec 2022
- Can We Find Strong Lottery Tickets in Generative Models? · Sangyeop Yeo, Yoojin Jang, Jy-yong Sohn, Dongyoon Han, Jaejun Yoo · 16 Dec 2022
- Gradient-based Intra-attention Pruning on Pre-trained Language Models · Ziqing Yang, Yiming Cui, Xin Yao, Shijin Wang · [VLM] · 15 Dec 2022
- AP: Selective Activation for De-sparsifying Pruned Neural Networks · Shiyu Liu, Rohan Ghosh, Dylan Tan, Mehul Motani · [AAML] · 09 Dec 2022