ResearchTrend.AI

Learning both Weights and Connections for Efficient Neural Networks
Song Han, Jeff Pool, J. Tran, W. Dally
8 June 2015 · arXiv:1506.02626 · CVBM

Papers citing "Learning both Weights and Connections for Efficient Neural Networks"

Showing 50 of 1,258 citing papers.
Class-Discriminative CNN Compression
  Yuchen Liu, D. Wentzlaff, S. Kung · 21 Oct 2021

Adaptive Distillation: Aggregating Knowledge from Multiple Paths for Efficient Distillation
  Sumanth Chennupati, Mohammad Mahdi Kamani, Zhongwei Cheng, Lin Chen · 19 Oct 2021

S-Cyc: A Learning Rate Schedule for Iterative Pruning of ReLU-based Networks
  Shiyu Liu, Chong Min John Tan, Mehul Motani · CLL · 17 Oct 2021

Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation
  Shichang Zhang, Yozen Liu, Yizhou Sun, Neil Shah · 17 Oct 2021

BNAS v2: Learning Architectures for Binary Networks with Empirical Improvements
  Dahyun Kim, Kunal Pratap Singh, Jonghyun Choi · MQ · 16 Oct 2021

Neural Network Pruning Through Constrained Reinforcement Learning
  Shehryar Malik, Muhammad Umair Haider, O. Iqbal, M. Taj · 16 Oct 2021

A Unified Speaker Adaptation Approach for ASR
  Yingzhu Zhao, Chongjia Ni, C. Leung, Shafiq Joty, Chng Eng Siong, B. Ma · CLL · 16 Oct 2021

Differentiable Network Pruning for Microcontrollers
  Edgar Liberis, Nicholas D. Lane · 15 Oct 2021

Joint Channel and Weight Pruning for Model Acceleration on Mobile Devices
  Tianli Zhao, Xi Sheryl Zhang, Wentao Zhu, Jiaxing Wang, Sen Yang, Ji Liu, Jian Cheng · 15 Oct 2021
bert2BERT: Towards Reusable Pretrained Language Models
  Cheng Chen, Yichun Yin, Lifeng Shang, Xin Jiang, Yujia Qin, Fengyu Wang, Zhi Wang, Xiao Chen, Zhiyuan Liu, Qun Liu · VLM · 14 Oct 2021

LiST: Lite Prompted Self-training Makes Parameter-Efficient Few-shot Learners
  Yaqing Wang, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, Jianfeng Gao · VLM, BDL · 12 Oct 2021

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
  Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong · UQCV, MLT · 12 Oct 2021

ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training
  Hui-Po Wang, Sebastian U. Stich, Yang He, Mario Fritz · FedML, AI4CE · 11 Oct 2021

Weight Evolution: Improving Deep Neural Networks Training through Evolving Inferior Weight Values
  Zhenquan Lin, K. Guo, Xiaofen Xing, Xiangmin Xu · ODL · 09 Oct 2021

LCS: Learning Compressible Subspaces for Adaptive Network Compression at Inference Time
  Elvis Nunez, Maxwell Horton, Anish K. Prabhu, Anurag Ranjan, Ali Farhadi, Mohammad Rastegari · 08 Oct 2021

End-to-End Supermask Pruning: Learning to Prune Image Captioning Models
  J. Tan, C. Chan, Joon Huang Chuah · VLM · 07 Oct 2021

One Timestep is All You Need: Training Spiking Neural Networks with Ultra Low Latency
  Sayeed Shafayet Chowdhury, Nitin Rathi, Kaushik Roy · 01 Oct 2021

Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition
  Marawan Gamal Abdel Hameed, Marzieh S. Tahaei, A. Mosleh, V. Nia · 29 Sep 2021

TSM: Temporal Shift Module for Efficient and Scalable Video Understanding on Edge Device
  Ji Lin, Chuang Gan, Kuan-Chieh Wang, Song Han · 27 Sep 2021
Deep Structured Instance Graph for Distilling Object Detectors
  Yixin Chen, Pengguang Chen, Shu Liu, Liwei Wang, Jiaya Jia · ObjD, ISeg · 27 Sep 2021

Neural network relief: a pruning algorithm based on neural activity
  Aleksandr Dekhovich, David Tax, M. Sluiter, Miguel A. Bessa · 22 Sep 2021

Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis
  Zeyuan Yin, Ye Yuan, Panfeng Guo, Pan Zhou · FedML · 22 Sep 2021

Structured Pattern Pruning Using Regularization
  Dongju Park, Geunghee Lee · 18 Sep 2021

RAPID-RL: A Reconfigurable Architecture with Preemptive-Exits for Efficient Deep-Reinforcement Learning
  Adarsh Kosta, Malik Aqeel Anwar, Priyadarshini Panda, A. Raychowdhury, Kaushik Roy · 16 Sep 2021

AdaPruner: Adaptive Channel Pruning and Effective Weights Inheritance
  Xiangcheng Liu, Jian Cao, Hongyi Yao, Wenyu Sun, Yuan Zhang · 14 Sep 2021

Prioritized Subnet Sampling for Resource-Adaptive Supernet Training
  Bohong Chen, Mingbao Lin, Rongrong Ji, Liujuan Cao · 12 Sep 2021

BGT-Net: Bidirectional GRU Transformer Network for Scene Graph Generation
  Naina Dhingra, Florian Ritter, A. Kunz · 11 Sep 2021

On the Compression of Neural Networks Using $\ell_0$-Norm Regularization and Weight Pruning
  F. Oliveira, E. Batista, R. Seara · 10 Sep 2021

Block Pruning For Faster Transformers
  François Lagunas, Ella Charlaix, Victor Sanh, Alexander M. Rush · VLM · 10 Sep 2021
Dynamic Collective Intelligence Learning: Finding Efficient Sparse Model via Refined Gradients for Pruned Weights
  Jang-Hyun Kim, Jayeon Yoo, Yeji Song, Kiyoon Yoo, Nojun Kwak · 10 Sep 2021

MATE: Multi-view Attention for Table Transformer Efficiency
  Julian Martin Eisenschlos, Maharshi Gor, Thomas Müller, William W. Cohen · LMTD · 09 Sep 2021

Fine-grained Data Distribution Alignment for Post-Training Quantization
  Mingliang Xu, Mingbao Lin, Mengzhao Chen, Ke Li, Yunhang Shen, Rongrong Ji, Yongjian Wu · MQ · 09 Sep 2021

Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning
  Prasetya Ajie Utama, N. Moosavi, Victor Sanh, Iryna Gurevych · AAML · 09 Sep 2021

Architecture Aware Latency Constrained Sparse Neural Networks
  Tianli Zhao, Qinghao Hu, Xiangyu He, Weixiang Xu, Jiaxing Wang, Cong Leng, Jian Cheng · 01 Sep 2021

Multistage Pruning of CNN Based ECG Classifiers for Edge Devices
  Xiaoling Li, R. Panicker, B. Cardiff, Deepu John · 31 Aug 2021

Edge-Cloud Collaborated Object Detection via Difficult-Case Discriminator
  Zhiqiang Cao, Zhijun Li, Pan Heng, Yongrui Chen, Daqi Xie, Jie Liu · 29 Aug 2021

Layer-wise Model Pruning based on Mutual Information
  Chun Fan, Jiwei Li, Xiang Ao, Fei Wu, Yuxian Meng, Xiaofei Sun · 28 Aug 2021

CoCo DistillNet: a Cross-layer Correlation Distillation Network for Pathological Gastric Cancer Segmentation
  Wenxuan Zou, Muyi Sun · 27 Aug 2021

Greenformers: Improving Computation and Memory Efficiency in Transformer Models via Low-Rank Approximation
  Samuel Cahyawijaya · 24 Aug 2021
Achieving on-Mobile Real-Time Super-Resolution with Neural Architecture and Pruning Search
  Zheng Zhan, Yifan Gong, Pu Zhao, Geng Yuan, Wei Niu, ..., Malith Jayaweera, David Kaeli, Bin Ren, Xue Lin, Yanzhi Wang · SupR · 18 Aug 2021

Differentiable Subset Pruning of Transformer Heads
  Jiaoda Li, Ryan Cotterell, Mrinmaya Sachan · 10 Aug 2021

Group Fisher Pruning for Practical Network Compression
  Liyang Liu, Shilong Zhang, Zhanghui Kuang, Aojun Zhou, Jingliang Xue, Xinjiang Wang, Yimin Chen, Wenming Yang, Q. Liao, Wayne Zhang · 02 Aug 2021

Training Energy-Efficient Deep Spiking Neural Networks with Single-Spike Hybrid Input Encoding
  Gourav Datta, Souvik Kundu, Peter A. Beerel · 26 Jul 2021

Towards Low-Latency Energy-Efficient Deep SNNs via Attention-Guided Compression
  Souvik Kundu, Gourav Datta, Massoud Pedram, Peter A. Beerel · 16 Jul 2021

DANCE: DAta-Network Co-optimization for Efficient Segmentation Model Training and Inference
  Chaojian Li, Wuyang Chen, Yuchen Gu, Tianlong Chen, Yonggan Fu, Zhangyang Wang, Yingyan Lin · 16 Jul 2021

Training Compact CNNs for Image Classification using Dynamic-coded Filter Fusion
  Mingbao Lin, Bohong Chen, Rongrong Ji · VLM · 14 Jul 2021

Data-Driven Low-Rank Neural Network Compression
  D. Papadimitriou, Swayambhoo Jain · BDL · 13 Jul 2021

Weight Reparametrization for Budget-Aware Network Pruning
  Robin Dupont, H. Sahbi, Guillaume Michel · 08 Jul 2021

Pool of Experts: Realtime Querying Specialized Knowledge in Massive Neural Networks
  Hakbin Kim, Dong-Wan Choi · 03 Jul 2021

Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
  Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, Decebal Constantin Mocanu · OOD · 28 Jun 2021