ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Learning both Weights and Connections for Efficient Neural Networks

8 June 2015
Song Han
Jeff Pool
J. Tran
W. Dally
    CVBM

Papers citing "Learning both Weights and Connections for Efficient Neural Networks"

Showing 50 of 1,220 citing papers.
Improve Convolutional Neural Network Pruning by Maximizing Filter Variety
Nathan Hubens
M. Mancas
B. Gosselin
Marius Preda
T. Zaharia
26
2
0
11 Mar 2022
Shfl-BW: Accelerating Deep Neural Network Inference with Tensor-Core Aware Weight Pruning
Guyue Huang
Haoran Li
Minghai Qin
Fei Sun
Yufei Din
Yuan Xie
43
18
0
09 Mar 2022
The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks
Xin Yu
Thiago Serra
Srikumar Ramalingam
Shandian Zhe
49
48
0
09 Mar 2022
Pruning Graph Convolutional Networks to select meaningful graph frequencies for fMRI decoding
Yassine El Ouahidi
Hugo Tessier
G. Lioi
Nicolas Farrugia
Bastien Pasdeloup
Vincent Gripon
GNN
48
2
0
09 Mar 2022
Dual Lottery Ticket Hypothesis
Yue Bai
Haiquan Wang
Zhiqiang Tao
Kunpeng Li
Yun Fu
42
38
0
08 Mar 2022
Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance
Shiwei Liu
Yuesong Tian
Tianlong Chen
Li Shen
49
9
0
05 Mar 2022
Structured Pruning is All You Need for Pruning CNNs at Initialization
Yaohui Cai
Weizhe Hua
Hongzheng Chen
G. E. Suh
Christopher De Sa
Zhiru Zhang
CVBM
54
14
0
04 Mar 2022
DCT-Former: Efficient Self-Attention with Discrete Cosine Transform
Carmelo Scribano
Giorgia Franchini
M. Prato
Marko Bertogna
30
21
0
02 Mar 2022
Extracting Effective Subnetworks with Gumbel-Softmax
Robin Dupont
M. Alaoui
H. Sahbi
A. Lebois
22
6
0
25 Feb 2022
Learn From the Past: Experience Ensemble Knowledge Distillation
Chaofei Wang
Shaowei Zhang
S. Song
Gao Huang
42
4
0
25 Feb 2022
The rise of the lottery heroes: why zero-shot pruning is hard
Enzo Tartaglione
34
6
0
24 Feb 2022
Highly-Efficient Binary Neural Networks for Visual Place Recognition
Bruno Ferrarini
Michael Milford
Klaus D. McDonald-Maier
Shoaib Ehsan
29
7
0
24 Feb 2022
Rare Gems: Finding Lottery Tickets at Initialization
Kartik K. Sreenivasan
Jy-yong Sohn
Liu Yang
Matthew Grinde
Alliot Nagle
Hongyi Wang
Eric P. Xing
Kangwook Lee
Dimitris Papailiopoulos
37
42
0
24 Feb 2022
Distilled Neural Networks for Efficient Learning to Rank
F. M. Nardini
Cosimo Rulli
Salvatore Trani
Rossano Venturini
FedML
29
16
0
22 Feb 2022
HRel: Filter Pruning based on High Relevance between Activation Maps and Class Labels
C. Sarvani
Mrinmoy Ghorai
S. Dubey
S. H. Shabbeer Basha
VLM
63
37
0
22 Feb 2022
Online Learning for Orchestration of Inference in Multi-User End-Edge-Cloud Networks
Sina Shahhosseini
Dongjoo Seo
A. Kanduri
Tianyi Hu
Sung-Soo Lim
Bryan Donyanavard
Amir M. Rahmani
N. Dutt
52
17
0
21 Feb 2022
ICSML: Industrial Control Systems ML Framework for native inference using IEC 61131-3 code
Constantine Doumanidis
Prashant Hari Narayan Rajput
Michail Maniatakos
35
2
0
21 Feb 2022
Sparsity Winning Twice: Better Robust Generalization from More Efficient Training
Tianlong Chen
Zhenyu Zhang
Pengju Wang
Santosh Balachandra
Haoyu Ma
Zehao Wang
Zhangyang Wang
OOD
AAML
100
47
0
20 Feb 2022
Convolutional Network Fabric Pruning With Label Noise
Ilias Benjelloun
B. Lamiroy
E. Koudou
19
0
0
15 Feb 2022
Pruning Networks with Cross-Layer Ranking & k-Reciprocal Nearest Filters
Mingbao Lin
Liujuan Cao
Yuxin Zhang
Ling Shao
Chia-Wen Lin
Rongrong Ji
37
51
0
15 Feb 2022
Finding Dynamics Preserving Adversarial Winning Tickets
Xupeng Shi
Pengfei Zheng
Adam Ding
Yuan Gao
Weizhong Zhang
AAML
34
1
0
14 Feb 2022
SQuant: On-the-Fly Data-Free Quantization via Diagonal Hessian Approximation
Cong Guo
Yuxian Qiu
Jingwen Leng
Xiaotian Gao
Chen Zhang
Yunxin Liu
Fan Yang
Yuhao Zhu
Minyi Guo
MQ
74
73
0
14 Feb 2022
Deadwooding: Robust Global Pruning for Deep Neural Networks
Sawinder Kaur
Ferdinando Fioretto
Asif Salekin
43
4
0
10 Feb 2022
Quantization in Layer's Input is Matter
Daning Cheng
Wenguang Chen
MQ
16
0
0
10 Feb 2022
Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration
M. Rath
Alexandru Paul Condurache
35
8
0
08 Feb 2022
DistrEdge: Speeding up Convolutional Neural Network Inference on Distributed Edge Devices
Xueyu Hou
Yongjie Guan
Tao Han
Ning Zhang
26
41
0
03 Feb 2022
Comparative assessment of federated and centralized machine learning
Ibrahim Abdul Majeed
Sagar Kaushik
Aniruddha Bardhan
Venkata Siva Kumar Tadi
Hwang-Ki Min
K. Kumaraguru
Rajasekhara Reddy Duvvuru Muni
FedML
31
6
0
03 Feb 2022
Robust Binary Models by Pruning Randomly-initialized Networks
Chen Liu
Ziqi Zhao
Sabine Süsstrunk
Mathieu Salzmann
TPM
AAML
MQ
39
4
0
03 Feb 2022
Cyclical Pruning for Sparse Neural Networks
Suraj Srinivas
Andrey Kuzmin
Markus Nagel
M. V. Baalen
Andrii Skliar
Tijmen Blankevoort
48
13
0
02 Feb 2022
Automotive Parts Assessment: Applying Real-time Instance-Segmentation Models to Identify Vehicle Parts
S. Yusuf
Abdulmalik Aldawsari
R. Souissi
34
3
0
02 Feb 2022
Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank?
Sheikh Shams Azam
Seyyedali Hosseinalipour
Qiang Qiu
Christopher G. Brinton
FedML
64
20
0
01 Feb 2022
Signing the Supermask: Keep, Hide, Invert
Nils Koster
O. Grothe
Achim Rettinger
36
11
0
31 Jan 2022
On the Convergence of Heterogeneous Federated Learning with Arbitrary Adaptive Online Model Pruning
Hanhan Zhou
Tian-Shing Lan
Guru Venkataramani
Wenbo Ding
FedML
37
6
0
27 Jan 2022
Resource-efficient Deep Neural Networks for Automotive Radar Interference Mitigation
J. Rock
Wolfgang Roth
Máté Tóth
Paul Meissner
Franz Pernkopf
35
43
0
25 Jan 2022
Iterative Activation-based Structured Pruning
Kaiqi Zhao
Animesh Jain
Ming Zhao
47
0
0
22 Jan 2022
MetaV: A Meta-Verifier Approach to Task-Agnostic Model Fingerprinting
Xudong Pan
Yifan Yan
Mi Zhang
Min Yang
32
23
0
19 Jan 2022
Pruning-aware Sparse Regularization for Network Pruning
Nanfei Jiang
Xu Zhao
Chaoyang Zhao
Yongqi An
Ming Tang
Jinqiao Wang
3DPC
29
12
0
18 Jan 2022
UDC: Unified DNAS for Compressible TinyML Models
Igor Fedorov
Ramon Matas
Hokchhay Tann
Chu Zhou
Matthew Mattina
P. Whatmough
AI4CE
36
13
0
15 Jan 2022
Weighting and Pruning based Ensemble Deep Random Vector Functional Link Network for Tabular Data Classification
Qi-Shi Shi
Ponnuthurai Nagaratnam Suganthan
Rakesh Katuwal
18
22
0
15 Jan 2022
Recursive Least Squares for Training and Pruning Convolutional Neural Networks
Tianzong Yu
Chunyuan Zhang
Yuan Wang
Meng-tao Ma
Qingwei Song
46
1
0
13 Jan 2022
GhostNets on Heterogeneous Devices via Cheap Operations
Kai Han
Yunhe Wang
Chang Xu
Jianyuan Guo
Chunjing Xu
Enhua Wu
Qi Tian
24
103
0
10 Jan 2022
Problem-dependent attention and effort in neural networks with applications to image resolution and model selection
Chris Rohlfs
42
4
0
05 Jan 2022
Speedup deep learning models on GPU by taking advantage of efficient unstructured pruning and bit-width reduction
Marcin Pietroń
Dominik Zurek
35
13
0
28 Dec 2021
Over-the-Air Federated Multi-Task Learning Over MIMO Multiple Access Channels
Chen Zhong
Huiyuan Yang
Xiaojun Yuan
40
28
0
27 Dec 2021
Learning Robust and Lightweight Model through Separable Structured Transformations
Xian Wei
Yanhui Huang
Yang Xu
Mingsong Chen
Hai Lan
Yuanxiang Li
Zhongfeng Wang
Xuan Tang
OOD
32
0
0
27 Dec 2021
GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design
Sung Une Lee
Boming Xia
Yongan Zhang
Ang Li
Yingyan Lin
GNN
65
48
0
22 Dec 2021
RepMLPNet: Hierarchical Vision MLP with Re-parameterized Locality
Xiaohan Ding
Honghao Chen
Xinming Zhang
Jungong Han
Guiguang Ding
30
71
0
21 Dec 2021
Compact Multi-level Sparse Neural Networks with Input Independent Dynamic Rerouting
Minghai Qin
Tianyun Zhang
Fei Sun
Yen-kuang Chen
M. Fardad
Yanzhi Wang
Yuan Xie
60
0
0
21 Dec 2021
Load-balanced Gather-scatter Patterns for Sparse Deep Neural Networks
Fei Sun
Minghai Qin
Tianyun Zhang
Xiaolong Ma
Haoran Li
Junwen Luo
Zihao Zhao
Yen-kuang Chen
Yuan Xie
33
1
0
20 Dec 2021
Controlling the Quality of Distillation in Response-Based Network Compression
Vibhas Kumar Vats
David J. Crandall
26
1
0
19 Dec 2021