ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Home › Papers › 1608.03665 › Cited By
Learning Structured Sparsity in Deep Neural Networks

12 August 2016
W. Wen
Chunpeng Wu
Yandan Wang
Yiran Chen
Hai Helen Li

Papers citing "Learning Structured Sparsity in Deep Neural Networks"

Showing 50 of 331 citing papers.
  • Sparsity is All You Need: Rethinking Biological Pathway-Informed Approaches in Deep Learning · Isabella Caranzano, Corrado Pancotti, Cesare Rollo, Flavio Sartori, Pietro Liò, P. Fariselli, Tiziana Sanavia · [OOD, UQCV] · 07 May 2025
  • How to Train Your Metamorphic Deep Neural Network · Thomas Sommariva, Simone Calderara, Angelo Porrello · 07 May 2025
  • PROM: Prioritize Reduction of Multiplications Over Lower Bit-Widths for Efficient CNNs · Lukas Meiner, Jens Mehnert, A. P. Condurache · [MQ] · 06 May 2025
  • Representation Retrieval Learning for Heterogeneous Data Integration · Qi Xu, Annie Qu · 12 Mar 2025
  • DSMoE: Matrix-Partitioned Experts with Dynamic Routing for Computation-Efficient Dense LLMs · Minxuan Lv, Zhenpeng Su, Leiyu Pan, Yizhe Xiong, Zijia Lin, ..., Guiguang Ding, Cheng Luo, Di Zhang, Kun Gai, Songlin Hu · [MoE] · 18 Feb 2025
  • Forget the Data and Fine-Tuning! Just Fold the Network to Compress · Dong Wang, Haris Šikić, Lothar Thiele, O. Saukh · 17 Feb 2025
  • Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries · Chris Kolb, T. Weber, Bernd Bischl, David Rügamer · 04 Feb 2025
  • Information Consistent Pruning: How to Efficiently Search for Sparse Networks? · Soheil Gharatappeh, Salimeh Yasaei Sekeh · 28 Jan 2025
  • Tailored-LLaMA: Optimizing Few-Shot Learning in Pruned LLaMA Models with Task-Specific Prompts · Danyal Aftab, Steven Davy · [ALM] · 10 Jan 2025
  • Edge-device Collaborative Computing for Multi-view Classification · Marco Palena, Tania Cerquitelli, Carla Fabiana Chiasserini · 24 Sep 2024
  • HESSO: Towards Automatic Efficient and User Friendly Any Neural Network Training and Pruning · Tianyi Chen, Xiaoyi Qu, David Aponte, Colby R. Banbury, Jongwoo Ko, Tianyu Ding, Yong Ma, Vladimir Lyapunov, Ilya Zharkov, Luming Liang · 11 Sep 2024
  • Mask in the Mirror: Implicit Sparsification · Tom Jacobs, R. Burkholz · 19 Aug 2024
  • AdapMTL: Adaptive Pruning Framework for Multitask Learning Model · Mingcan Xiang, Steven Jiaxun Tang, Qizheng Yang, Hui Guan, Tongping Liu · [VLM] · 07 Aug 2024
  • Characterizing Disparity Between Edge Models and High-Accuracy Base Models for Vision Tasks · Zhenyu Wang, S. Nirjon · 13 Jul 2024
  • Isomorphic Pruning for Vision Models · Gongfan Fang, Xinyin Ma, Michael Bi Mi, Xinchao Wang · [VLM, ViT] · 05 Jul 2024
  • Effective Interplay between Sparsity and Quantization: From Theory to Practice · Simla Burcu Harma, Ayan Chakraborty, Elizaveta Kostenok, Danila Mishin, Dongho Ha, ..., Martin Jaggi, Ming Liu, Yunho Oh, Suvinay Subramanian, Amir Yazdanbakhsh · [MQ] · 31 May 2024
  • Scorch: A Library for Sparse Deep Learning · Bobby Yan, Alexander J. Root, Trevor Gale, David Broman, Fredrik Kjolstad · 27 May 2024
  • Iterative Filter Pruning for Concatenation-based CNN Architectures · Svetlana Pavlitska, Oliver Bagge, Federico Nicolás Peccia, Toghrul Mammadov, J. Marius Zöllner · [VLM, 3DPC] · 04 May 2024
  • Rapid Deployment of DNNs for Edge Computing via Structured Pruning at Initialization · Bailey J. Eccles, Leon Wong, Blesson Varghese · 22 Apr 2024
  • The Unreasonable Ineffectiveness of the Deeper Layers · Andrey Gromov, Kushal Tirumala, Hassan Shapourian, Paolo Glorioso, Daniel A. Roberts · 26 Mar 2024
  • SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization · T. Yasuda, Kyriakos Axiotis, Gang Fu, M. Bateni, Vahab Mirrokni · 27 Feb 2024
  • DTMM: Deploying TinyML Models on Extremely Weak IoT Devices with Pruning · Lixiang Han, Zhen Xiao, Zhenjiang Li · 17 Jan 2024
  • Stochastic Subnetwork Annealing: A Regularization Technique for Fine Tuning Pruned Subnetworks · Tim Whitaker, Darrell Whitley · 16 Jan 2024
  • GD doesn't make the cut: Three ways that non-differentiability affects neural network training · Siddharth Krishna Kumar · [AAML] · 16 Jan 2024
  • MaxQ: Multi-Axis Query for N:M Sparsity Network · Jingyang Xiang, Siqi Li, Junhao Chen, Zhuangzhi Chen, Tianxin Huang, Linpeng Peng, Yong-Jin Liu · 12 Dec 2023
  • Towards Sobolev Pruning · Neil Kichler, Sher Afghan, U. Naumann · 06 Dec 2023
  • Pursing the Sparse Limitation of Spiking Deep Learning Structures · Hao-Ran Cheng, Jiahang Cao, Erjia Xiao, Mengshu Sun, Le Yang, Jize Zhang, Xue Lin, B. Kailkhura, Kaidi Xu, Renjing Xu · 18 Nov 2023
  • Statistical learning by sparse deep neural networks · Felix Abramovich · [BDL] · 15 Nov 2023
  • Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs · Yuxin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, Rongrong Ji · [SyDa] · 13 Oct 2023
  • Filter Pruning For CNN With Enhanced Linear Representation Redundancy · Bojue Wang, Chun-Xia Ma, Bin Liu, Nianbo Liu, Jinqi Zhu · 10 Oct 2023
  • CAIT: Triple-Win Compression towards High Accuracy, Fast Inference, and Favorable Transferability For ViTs · Ao Wang, Hui Chen, Zijia Lin, Sicheng Zhao, J. Han, Guiguang Ding · [ViT] · 27 Sep 2023
  • Maestro: Uncovering Low-Rank Structures via Trainable Decomposition · Samuel Horváth, Stefanos Laskaridis, Shashank Rajput, Hongyi Wang · [BDL] · 28 Aug 2023
  • Approximate Computing Survey, Part II: Application-Specific & Architectural Approximation Techniques and Applications · Vasileios Leon, Muhammad Abdullah Hanif, Giorgos Armeniakos, Xun Jiao, Muhammad Shafique, K. Pekmestzi, Dimitrios Soudris · 20 Jul 2023
  • Integrated multi-operand optical neurons for scalable and hardware-efficient deep learning · Chenghao Feng, Jiaqi Gu, Hanqing Zhu, R. Tang, Shupeng Ning, M. Hlaing, J. Midkiff, Sourabh Jain, David Z. Pan, Ray T. Chen · 31 May 2023
  • Masked Bayesian Neural Networks: Theoretical Guarantee and its Posterior Inference · Insung Kong, Dongyoon Yang, Jongjin Lee, Ilsang Ohn, Gyuseung Baek, Yongdai Kim · [BDL] · 24 May 2023
  • SFP: Spurious Feature-targeted Pruning for Out-of-Distribution Generalization · Yingchun Wang, Jingcai Guo, Yi Liu, Song Guo, Weizhan Zhang, Xiangyong Cao, Qinghua Zheng · [AAML, OODD] · 19 May 2023
  • SPADE: Sparse Pillar-based 3D Object Detection Accelerator for Autonomous Driving · Minjae Lee, Seongmin Park, Hyung-Se Kim, Minyong Yoon, Jangwhan Lee, Junwon Choi, Nam Sung Kim, Mingu Kang, Jungwook Choi · [3DPC] · 12 May 2023
  • Cuttlefish: Low-Rank Model Training without All the Tuning · Hongyi Wang, Saurabh Agarwal, Pongsakorn U-chupala, Yoshiki Tanaka, Eric P. Xing, Dimitris Papailiopoulos · [OffRL] · 04 May 2023
  • Training Large Language Models Efficiently with Sparsity and Dataflow · V. Srinivasan, Darshan Gandhi, Urmish Thakker, R. Prabhakar · [MoE] · 11 Apr 2023
  • Surrogate Lagrangian Relaxation: A Path To Retrain-free Deep Neural Network Pruning · Shangli Zhou, Mikhail A. Bragin, Lynn Pepin, Deniz Gurevin, Fei Miao, Caiwen Ding · 08 Apr 2023
  • NTK-SAP: Improving neural network pruning by aligning training dynamics · Yite Wang, Dawei Li, Ruoyu Sun · 06 Apr 2023
  • Physics-aware Roughness Optimization for Diffractive Optical Neural Networks · Shangli Zhou, Yingjie Li, Minhan Lou, Weilu Gao, Zhijie Shi, Cunxi Yu, Caiwen Ding · 04 Apr 2023
  • Optimizing data-flow in Binary Neural Networks · Lorenzo Vorabbi, Davide Maltoni, Stefano Santi · [MQ] · 03 Apr 2023
  • SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications · Abdelrahman M. Shaker, Muhammad Maaz, H. Rasheed, Salman Khan, Ming Yang, Fahad Shahbaz Khan · [ViT] · 27 Mar 2023
  • Energy-efficient Task Adaptation for NLP Edge Inference Leveraging Heterogeneous Memory Architectures · Zirui Fu, Aleksandre Avaliani, M. Donato · 25 Mar 2023
  • PowerPruning: Selecting Weights and Activations for Power-Efficient Neural Network Acceleration · Richard Petri, Grace Li Zhang, Yiran Chen, Ulf Schlichtmann, Bing Li · 24 Mar 2023
  • Towards a Smaller Student: Capacity Dynamic Distillation for Efficient Image Retrieval · Yi Xie, Huaidong Zhang, Xuemiao Xu, Jianqing Zhu, Shengfeng He · [VLM] · 16 Mar 2023
  • On Model Compression for Neural Networks: Framework, Algorithm, and Convergence Guarantee · Chenyang Li, Jihoon Chung, Mengnan Du, Haimin Wang, Xianlian Zhou, Bohao Shen · 13 Mar 2023
  • Balanced Training for Sparse GANs · Yite Wang, Jing Wu, N. Hovakimyan, Ruoyu Sun · 28 Feb 2023
  • Model-based feature selection for neural networks: A mixed-integer programming approach · Shudian Zhao, Calvin Tsay, Jan Kronqvist · 20 Feb 2023