ResearchTrend.AI

Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications (arXiv:1511.06530)

20 November 2015
Yong-Deok Kim, Eunhyeok Park, S. Yoo, Taelim Choi, Lu Yang, Dongjun Shin
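The paper above compresses each convolutional layer via a Tucker-2 decomposition of its kernel tensor (with ranks selected by variational Bayesian matrix factorization, followed by fine-tuning). As a minimal sketch of the underlying factorization only, not the authors' implementation: a truncated HOSVD along the two channel modes in plain NumPy. The ranks and the synthetic exactly-low-rank kernel are illustrative assumptions.

```python
import numpy as np

def unfold(t, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest.
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def tucker2(W, r_out, r_in):
    """Tucker-2 (truncated HOSVD) of a conv kernel W of shape
    (C_out, C_in, kh, kw), leaving the spatial modes intact."""
    U_out = np.linalg.svd(unfold(W, 0), full_matrices=False)[0][:, :r_out]
    U_in = np.linalg.svd(unfold(W, 1), full_matrices=False)[0][:, :r_in]
    # Core tensor: project W onto the leading channel subspaces.
    core = np.einsum('oikl,or,is->rskl', W, U_out, U_in)
    return core, U_out, U_in

def reconstruct(core, U_out, U_in):
    return np.einsum('rskl,or,is->oikl', core, U_out, U_in)

rng = np.random.default_rng(0)
# Build a kernel with exact multilinear rank (8, 8) so the sketch is lossless.
C_out, C_in, kh, kw, r = 64, 32, 3, 3, 8
core_true = rng.normal(size=(r, r, kh, kw))
A = np.linalg.qr(rng.normal(size=(C_out, r)))[0]
B = np.linalg.qr(rng.normal(size=(C_in, r)))[0]
W = np.einsum('rskl,or,is->oikl', core_true, A, B)

core, U_out, U_in = tucker2(W, r, r)
err = np.linalg.norm(reconstruct(core, U_out, U_in) - W) / np.linalg.norm(W)
full = W.size
compressed = core.size + U_out.size + U_in.size
print(f"relative error: {err:.2e}, params: {full} -> {compressed}")
```

With these shapes the parameter count drops from 18432 to 1344 (the k×k convolution now acts on 8 channels instead of 32, sandwiched between two 1×1 projections). Because the synthetic kernel has exact multilinear rank, the reconstruction error here is at floating-point level; for a real trained kernel the truncation is lossy and is typically followed by fine-tuning, as in the paper.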

Papers citing "Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications"

50 / 145 papers shown

 1. Forget the Data and Fine-Tuning! Just Fold the Network to Compress. Dong Wang, Haris Šikić, Lothar Thiele, O. Saukh. 17 Feb 2025.
 2. Causal Deep Learning. M. Alex O. Vasilescu. 03 Jan 2025. [CML]
 3. Task Singular Vectors: Reducing Task Interference in Model Merging. Antonio Andrea Gargiulo, Donato Crisostomi, Maria Sofia Bucarelli, Simone Scardapane, Fabrizio Silvestri, Emanuele Rodolà. 26 Nov 2024. [MoMe]
 4. Efficient Source-Free Time-Series Adaptation via Parameter Subspace Disentanglement. Gaurav Patel, Christopher Sandino, Behrooz Mahasseni, Ellen L. Zippi, Erdrin Azemi, Ali Moin, Juri Minxha. 03 Oct 2024. [TTA, AI4TS]
 5. Reweighted Solutions for Weighted Low Rank Approximation. David P. Woodruff, T. Yasuda. 04 Jun 2024.
 6. Post-Training Network Compression for 3D Medical Image Segmentation: Reducing Computational Efforts via Tucker Decomposition. Tobias Weber, Jakob Dexl, David Rügamer, Michael Ingrisch. 15 Apr 2024. [MedIm]
 7. Convolutional Neural Network Compression via Dynamic Parameter Rank Pruning. Manish Sharma, Jamison Heard, Eli Saber, Panos P. Markopoulos. 15 Jan 2024.
 8. Adaptive Compression-Aware Split Learning and Inference for Enhanced Network Efficiency. Akrit Mudvari, Antero Vainio, Iason Ofeidis, Sasu Tarkoma, Leandros Tassiulas. 09 Nov 2023.
 9. Robust Adversarial Defense by Tensor Factorization. Manish Bhattarai, M. C. Kaymak, Ryan Barron, Ben Nebgen, Kim Ø. Rasmussen, Boian Alexandrov. 03 Sep 2023. [AAML]
10. Deep learning-based denoising streamed from mobile phones improves speech-in-noise understanding for hearing aid users. P. U. Diehl, Hannes Zilly, Felix Sattler, Y. Singer, Kevin Kepp, ..., Paul Meyer-Rachner, A. Pudszuhn, V. Hofmann, M. Vormann, Elias Sprengel. 22 Aug 2023.
11. Quantization Aware Factorization for Deep Neural Network Compression. Daria Cherniuk, Stanislav Abukhovich, Anh-Huy Phan, Ivan Oseledets, A. Cichocki, Julia Gusak. 08 Aug 2023. [MQ]
12. Forward and Inverse Approximation Theory for Linear Temporal Convolutional Networks. Hao Jiang, Qianxiao Li. 29 May 2023. [AI4TS]
13. COMCAT: Towards Efficient Compression and Customization of Attention-Based Vision Models. Jinqi Xiao, Miao Yin, Yu Gong, Xiao Zang, Jian Ren, Bo Yuan. 26 May 2023. [VLM, ViT]
14. Compressing Neural Networks Using Tensor Networks with Exponentially Fewer Variational Parameters. Yong Qing, Ke Li, P. Zhou, Shi-Ju Ran. 10 May 2023.
15. Low Rank Optimization for Efficient Deep Learning: Making A Balance between Compact Architecture and Fast Training. Xinwei Ou, Zhangxin Chen, Ce Zhu, Yipeng Liu. 22 Mar 2023.
16. On Model Compression for Neural Networks: Framework, Algorithm, and Convergence Guarantee. Chenyang Li, Jihoon Chung, Mengnan Du, Haimin Wang, Xianlian Zhou, Bohao Shen. 13 Mar 2023.
17. Approximately Optimal Core Shapes for Tensor Decompositions. Mehrdad Ghadiri, Matthew Fahrbach, Gang Fu, Vahab Mirrokni. 08 Feb 2023.
18. Tensor Networks Meet Neural Networks: A Survey and Future Perspectives. Maolin Wang, Yu Pan, Zenglin Xu, Xiangli Yang, Guangxi Li, Andrzej Cichocki. 22 Jan 2023.
19. HALOC: Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks. Jinqi Xiao, Chengming Zhang, Yu Gong, Miao Yin, Yang Sui, Lizhi Xiang, Dingwen Tao, Bo Yuan. 20 Jan 2023.
20. GOHSP: A Unified Framework of Graph and Optimization-based Heterogeneous Structured Pruning for Vision Transformer. Miao Yin, Burak Uzkent, Yilin Shen, Hongxia Jin, Bo Yuan. 13 Jan 2023. [ViT]
21. CSTAR: Towards Compact and STructured Deep Neural Networks with Adversarial Robustness. Huy Phan, Miao Yin, Yang Sui, Bo Yuan, S. Zonouz. 04 Dec 2022. [AAML, GNN]
22. Towards Practical Control of Singular Values of Convolutional Layers. Alexandra Senderovich, Ekaterina Bulatova, Anton Obukhov, M. Rakhuba. 24 Nov 2022. [AAML]
23. Pruning Very Deep Neural Network Channels for Efficient Inference. Yihui He. 14 Nov 2022.
24. TDC: Towards Extremely Efficient CNNs on GPUs via Hardware-Aware Tucker Decomposition. Lizhi Xiang, Miao Yin, Chengming Zhang, Aravind Sukumaran-Rajam, P. Sadayappan, Bo Yuan, Dingwen Tao. 07 Nov 2022. [3DV]
25. Edge-Cloud Cooperation for DNN Inference via Reinforcement Learning and Supervised Learning. Tinghao Zhang, Zhijun Li, Yongrui Chen, Kwok-Yan Lam, Jun Zhao. 11 Oct 2022.
26. SVD-NAS: Coupling Low-Rank Approximation and Neural Architecture Search. Zhewen Yu, C. Bouganis. 22 Aug 2022.
27. Design Automation for Fast, Lightweight, and Effective Deep Learning Models: A Survey. Dalin Zhang, Kaixuan Chen, Yan Zhao, B. Yang, Li-Ping Yao, Christian S. Jensen. 22 Aug 2022.
28. Gator: Customizable Channel Pruning of Neural Networks with Gating. E. Passov, E. David, N. Netanyahu. 30 May 2022. [AAML]
29. A Unified Weight Initialization Paradigm for Tensorial Convolutional Neural Networks. Yu Pan, Zeyong Su, Ao Liu, Jingquan Wang, Nannan Li, Zenglin Xu. 28 May 2022.
30. Compression-aware Training of Neural Networks using Frank-Wolfe. Max Zimmer, Christoph Spiegel, Sebastian Pokutta. 24 May 2022.
31. OMAD: On-device Mental Anomaly Detection for Substance and Non-Substance Users. Emon Dey, Nirmalya Roy. 13 Apr 2022.
32. Compressing CNN Kernels for Videos Using Tucker Decompositions: Towards Lightweight CNN Applications. Tobias Engelhardt Rasmussen, Line H. Clemmensen, Andreas Baum. 10 Mar 2022.
33. Data-Efficient Structured Pruning via Submodular Optimization. Marwa El Halabi, Suraj Srinivas, Simon Lacoste-Julien. 09 Mar 2022.
34. Update Compression for Deep Neural Networks on the Edge. Bo Chen, A. Bakhshi, Gustavo E. A. P. A. Batista, Brian Ng, Tat-Jun Chin. 09 Mar 2022.
35. Energy awareness in low precision neural networks. Nurit Spingarn-Eliezer, Ron Banner, Elad Hoffer, Hilla Ben-Yaacov, T. Michaeli. 06 Feb 2022.
36. LegoDNN: Block-grained Scaling of Deep Neural Networks for Mobile Vision. Rui Han, Qinglong Zhang, C. Liu, Guoren Wang, Jian Tang, L. Chen. 18 Dec 2021.
37. A New Measure of Model Redundancy for Compressed Convolutional Neural Networks. Feiqing Huang, Yuefeng Si, Yao Zheng, Guodong Li. 09 Dec 2021.
38. Low-rank Tensor Decomposition for Compression of Convolutional Neural Networks Using Funnel Regularization. Bo-Shiuan Chu, Che-Rung Lee. 07 Dec 2021.
39. Nonlinear Tensor Ring Network. Xiao Peng Li, Qi Liu, Hayden Kwok-Hay So. 12 Nov 2021.
40. Reconstructing Pruned Filters using Cheap Spatial Transformations. Roy Miles, K. Mikolajczyk. 25 Oct 2021.
41. Adaptive Distillation: Aggregating Knowledge from Multiple Paths for Efficient Distillation. Sumanth Chennupati, Mohammad Mahdi Kamani, Zhongwei Cheng, Lin Chen. 19 Oct 2021.
42. Neural Network Pruning Through Constrained Reinforcement Learning. Shehryar Malik, Muhammad Umair Haider, O. Iqbal, M. Taj. 16 Oct 2021.
43. Semi-tensor Product-based Tensor Decomposition for Neural Network Compression. Hengling Zhao, Yipeng Liu, Xiaolin Huang, Ce Zhu. 30 Sep 2021.
44. Convolutional Neural Network Compression through Generalized Kronecker Product Decomposition. Marawan Gamal Abdel Hameed, Marzieh S. Tahaei, A. Mosleh, V. Nia. 29 Sep 2021.
45. Multi-Tensor Network Representation for High-Order Tensor Completion. Chang Nie, Huan Wang, Zhihui Lai. 09 Sep 2021.
46. Design and Scaffolded Training of an Efficient DNN Operator for Computer Vision on the Edge. Vinod Ganesan, Pratyush Kumar. 25 Aug 2021.
47. Tensor Yard: One-Shot Algorithm of Hardware-Friendly Tensor-Train Decomposition for Convolutional Neural Networks. Anuar Taskynov, Vladimir Korviakov, I. Mazurenko, Yepan Xiong. 09 Aug 2021.
48. Tensor Methods in Computer Vision and Deep Learning. Yannis Panagakis, Jean Kossaifi, Grigorios G. Chrysos, James Oldfield, M. Nicolaou, Anima Anandkumar, S. Zafeiriou. 07 Jul 2021.
49. Knowledge Distillation via Instance-level Sequence Learning. Haoran Zhao, Xin Sun, Junyu Dong, Zihe Dong, Qiong Li. 21 Jun 2021.
50. Layer Folding: Neural Network Depth Reduction using Activation Linearization. Amir Ben Dror, Niv Zehngut, Avraham Raviv, E. Artyomov, Ran Vitek, R. Jevnisek. 17 Jun 2021.