Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework

26 July 2021
Miao Yin, Yang Sui, Siyu Liao, Bo Yuan
arXiv: 2107.12422

Papers citing "Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework"

16 papers

Property Inheritance for Subtensors in Tensor Train Decompositions
HanQin Cai, Longxiu Huang
15 Apr 2025

MOGNET: A Mux-residual quantized Network leveraging Online-Generated weights
Van Thien Nguyen, William Guicquero, Gilles Sicard
MQ
17 Jan 2025

Quantization Aware Factorization for Deep Neural Network Compression
Daria Cherniuk, Stanislav Abukhovich, Anh-Huy Phan, Ivan Oseledets, A. Cichocki, Julia Gusak
MQ
08 Aug 2023

Learning Kernel-Modulated Neural Representation for Efficient Light Field Compression
Jinglei Shi, Yihong Xu, C. Guillemot
12 Jul 2023

COMCAT: Towards Efficient Compression and Customization of Attention-Based Vision Models
Jinqi Xiao, Miao Yin, Yu Gong, Xiao Zang, Jian Ren, Bo Yuan
VLM, ViT
26 May 2023

Learning-based Spatial and Angular Information Separation for Light Field Compression
Jinglei Shi, Yihong Xu, C. Guillemot
13 Apr 2023

On Model Compression for Neural Networks: Framework, Algorithm, and Convergence Guarantee
Chenyang Li, Jihoon Chung, Mengnan Du, Haimin Wang, Xianlian Zhou, Bohao Shen
13 Mar 2023

Tensor Networks Meet Neural Networks: A Survey and Future Perspectives
Maolin Wang, Yu Pan, Zenglin Xu, Xiangli Yang, Guangxi Li, Andrzej Cichocki
22 Jan 2023

HALOC: Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks
Jinqi Xiao, Chengming Zhang, Yu Gong, Miao Yin, Yang Sui, Lizhi Xiang, Dingwen Tao, Bo Yuan
20 Jan 2023

CSTAR: Towards Compact and STructured Deep Neural Networks with Adversarial Robustness
Huy Phan, Miao Yin, Yang Sui, Bo Yuan, S. Zonouz
AAML, GNN
04 Dec 2022

TDC: Towards Extremely Efficient CNNs on GPUs via Hardware-Aware Tucker Decomposition
Lizhi Xiang, Miao Yin, Chengming Zhang, Aravind Sukumaran-Rajam, P. Sadayappan, Bo Yuan, Dingwen Tao
3DV
07 Nov 2022

RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN
Huy Phan, Cong Shi, Yi Xie, Tian-Di Zhang, Zhuohang Li, Tianming Zhao, Jian-Dong Liu, Yan Wang, Ying-Cong Chen, Bo Yuan
AAML
22 Aug 2022

SVD-NAS: Coupling Low-Rank Approximation and Neural Architecture Search
Zhewen Yu, C. Bouganis
22 Aug 2022

DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks
Y. Fu, Haichuan Yang, Jiayi Yuan, Meng Li, Cheng Wan, Raghuraman Krishnamoorthi, Vikas Chandra, Yingyan Lin
02 Jun 2022

A Unified Weight Initialization Paradigm for Tensorial Convolutional Neural Networks
Yu Pan, Zeyong Su, Ao Liu, Jingquan Wang, Nannan Li, Zenglin Xu
28 May 2022

CHIP: CHannel Independence-based Pruning for Compact Neural Networks
Yang Sui, Miao Yin, Yi Xie, Huy Phan, S. Zonouz, Bo Yuan
VLM
26 Oct 2021