ResearchTrend.AI

arXiv:1712.05134
Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition
14 December 2017
Jinmian Ye
Linnan Wang
Guangxi Li
Di Chen
Shandian Zhe
Xinqi Chu
Zenglin Xu

Papers citing "Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition"

17 / 17 papers shown
Efficient Source-Free Time-Series Adaptation via Parameter Subspace Disentanglement
Gaurav Patel
Christopher Sandino
Behrooz Mahasseni
Ellen L. Zippi
Erdrin Azemi
Ali Moin
Juri Minxha
03 Oct 2024
A Tensor Decomposition Perspective on Second-order RNNs
M. Lizaire
Michael Rizvi-Martel
Marawan Gamal Abdel Hameed
Guillaume Rabusseau
07 Jun 2024
Tensor Networks Meet Neural Networks: A Survey and Future Perspectives
Maolin Wang
Y. Pan
Zenglin Xu
Xiangli Yang
Guangxi Li
Andrzej Cichocki
22 Jan 2023
Algorithm and Hardware Co-Design of Energy-Efficient LSTM Networks for Video Recognition with Hierarchical Tucker Tensor Decomposition
Yu Gong
Miao Yin
Lingyi Huang
Chunhua Deng
Yang Sui
Bo Yuan
05 Dec 2022
CSTAR: Towards Compact and STructured Deep Neural Networks with Adversarial Robustness
Huy Phan
Miao Yin
Yang Sui
Bo Yuan
S. Zonouz
04 Dec 2022
Design Automation for Fast, Lightweight, and Effective Deep Learning Models: A Survey
Dalin Zhang
Kaixuan Chen
Yan Zhao
B. Yang
Li-Ping Yao
Christian S. Jensen
22 Aug 2022
A Unified Weight Initialization Paradigm for Tensorial Convolutional Neural Networks
Y. Pan
Zeyong Su
Ao Liu
Jingquan Wang
Nannan Li
Zenglin Xu
28 May 2022
TensoRF: Tensorial Radiance Fields
Anpei Chen
Zexiang Xu
Andreas Geiger
Jingyi Yu
Hao Su
17 Mar 2022
More Efficient Sampling for Tensor Decomposition With Worst-Case Guarantees
Osman Asif Malik
14 Oct 2021
Semi-tensor Product-based Tensor Decomposition for Neural Network Compression
Hengling Zhao
Yipeng Liu
Xiaolin Huang
Ce Zhu
30 Sep 2021
Block-term Tensor Neural Networks
Jinmian Ye
Guangxi Li
Di Chen
Haiqin Yang
Shandian Zhe
Zenglin Xu
10 Oct 2020
A Variational Information Bottleneck Based Method to Compress Sequential Networks for Human Action Recognition
Ayush Srivastava
Oshin Dutta
A. Prathosh
Sumeet Agarwal
Jigyasa Gupta
03 Oct 2020
Sparse Linear Networks with a Fixed Butterfly Structure: Theory and Practice
Nir Ailon
Omer Leibovitch
Vineet Nair
17 Jul 2020
An Overview of Neural Network Compression
James O'Neill
05 Jun 2020
Compressing Recurrent Neural Networks Using Hierarchical Tucker Tensor Decomposition
Miao Yin
Siyu Liao
Xiao-Yang Liu
Xiaodong Wang
Bo Yuan
09 May 2020
Gate Decorator: Global Filter Pruning Method for Accelerating Deep Convolutional Neural Networks
Zhonghui You
Kun Yan
Jinmian Ye
Meng Ma
Ping Wang
18 Sep 2019
Compressing Recurrent Neural Networks with Tensor Ring for Action Recognition
Y. Pan
Jing Xu
Maolin Wang
Jinmian Ye
Fei Wang
Kun Bai
Zenglin Xu
19 Nov 2018