ResearchTrend.AI

ADA-Tucker: Compressing Deep Neural Networks via Adaptive Dimension Adjustment Tucker Decomposition

18 June 2019
Zhisheng Zhong, Fangyin Wei, Zhouchen Lin, Chao Zhang
arXiv: 1906.07671

Papers citing "ADA-Tucker: Compressing Deep Neural Networks via Adaptive Dimension Adjustment Tucker Decomposition"

4 / 4 papers shown
Is Attention Better Than Matrix Decomposition?
Zhengyang Geng, Meng-Hao Guo, Hongxu Chen, Xia Li, Ke Wei, Zhouchen Lin
09 Sep 2021
Post-Training Quantization for Vision Transformer
Zhenhua Liu, Yunhe Wang, Kai Han, Siwei Ma, Wen Gao
ViT · MQ
27 Jun 2021
Tensor-Train Recurrent Neural Networks for Interpretable Multi-Way Financial Forecasting
Y. Xu, G. G. Calvi, Danilo P. Mandic
AI4TS
11 May 2021
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen
MQ
10 Feb 2017