LatentLLM: Attention-Aware Joint Tensor Compression

23 May 2025
Toshiaki Koike-Akino, Xiangyu Chen, Jing Liu, Ye Wang, Wang, Matthew Brand
Main: 8 pages · Appendix: 28 pages · Bibliography: 1 page · 19 figures · 7 tables
Abstract

Modern foundation models such as large language models (LLMs) and large multi-modal models (LMMs) require a massive amount of computational and memory resources. We propose a new framework to convert such LLMs/LMMs into a reduced-dimension latent structure. Our method extends a local activation-aware tensor decomposition to a global attention-aware joint tensor decomposition. Our framework can significantly improve model accuracy over existing model compression methods when reducing the latent dimension to realize computationally and memory-efficient LLMs/LMMs. We show the benefit on several benchmarks, including multi-modal reasoning tasks.
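
The abstract describes extending a local activation-aware tensor decomposition to a global attention-aware joint decomposition. As a rough illustration of the local, per-layer starting point only (not the paper's attention-aware joint method), the sketch below compresses one linear layer's weight to a low-rank latent form while weighting the reconstruction error by calibration activations; the function name, whitening choice, and all details are illustrative assumptions, not the authors' implementation.

import torch

def activation_aware_lowrank(W, X, rank):
    """Sketch: activation-aware low-rank factorization of a linear layer.

    W: (d_out, d_in) weight matrix.
    X: (n_samples, d_in) calibration activations feeding this layer.
    rank: target latent dimension.

    Returns factors (A, B) with A @ B ~ W, chosen to minimize the error
    on the calibration activations rather than on W alone.
    """
    # Whitening factor S from activation statistics: X^T X = S S^T (Cholesky).
    cov = X.T @ X + 1e-6 * torch.eye(X.shape[1])
    S = torch.linalg.cholesky(cov)
    # Truncated SVD of the activation-weighted weight, then undo the whitening,
    # so that ||X W^T - X (A B)^T||_F is minimized over rank-r factors.
    U, s, Vh = torch.linalg.svd(W @ S, full_matrices=False)
    A = U[:, :rank] * s[:rank]            # (d_out, rank)
    B = Vh[:rank] @ torch.linalg.inv(S)   # (rank, d_in)
    return A, B

# Toy usage: compress a 512x512 layer to latent dimension 64.
W = torch.randn(512, 512)
X = torch.randn(1024, 512)
A, B = activation_aware_lowrank(W, X, rank=64)
err = torch.norm(X @ (A @ B).T - X @ W.T) / torch.norm(X @ W.T)
print(f"relative activation error: {err:.3f}")

The paper's contribution, per the abstract, is to move beyond such per-layer (local) criteria to a joint, attention-aware objective across tensors; that global step is not reproduced here.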

View on arXiv: https://arxiv.org/abs/2505.18413
@article{koike-akino2025_2505.18413,
  title={LatentLLM: Attention-Aware Joint Tensor Compression},
  author={Toshiaki Koike-Akino and Xiangyu Chen and Jing Liu and Ye Wang and Wang and Matthew Brand},
  journal={arXiv preprint arXiv:2505.18413},
  year={2025}
}