arXiv:2505.18413
LatentLLM: Attention-Aware Joint Tensor Compression
23 May 2025
T. Koike-Akino, Xiangyu Chen, Jing Liu, Ye Wang, Wang, Matthew Brand
Papers citing "LatentLLM: Attention-Aware Joint Tensor Compression" (2 of 2 papers shown)
Q-VLM: Post-training Quantization for Large Vision-Language Models
Changyuan Wang, Ziwei Wang, Xiuwei Xu, Yansong Tang, Jie Zhou, Jiwen Lu
10 Oct 2024
CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for Task-Aware Parameter-Efficient Fine-tuning
Yibo Yang, Xiaojie Li, Zhongzhu Zhou, Shuaiwen Leon Song, Jianlong Wu, Liqiang Nie, Guohao Li
07 Jun 2024