Once for Both: Single Stage of Importance and Sparsity Search for Vision Transformer Compression
arXiv:2403.15835 | 23 March 2024
Hancheng Ye, Chong Yu, Peng Ye, Renqiu Xia, Yansong Tang, Jiwen Lu, Tao Chen, Bo-Wen Zhang
Papers citing "Once for Both: Single Stage of Importance and Sparsity Search for Vision Transformer Compression" (7 / 7 papers shown):
TokenCarve: Information-Preserving Visual Token Compression in Multimodal Large Language Models
Xudong Tan, Peng Ye, Chongjun Tu, Jianjian Cao, Yaoxin Yang, Lin Zhang, Dongzhan Zhou, Tao Chen
VLM | 56 | 0 | 0 | 13 Mar 2025
Efficient Architecture Search via Bi-level Data Pruning
Chongjun Tu, Peng Ye, Weihao Lin, Hancheng Ye, Chong Yu, Tao Chen, Baopu Li, Wanli Ouyang
40 | 2 | 0 | 21 Dec 2023
DepGraph: Towards Any Structural Pruning
Gongfan Fang, Xinyin Ma, Mingli Song, Michael Bi Mi, Xinchao Wang
GNN | 91 | 256 | 0 | 30 Jan 2023
Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training
Renrui Zhang, Ziyu Guo, Rongyao Fang, Bingyan Zhao, Dong Wang, Yu Qiao, Hongsheng Li, Peng Gao
3DPC | 178 | 244 | 0 | 28 May 2022
Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM | 305 | 7,434 | 0 | 11 Nov 2021
Transformer in Transformer
Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang
ViT | 284 | 1,524 | 0 | 27 Feb 2021
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT | 277 | 3,623 | 0 | 24 Feb 2021