HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers

15 November 2022
Peiyan Dong, Mengshu Sun, Alec Lu, Yanyue Xie, Li-Yu Daisy Liu, Zhenglun Kong, Xin Meng, Z. Li, Xue Lin, Zhenman Fang, Yanzhi Wang
ViT

Papers citing "HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers"

28 papers shown

Image Recognition with Online Lightweight Vision Transformer: A Survey
Zherui Zhang, Rongtao Xu, Jie Zhou, Changwei Wang, Xingtian Pei, ..., Jiguang Zhang, Li Guo, Longxiang Gao, W. Xu, Shibiao Xu
ViT
06 May 2025

When Large Vision-Language Model Meets Large Remote Sensing Imagery: Coarse-to-Fine Text-Guided Token Pruning
Junwei Luo, Yingying Zhang, X. J. Yang, Kang Wu, Qi Zhu, Lei Liang, Jingdong Chen, Yansheng Li
10 Mar 2025

T-REX: A 68-567 μs/token, 0.41-3.95 μJ/token Transformer Accelerator with Reduced External Memory Access and Enhanced Hardware Utilization in 16nm FinFET
Seunghyun Moon, Mao Li, Gregory K. Chen, Phil Knag, R. Krishnamurthy, Mingoo Seok
01 Mar 2025

PAPI: Exploiting Dynamic Parallelism in Large Language Model Decoding with a Processing-In-Memory-Enabled Computing System
Yintao He, Haiyu Mao, Christina Giannoula, Mohammad Sadrosadati, Juan Gómez Luna, Huawei Li, Xiaowei Li, Ying Wang, O. Mutlu
21 Feb 2025

Deploying Foundation Model Powered Agent Services: A Survey
Wenchao Xu, Jinyu Chen, Peirong Zheng, Xiaoquan Yi, Tianyi Tian, ..., Quan Wan, Haozhao Wang, Yunfeng Fan, Qinliang Su, Xuemin Shen
AI4CE
18 Dec 2024

AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning
Yiwu Zhong, Zhuoming Liu, Yin Li, Liwei Wang
04 Dec 2024

Token Cropr: Faster ViTs for Quite a Few Tasks
Benjamin Bergner, C. Lippert, Aravindh Mahendran
ViT, VLM
01 Dec 2024

Exploring Token Pruning in Vision State Space Models
Zheng Zhan, Zhenglun Kong, Yifan Gong, Yushu Wu, Zichong Meng, ..., Xuan Shen, Stratis Ioannidis, Wei Niu, Pu Zhao, Yanzhi Wang
27 Sep 2024

Fit and Prune: Fast and Training-free Visual Token Pruning for Multi-modal Large Language Models
Weihao Ye, Qiong Wu, Wenhao Lin, Yiyi Zhou
VLM
16 Sep 2024

TReX- Reusing Vision Transformer's Attention for Efficient Xbar-based Computing
Abhishek Moitra, Abhiroop Bhattacharjee, Youngeun Kim, Priyadarshini Panda
ViT
22 Aug 2024

HG-PIPE: Vision Transformer Acceleration with Hybrid-Grained Pipeline
Qingyu Guo, Jiayong Wan, Songqiang Xu, Meng Li, Yuan Wang
25 Jul 2024

RO-SVD: A Reconfigurable Hardware Copyright Protection Framework for AIGC Applications
Zhuoheng Ran, Muhammad A. A. Abdelgawad, Zekai Zhang, Ray C. C. Cheung, Hong Yan
17 Jun 2024

P²-ViT: Power-of-Two Post-Training Quantization and Acceleration for Fully Quantized Vision Transformer
Huihong Shi, Xin Cheng, Wendong Mao, Zhongfeng Wang
MQ
30 May 2024

Accelerating ViT Inference on FPGA through Static and Dynamic Pruning
Dhruv Parikh, Shouyi Li, Bingyi Zhang, Rajgopal Kannan, Carl E. Busart, Viktor Prasanna
21 Mar 2024

LUM-ViT: Learnable Under-sampling Mask Vision Transformer for Bandwidth Limited Optical Signal Acquisition
Lingfeng Liu, Dong Ni, Hangjie Yuan
ViT
03 Mar 2024

EdgeQAT: Entropy and Distribution Guided Quantization-Aware Training for the Acceleration of Lightweight LLMs on the Edge
Xuan Shen, Zhenglun Kong, Changdi Yang, Zhaoyang Han, Lei Lu, ..., Zhihao Shu, Wei Niu, Miriam Leeser, Pu Zhao, Yanzhi Wang
MQ
16 Feb 2024

SCARIF: Towards Carbon Modeling of Cloud Servers with Accelerators
Shixin Ji, Zhuoping Yang, Xingzhen Chen, Stephen Cahoon, Jingtong Hu, Yiyu Shi, Alex K. Jones, Peipei Zhou
11 Jan 2024

Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge
Xuan Shen, Peiyan Dong, Lei Lu, Zhenglun Kong, Zhengang Li, Ming Lin, Chao Wu, Yanzhi Wang
MQ
09 Dec 2023

A Survey of Techniques for Optimizing Transformer Inference
Krishna Teja Chitty-Venkata, Sparsh Mittal, M. Emani, V. Vishwanath, Arun Somani
16 Jul 2023

Zero-TPrune: Zero-Shot Token Pruning through Leveraging of the Attention Graph in Pre-Trained Transformers
Hongjie Wang, Bhishma Dedhia, N. Jha
ViT, VLM
27 May 2023

NeuralMatrix: Compute the Entire Neural Networks with Linear Matrix Operations for Efficient Inference
Ruiqi Sun, Siwei Ye, Jie Zhao, Xin He, Yiran Li, An Zou
23 May 2023

Treasure What You Have: Exploiting Similarity in Deep Neural Networks for Efficient Video Processing
Hadjer Benmeziane, Halima Bouzidi, Hamza Ouarnoughi, Ozcan Ozturk, Smail Niar
10 May 2023

DeepMAD: Mathematical Architecture Design for Deep Convolutional Neural Network
Xuan Shen, Yaohua Wang, Ming Lin, Yi-Li Huang, Hao Tang, Xiuyu Sun, Yanzhi Wang
05 Mar 2023

Transformer in Transformer
Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang
ViT
27 Feb 2021

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT
24 Feb 2021

Instance Localization for Self-supervised Detection Pretraining
Ceyuan Yang, Zhirong Wu, Bolei Zhou, Stephen Lin
ViT, SSL
16 Feb 2021

I-BERT: Integer-only BERT Quantization
Sehoon Kim, A. Gholami, Z. Yao, Michael W. Mahoney, Kurt Keutzer
MQ
05 Jan 2021

A Decomposable Attention Model for Natural Language Inference
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit
06 Jun 2016