M²-ViT: Accelerating Hybrid Vision Transformers with Two-Level Mixed Quantization

10 October 2024 · arXiv:2410.09113
Yanbiao Liang, Huihong Shi, Zhongfeng Wang
MQ

Papers citing "M²-ViT: Accelerating Hybrid Vision Transformers with Two-Level Mixed Quantization"

12 / 12 papers shown

ViTALiTy: Unifying Low-rank and Sparse Approximation for Vision Transformer Acceleration with a Linear Taylor Attention
Jyotikrishna Dass, Shang Wu, Huihong Shi, Chaojian Li, Zhifan Ye, Zhongfeng Wang, Yingyan Lin
09 Nov 2022 · 40 · 54 · 0

ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design [ViT]
Haoran You, Zhanyi Sun, Huihong Shi, Zhongzhi Yu, Yang Zhao, Yongan Zhang, Chaojian Li, Baopu Li, Yingyan Lin
18 Oct 2022 · 60 · 83 · 0

Auto-ViT-Acc: An FPGA-Aware Automatic Acceleration Framework for Vision Transformer with Mixed-Scheme Quantization [ViT, MQ]
Zechao Li, Mengshu Sun, Alec Lu, Haoyu Ma, Geng Yuan, ..., Yanyu Li, M. Leeser, Zhangyang Wang, Xue Lin, Zhenman Fang
10 Aug 2022 · 42 · 54 · 0

I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference [MQ]
Zhikai Li, Qingyi Gu
04 Jul 2022 · 92 · 103 · 0

Row-wise Accelerator for Vision Transformer
Hong-Yi Wang, Tian-Sheuan Chang
09 May 2022 · 55 · 16 · 0

FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer [ViT, MQ]
Yang Lin, Tianyu Zhang, Peiqin Sun, Zheng Li, Shuchang Zhou
27 Nov 2021 · 50 · 154 · 0

Swin Transformer: Hierarchical Vision Transformer using Shifted Windows [ViT]
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, B. Guo
25 Mar 2021 · 402 · 21,347 · 0

Training data-efficient image transformers & distillation through attention [ViT]
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou
23 Dec 2020 · 345 · 6,731 · 0

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale [ViT]
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby
22 Oct 2020 · 530 · 40,739 · 0

Additive Powers-of-Two Quantization: An Efficient Non-uniform Discretization for Neural Networks [MQ]
Yuhang Li, Xin Dong, Wei Wang
28 Sep 2019 · 60 · 258 · 0

MobileNetV2: Inverted Residuals and Linear Bottlenecks
Mark Sandler, Andrew G. Howard, Menglong Zhu, A. Zhmoginov, Liang-Chieh Chen
13 Jan 2018 · 169 · 19,204 · 0

Semantic Understanding of Scenes through the ADE20K Dataset [SSeg]
Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, Antonio Torralba
18 Aug 2016 · 370 · 1,865 · 0