ResearchTrend.AI

FLatten Transformer: Vision Transformer using Focused Linear Attention
arXiv:2308.00442 · 1 August 2023
Dongchen Han, Xuran Pan, Yizeng Han, Shiji Song, Gao Huang
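For context on the topic shared by many of the citing papers below: linear attention replaces the O(N²) softmax attention softmax(QKᵀ)V with a kernel feature map φ applied to queries and keys, so the products can be reordered as φ(Q)(φ(K)ᵀV) and computed in O(N) with respect to sequence length. The sketch below illustrates that generic reordering in PyTorch, using ReLU as a placeholder feature map; it is a minimal illustration of linear attention in general, not the focused mapping proposed in the FLatten Transformer paper.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Generic kernelized linear attention, O(N) in sequence length.

    q, k, v: tensors of shape (batch, heads, seq_len, dim).
    ReLU is used here as a placeholder feature map; the FLatten paper
    proposes a different ("focused") mapping, which is not reproduced.
    """
    q, k = F.relu(q), F.relu(k)                                        # feature map phi
    kv = torch.einsum("bhnd,bhne->bhde", k, v)                         # sum_n phi(k_n) v_n^T
    z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + eps)   # row-wise normalizer
    return torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)

# Toy usage: 196 tokens (14x14 patches), 4 heads, head dim 32.
x = torch.randn(2, 4, 196, 32)
out = linear_attention(x, x, x)
print(out.shape)  # torch.Size([2, 4, 196, 32])
```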

Papers citing "FLatten Transformer: Vision Transformer using Focused Linear Attention"

33 / 83 papers shown
Sub-Adjacent Transformer: Improving Time Series Anomaly Detection with Reconstruction Error from Sub-Adjacent Neighborhoods
Wenzhen Yue, Xianghua Ying, Ruohao Guo, DongDong Chen, Ji Shi, Bowei Xing, Yuqing Zhu, Taiyan Chen
AI4TS · 39 · 3 · 0 · 27 Apr 2024

Data-independent Module-aware Pruning for Hierarchical Vision Transformers
Yang He, Qiufeng Wang
ViT · 55 · 3 · 0 · 21 Apr 2024

SmartMem: Layout Transformation Elimination and Adaptation for Efficient DNN Execution on Mobile
Wei Niu, Md. Musfiqur Rahman Sanim, Zhihao Shu, Jiexiong Guan, Xipeng Shen, Miao Yin, Gagan Agrawal, Bin Ren
45 · 6 · 0 · 21 Apr 2024

Referring Flexible Image Restoration
Runwei Guan, Rongsheng Hu, Zhuhao Zhou, Tianlang Xue, Ka Lok Man, Jeremy S. Smith, Eng Gee Lim, Weiping Ding, Yutao Yue
39 · 0 · 0 · 16 Apr 2024

State Space Model for New-Generation Network Alternative to Transformers: A Survey
Tianlin Li, Shiao Wang, Yuhe Ding, Yuehang Li, Wentao Wu, ..., Bowei Jiang, Chenglong Li, Yaowei Wang, Yonghong Tian, Jin Tang
Mamba · 40 · 50 · 0 · 15 Apr 2024

A Novel State Space Model with Local Enhancement and State Sharing for Image Fusion
Zihan Cao, Xiao Wu, Liang-Jian Deng, Yu Zhong
Mamba · 47 · 9 · 0 · 14 Apr 2024

An FPGA-Based Reconfigurable Accelerator for Convolution-Transformer Hybrid EfficientViT
Haikuo Shao, Huihong Shi, Wendong Mao, Zhongfeng Wang
39 · 2 · 0 · 29 Mar 2024

Enhancing Efficiency in Vision Transformer Networks: Design Techniques and Insights
Moein Heidari, Reza Azad, Sina Ghorbani Kolahi, René Arimond, Leon Niggemeier, ..., Afshin Bozorgpour, Ehsan Khodapanah Aghdam, A. Kazerouni, Ilker Hacihaliloglu, Dorit Merhof
62 · 7 · 0 · 28 Mar 2024

Heracles: A Hybrid SSM-Transformer Model for High-Resolution Image and Time-Series Analysis
Badri N. Patro, Suhas Ranganath, Vinay P. Namboodiri, Vijay Srinivas Agneeswaran
51 · 2 · 0 · 26 Mar 2024

CPA-Enhancer: Chain-of-Thought Prompted Adaptive Enhancer for Object Detection under Unknown Degradations
Yuwei Zhang, Yan Wu, Yanming Liu, Xinyue Peng
61 · 5 · 0 · 17 Mar 2024

ViT-CoMer: Vision Transformer with Convolutional Multi-scale Feature Interaction for Dense Predictions
Chunlong Xia, Xinliang Wang, Feng Lv, Xin Hao, Yifeng Shi
ViT · 39 · 48 · 0 · 12 Mar 2024

HyenaPixel: Global Image Context with Convolutions
Julian Spravil, Sebastian Houben, Sven Behnke
33 · 1 · 0 · 29 Feb 2024

Interactive Multi-Head Self-Attention with Linear Complexity
Hankyul Kang, Ming-Hsuan Yang, Jongbin Ryu
31 · 1 · 0 · 27 Feb 2024

P-Mamba: Marrying Perona Malik Diffusion with Mamba for Efficient Pediatric Echocardiographic Left Ventricular Segmentation
Zi Ye, Tianxiang Chen, Fangyijie Wang, Hanwei Zhang, Lijun Zhang
Mamba · 26 · 20 · 0 · 13 Feb 2024

A Survey on Transformer Compression
Yehui Tang, Yunhe Wang, Jianyuan Guo, Zhijun Tu, Kai Han, Hailin Hu, Dacheng Tao
46 · 30 · 0 · 05 Feb 2024

A Manifold Representation of the Key in Vision Transformers
Li Meng, Morten Goodwin, Anis Yazidi, P. Engelstad
37 · 0 · 0 · 01 Feb 2024

ScatterFormer: Efficient Voxel Transformer with Scattered Linear Attention
Chenhang He, Ruihuang Li, Guowen Zhang, Lei Zhang
40 · 5 · 0 · 01 Jan 2024

Agent Attention: On the Integration of Softmax and Linear Attention
Dongchen Han, Tianzhu Ye, Yizeng Han, Zhuofan Xia, Siyuan Pan, Pengfei Wan, Shiji Song, Gao Huang
52 · 76 · 0 · 14 Dec 2023

SAM-6D: Segment Anything Model Meets Zero-Shot 6D Object Pose Estimation
Jiehong Lin, Lihua Liu, Dekun Lu, Kui Jia
VLM · 39 · 61 · 0 · 27 Nov 2023

SBCFormer: Lightweight Network Capable of Full-size ImageNet Classification at 1 FPS on Single Board Computers
Xiangyong Lu, Masanori Suganuma, Takayuki Okatani
52 · 10 · 0 · 07 Nov 2023

GQKVA: Efficient Pre-training of Transformers by Grouping Queries, Keys, and Values
Farnoosh Javadi, Walid Ahmed, Habib Hajimolahoseini, Foozhan Ataiefard, Mohammad Hassanpour, Saina Asani, Austin Wen, Omar Mohamed Awad, Kangling Liu, Yang Liu
VLM · 47 · 7 · 0 · 06 Nov 2023

Transport-Hub-Aware Spatial-Temporal Adaptive Graph Transformer for Traffic Flow Prediction
Xiao Xu, Lei Zhang, Bailong Liu, Zhi Liang, Xuefei Zhang
AI4TS · 37 · 1 · 0 · 12 Oct 2023

Computation-efficient Deep Learning for Computer Vision: A Survey
Yulin Wang, Yizeng Han, Chaofei Wang, Shiji Song, Qi Tian, Gao Huang
VLM · 47 · 20 · 0 · 27 Aug 2023

SpectFormer: Frequency and Attention is what you need in a Vision Transformer
Badri N. Patro, Vinay P. Namboodiri, Vijay Srinivas Agneeswaran
ViT · 40 · 48 · 0 · 13 Apr 2023

Deep Incubation: Training Large Models by Divide-and-Conquering
Zanlin Ni, Yulin Wang, Jiangwei Yu, Haojun Jiang, Yu Cao, Gao Huang
VLM · 35 · 11 · 0 · 08 Dec 2022

Learning to Weight Samples for Dynamic Early-exiting Networks
Yizeng Han, Yifan Pu, Zihang Lai, Chaofei Wang, S. Song, Junfen Cao, Wenhui Huang, Chao Deng, Gao Huang
78 · 54 · 0 · 17 Sep 2022

Hydra Attention: Efficient Attention with Many Heads
Daniel Bolya, Cheng-Yang Fu, Xiaoliang Dai, Peizhao Zhang, Judy Hoffman
101 · 78 · 0 · 15 Sep 2022

MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer
Sachin Mehta, Mohammad Rastegari
ViT · 218 · 1,220 · 0 · 05 Oct 2021

Mobile-Former: Bridging MobileNet and Transformer
Yinpeng Chen, Xiyang Dai, Dongdong Chen, Mengchen Liu, Xiaoyi Dong, Lu Yuan, Zicheng Liu
ViT · 183 · 477 · 0 · 12 Aug 2021

CMT: Convolutional Neural Networks Meet Vision Transformers
Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Chunjing Xu, Yunhe Wang, Chang Xu
ViT · 361 · 500 · 0 · 13 Jul 2021

Combiner: Full Attention Transformer with Sparse Computation Cost
Hongyu Ren, H. Dai, Zihang Dai, Mengjiao Yang, J. Leskovec, Dale Schuurmans, Bo Dai
87 · 78 · 0 · 12 Jul 2021

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT · 316 · 3,644 · 0 · 24 Feb 2021

Semantic Understanding of Scenes through the ADE20K Dataset
Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, Antonio Torralba
SSeg · 253 · 1,834 · 0 · 18 Aug 2016