SimA: Simple Softmax-free Attention for Vision Transformers
Soroush Abbasi Koohpayegani, Hamed Pirsiavash
arXiv:2206.08898, 17 June 2022
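For context on the method the papers below cite: SimA replaces the softmax in self-attention with a simple ℓ1 normalization of the query and key matrices (per channel, along the token axis), which keeps the attention product bounded and, with the softmax gone, makes the matrix product associative, so it can be computed in time linear in the number of tokens. The following is a minimal PyTorch sketch reconstructed from the paper's abstract-level description; the function name, shapes, and epsilon guard are illustrative assumptions, not the authors' reference implementation.

```python
import torch

def sima_attention(q, k, v, eps=1e-6):
    """Softmax-free attention in the spirit of SimA (arXiv:2206.08898).

    q, k, v: tensors of shape (batch, heads, tokens, dim).
    Sketch only; consult the paper/official code for the exact form.
    """
    # Replace softmax with l1 normalization of Q and K along the
    # token axis, per channel (assumed axis; eps avoids divide-by-zero).
    q = q / (q.abs().sum(dim=-2, keepdim=True) + eps)
    k = k / (k.abs().sum(dim=-2, keepdim=True) + eps)

    n, d = q.shape[-2], q.shape[-1]
    # Without softmax the product is associative, so choose the cheaper
    # multiplication order: Q @ (K^T V) is O(n*d^2), (Q K^T) @ V is O(n^2*d).
    if n > d:
        return q @ (k.transpose(-2, -1) @ v)
    return (q @ k.transpose(-2, -1)) @ v
```

Because the n-by-n attention matrix never has to be materialized when the Q @ (KᵀV) order is used, the cost grows linearly with token count; this is the property that many of the efficiency-focused citing papers below build on or compare against.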
Papers citing "SimA: Simple Softmax-free Attention for Vision Transformers" (25 of 25 shown)
- A Low-Power Streaming Speech Enhancement Accelerator For Edge Devices. Ci-Hao Wu, Tian-Sheuan Chang. 27 Mar 2025.
- Rethinking Attention: Polynomial Alternatives to Softmax in Transformers. Hemanth Saratchandran, Jianqiao Zheng, Yiping Ji, Wenbo Zhang, Simon Lucey. 24 Oct 2024.
- Attention is a smoothed cubic spline. Zehua Lai, Lek-Heng Lim, Yucong Liu. 19 Aug 2024.
- PADRe: A Unifying Polynomial Attention Drop-in Replacement for Efficient Vision Transformer. Pierre-David Létourneau, Manish Kumar Singh, Hsin-Pai Cheng, Shizhong Han, Yunxiao Shi, Dalton Jones, M. H. Langston, Hong Cai, Fatih Porikli. 16 Jul 2024.
- Fake News Detection and Manipulation Reasoning via Large Vision-Language Models. Ruihan Jin, Ruibo Fu, Zhengqi Wen, Shuai Zhang, Yukun Liu, Jianhua Tao. 02 Jul 2024.
- ToSA: Token Selective Attention for Efficient Vision Transformers. Manish Kumar Singh, R. Yasarla, Hong Cai, Mingu Lee, Fatih Porikli. 13 Jun 2024.
- Compute-Efficient Medical Image Classification with Softmax-Free Transformers and Sequence Normalization. Firas Khader, Omar S. M. El Nahhas, T. Han, Gustav Müller-Franzes, S. Nebelung, Jakob Nikolas Kather, Daniel Truhn. [MedIm] 03 Jun 2024.
- SpiralMLP: A Lightweight Vision MLP Architecture. Haojie Mu, Burhan Ul Tayyab, Nicholas Chua. 31 Mar 2024.
- RAG-Driver: Generalisable Driving Explanations with Retrieval-Augmented In-Context Learning in Multi-Modal Large Language Model. Jianhao Yuan, Shuyang Sun, Daniel Omeiza, Bo Zhao, Paul Newman, Lars Kunze, Matthew Gadd. [LRM] 16 Feb 2024.
- Beyond the Limits: A Survey of Techniques to Extend the Context Length in Large Language Models. Xindi Wang, Mahsa Salmani, Parsa Omidi, Xiangyu Ren, Mehdi Rezagholizadeh, A. Eshaghi. [LRM] 03 Feb 2024.
- SlowFormer: Universal Adversarial Patch for Attack on Compute and Energy Efficiency of Inference Efficient Vision Transformers. K. Navaneet, Soroush Abbasi Koohpayegani, Essam Sleiman, Hamed Pirsiavash. [AAML, ViT] 04 Oct 2023.
- Replacing softmax with ReLU in Vision Transformers. Mitchell Wortsman, Jaehoon Lee, Justin Gilmer, Simon Kornblith. [ViT] 15 Sep 2023.
- A survey on efficient vision transformers: algorithms, techniques, and performance benchmarking. Lorenzo Papa, Paolo Russo, Irene Amerini, Luping Zhou. 05 Sep 2023.
- When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions. Weiming Zhuang, Chen Chen, Lingjuan Lyu, Chong Chen, Yaochu Jin. [AIFin, AI4CE] 27 Jun 2023.
- A Convolutional Vision Transformer for Semantic Segmentation of Side-Scan Sonar Data. Hayat Rajani, N. Gracias, Rafael García. [ViT] 24 Feb 2023.
- Accumulated Trivial Attention Matters in Vision Transformers on Small Datasets. Xiangyu Chen, Qinghao Hu, Kaidong Li, Cuncong Zhong, Guanghui Wang. [ViT] 22 Oct 2022.
- DARTFormer: Finding The Best Type Of Attention. Jason Brown, Yiren Zhao, Ilia Shumailov, Robert D. Mullins. 02 Oct 2022.
- On The Computational Complexity of Self-Attention. Feyza Duman Keles, Pruthuvi Maheshakya Wijewardena, C. Hegde. 11 Sep 2022.
- Masked Autoencoders Are Scalable Vision Learners. Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick. [ViT, TPM] 11 Nov 2021.
- Token Pooling in Vision Transformers. D. Marin, Jen-Hao Rick Chang, Anurag Ranjan, Anish K. Prabhu, Mohammad Rastegari, Oncel Tuzel. [ViT] 08 Oct 2021.
- MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer. Sachin Mehta, Mohammad Rastegari. [ViT] 05 Oct 2021.
- MLP-Mixer: An all-MLP Architecture for Vision. Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy. 04 May 2021.
- Emerging Properties in Self-Supervised Vision Transformers. Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin. 29 Apr 2021.
- Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions. Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao. [ViT] 24 Feb 2021.
- MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam. [3DH] 17 Apr 2017.