ResearchTrend.AI
SimA: Simple Softmax-free Attention for Vision Transformers
Soroush Abbasi Koohpayegani, Hamed Pirsiavash
arXiv:2206.08898, 17 June 2022
Papers citing "SimA: Simple Softmax-free Attention for Vision Transformers"

25 / 25 papers shown
A Low-Power Streaming Speech Enhancement Accelerator For Edge Devices
Ci-Hao Wu, Tian-Sheuan Chang (27 Mar 2025)
Rethinking Attention: Polynomial Alternatives to Softmax in Transformers
Hemanth Saratchandran, Jianqiao Zheng, Yiping Ji, Wenbo Zhang, Simon Lucey (24 Oct 2024)
Attention is a smoothed cubic spline
Zehua Lai, Lek-Heng Lim, Yucong Liu (19 Aug 2024)
PADRe: A Unifying Polynomial Attention Drop-in Replacement for Efficient Vision Transformer
Pierre-David Létourneau, Manish Kumar Singh, Hsin-Pai Cheng, Shizhong Han, Yunxiao Shi, Dalton Jones, M. H. Langston, Hong Cai, Fatih Porikli (16 Jul 2024)
Fake News Detection and Manipulation Reasoning via Large Vision-Language Models
Ruihan Jin, Ruibo Fu, Zhengqi Wen, Shuai Zhang, Yukun Liu, Jianhua Tao (02 Jul 2024)
ToSA: Token Selective Attention for Efficient Vision Transformers
Manish Kumar Singh, R. Yasarla, Hong Cai, Mingu Lee, Fatih Porikli (13 Jun 2024)
Compute-Efficient Medical Image Classification with Softmax-Free Transformers and Sequence Normalization
Firas Khader, Omar S. M. El Nahhas, T. Han, Gustav Muller-Franzes, S. Nebelung, Jakob Nikolas Kather, Daniel Truhn (03 Jun 2024)
SpiralMLP: A Lightweight Vision MLP Architecture
Haojie Mu, Burhan Ul Tayyab, Nicholas Chua (31 Mar 2024)
RAG-Driver: Generalisable Driving Explanations with Retrieval-Augmented In-Context Learning in Multi-Modal Large Language Model
Jianhao Yuan, Shuyang Sun, Daniel Omeiza, Bo-Lu Zhao, Paul Newman, Lars Kunze, Matthew Gadd (16 Feb 2024)
Beyond the Limits: A Survey of Techniques to Extend the Context Length in Large Language Models
Xindi Wang, Mahsa Salmani, Parsa Omidi, Xiangyu Ren, Mehdi Rezagholizadeh, A. Eshaghi (03 Feb 2024)
SlowFormer: Universal Adversarial Patch for Attack on Compute and Energy Efficiency of Inference Efficient Vision Transformers
K. Navaneet, Soroush Abbasi Koohpayegani, Essam Sleiman, Hamed Pirsiavash (04 Oct 2023)
Replacing softmax with ReLU in Vision Transformers
Mitchell Wortsman, Jaehoon Lee, Justin Gilmer, Simon Kornblith (15 Sep 2023)
A survey on efficient vision transformers: algorithms, techniques, and performance benchmarking
Lorenzo Papa, Paolo Russo, Irene Amerini, Luping Zhou (05 Sep 2023)
When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions
Weiming Zhuang, Chen Chen, Lingjuan Lyu, Cheng Chen, Yaochu Jin (27 Jun 2023)
A Convolutional Vision Transformer for Semantic Segmentation of Side-Scan Sonar Data
Hayat Rajani, N. Gracias, Rafael García (24 Feb 2023)
Accumulated Trivial Attention Matters in Vision Transformers on Small Datasets
Xiangyu Chen, Qinghao Hu, Kaidong Li, Cuncong Zhong, Guanghui Wang (22 Oct 2022)
DARTFormer: Finding The Best Type Of Attention
Jason Brown, Yiren Zhao, Ilia Shumailov, Robert D. Mullins (02 Oct 2022)
On The Computational Complexity of Self-Attention
Feyza Duman Keles, Pruthuvi Maheshakya Wijewardena, C. Hegde (11 Sep 2022)
Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick (11 Nov 2021)
Token Pooling in Vision Transformers
D. Marin, Jen-Hao Rick Chang, Anurag Ranjan, Anish K. Prabhu, Mohammad Rastegari, Oncel Tuzel (08 Oct 2021)
MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer
Sachin Mehta, Mohammad Rastegari (05 Oct 2021)
MLP-Mixer: An all-MLP Architecture for Vision
Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy (04 May 2021)
Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin (29 Apr 2021)
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao (24 Feb 2021)
MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam (17 Apr 2017)