ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

LambdaNetworks: Modeling Long-Range Interactions Without Attention

17 February 2021
Irwan Bello
Papers citing "LambdaNetworks: Modeling Long-Range Interactions Without Attention"

50 of 117 citing papers shown
• GaraMoSt: Parallel Multi-Granularity Motion and Structural Modeling for Efficient Multi-Frame Interpolation in DSA Images
  Ziyang Xu, Huangxuan Zhao, Wei Liu, Xinyu Wang (18 Dec 2024)

• LevAttention: Time, Space, and Streaming Efficient Algorithm for Heavy Attentions
  R. Kannan, Chiranjib Bhattacharyya, Praneeth Kacham, David P. Woodruff (07 Oct 2024)

• MoSt-DSA: Modeling Motion and Structural Interactions for Direct Multi-Frame Interpolation in DSA Images
  Ziyang Xu, Huangxuan Zhao, Ziwei Cui, Wenyu Liu, Chuansheng Zheng, Xinggang Wang (09 Jul 2024)

• Runtime Freezing: Dynamic Class Loss for Multi-Organ 3D Segmentation [SSeg, AI4CE]
  James Willoughby, Irina Voiculescu (12 Jun 2024)

• Convolutional Neural Networks and Vision Transformers for Fashion MNIST Classification: A Literature Review
  Sonia Bbouzidi, Ghazala Hcini, Imen Jdey, Fadoua Drira (05 Jun 2024)

• DiJiang: Efficient Large Language Models through Compact Kernelization [VLM]
  Hanting Chen, Zhicheng Liu, Xutao Wang, Yuchuan Tian, Yunhe Wang (29 Mar 2024)

• PlainMamba: Improving Non-Hierarchical Mamba in Visual Recognition [Mamba]
  Chenhongyi Yang, Zehui Chen, Miguel Espinosa, Linus Ericsson, Zhenyu Wang, Jiaming Liu, Elliot J. Crowley (26 Mar 2024)

• Smartphone region-wise image indoor localization using deep learning for indoor tourist attraction [HAI]
  G. Higa, Rodrigo Stuqui Monzani, Jorge Fernando da Silva Cecatto, Maria Fernanda Balestieri Mariano de Souza, V. A. Weber, H. Pistori, E. Matsubara (12 Mar 2024)

• Exploring the Synergies of Hybrid CNNs and ViTs Architectures for Computer Vision: A survey [ViT]
  Haruna Yunusa, Shiyin Qin, Abdulrahman Hamman Adama Chukkol, Abdulganiyu Abdu Yusuf, Isah Bello, A. Lawan (05 Feb 2024)

• BA-SAM: Scalable Bias-Mode Attention Mask for Segment Anything Model [VLM]
  Yiran Song, Qianyu Zhou, Hefei Ling, Deng-Ping Fan, Xuequan Lu, Lizhuang Ma (04 Jan 2024)

• One Self-Configurable Model to Solve Many Abstract Visual Reasoning Problems
  Mikolaj Malkiñski, Jacek Mańdziuk (15 Dec 2023)

• ChAda-ViT: Channel Adaptive Attention for Joint Representation Learning of Heterogeneous Microscopy Images [MedIm]
  Nicolas Bourriez, Ihab Bendidi, Ethan O. Cohen, Gabriel Watkinson, Maxime Sanchez, Guillaume Bollot, Auguste Genovesio (26 Nov 2023)

• LATIS: Lambda Abstraction-based Thermal Image Super-resolution
  Gargi Panda, Soumitra Kundu, Saumik Bhattacharya, Aurobinda Routray (18 Nov 2023)

• Blockwise Parallel Transformer for Large Context Models
  Hao Liu, Pieter Abbeel (30 May 2023)

• Collaborative Blind Image Deblurring
  Thomas Eboli, Jean-Michel Morel, Gabriele Facciolo (25 May 2023)

• Understanding Gaussian Attention Bias of Vision Transformers Using Effective Receptive Fields [ViT]
  Bum Jun Kim, Hyeyeon Choi, Hyeonah Jang, Sang Woo Kim (08 May 2023)

• VOLTA: an Environment-Aware Contrastive Cell Representation Learning for Histopathology
  Ramin Nakhli, Allen W. Zhang, Katherine Rich, Amirali Darbandsari, Elahe Shenasa, ..., K. Milne, J. McAlpine, B. Nelson, C. Gilks, A. Bashashati (08 Mar 2023)

• Towards more precise automatic analysis: a comprehensive survey of deep learning-based multi-organ segmentation
  Xiaoyu Liu, Linhao Qu, Ziyue Xie, Jiayue Zhao, Yonghong Shi, Zhijian Song (01 Mar 2023)

• Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers
  K. Choromanski, Shanda Li, Valerii Likhosherstov, Kumar Avinava Dubey, Shengjie Luo, Di He, Yiming Yang, Tamás Sarlós, Thomas Weingarten, Adrian Weller (03 Feb 2023)

• Convolution-enhanced Evolving Attention Networks [ViT]
  Yujing Wang, Yaming Yang, Zhuowan Li, Jiangang Bai, Mingliang Zhang, Xiangtai Li, Jiahao Yu, Ce Zhang, Gao Huang, Yu Tong (16 Dec 2022)

• Lightweight Structure-Aware Attention for Visual Understanding
  Heeseung Kwon, F. M. Castro, M. Marín-Jiménez, N. Guil, Alahari Karteek (29 Nov 2022)

• ViT-LSLA: Vision Transformer with Light Self-Limited-Attention [ViT]
  Zhenzhe Hechen, Wei Huang, Yixin Zhao (31 Oct 2022)

• Similarity of Neural Architectures using Adversarial Attack Transferability [AAML]
  Jaehui Hwang, Dongyoon Han, Byeongho Heo, Song Park, Sanghyuk Chun, Jong-Seok Lee (20 Oct 2022)

• Vision Transformers provably learn spatial structure [ViT, MLT]
  Samy Jelassi, Michael E. Sander, Yuan-Fang Li (13 Oct 2022)

• Compute-Efficient Deep Learning: Algorithmic Trends and Opportunities
  Brian Bartoldson, B. Kailkhura, Davis W. Blalock (13 Oct 2022)

• MOAT: Alternating Mobile Convolution and Attention Brings Strong Vision Models [ViT, MoE]
  Chenglin Yang, Siyuan Qiao, Qihang Yu, Xiaoding Yuan, Yukun Zhu, Alan Yuille, Hartwig Adam, Liang-Chieh Chen (04 Oct 2022)

• SplitMixer: Fat Trimmed From MLP-like Models
  Ali Borji, Sikun Lin (21 Jul 2022)

• Transformer based Models for Unsupervised Anomaly Segmentation in Brain MR Images [ViT, MedIm]
  Ahmed Ghorbel, Ahmed Aldahdooh, Shadi Albarqouni, Neuherberg (05 Jul 2022)

• Softmax-free Linear Transformers [ViT]
  Jiachen Lu, Junge Zhang, Xiatian Zhu, Jianfeng Feng, Tao Xiang, Li Zhang (05 Jul 2022)

• Rethinking Query-Key Pairwise Interactions in Vision Transformers
  Cheng-rong Li, Yangxin Liu (01 Jul 2022)

• EATFormer: Improving Vision Transformer Inspired by Evolutionary Algorithm [ViT]
  Jiangning Zhang, Xiangtai Li, Yabiao Wang, Chengjie Wang, Yibo Yang, Yong Liu, Dacheng Tao (19 Jun 2022)

• AGConv: Adaptive Graph Convolution on 3D Point Clouds [3DPC]
  Mingqiang Wei, Zeyong Wei, Hao Zhou, Fei-Jiang Hu, Huajian Si, ..., Jingbo Qiu, Xu Yan, Yan Guo, Jun Wang, J. Qin (09 Jun 2022)

• Separable Self-attention for Mobile Vision Transformers [ViT, MQ]
  Sachin Mehta, Mohammad Rastegari (06 Jun 2022)

• FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness [VLM]
  Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, Christopher Ré (27 May 2022)

• HCFormer: Unified Image Segmentation with Hierarchical Clustering
  Teppei Suzuki (20 May 2022)

• Investigating Neural Architectures by Synthetic Dataset Design
  Adrien Courtois, Jean-Michel Morel, Pablo Arias (23 Apr 2022)

• DaViT: Dual Attention Vision Transformers [ViT]
  Mingyu Ding, Bin Xiao, Noel Codella, Ping Luo, Jingdong Wang, Lu Yuan (07 Apr 2022)

• Context-aware Visual Tracking with Joint Meta-updating
  Qiuhong Shen, Xin Li, Fanyang Meng, Yongsheng Liang (04 Apr 2022)

• HyperMixer: An MLP-based Low Cost Alternative to Transformers
  Florian Mai, Arnaud Pannatier, Fabio Fehr, Haolin Chen, François Marelli, F. Fleuret, James Henderson (07 Mar 2022)

• Visual Attention Network [ViT, VLM]
  Meng-Hao Guo, Chengrou Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shiyong Hu (20 Feb 2022)

• (2.5+1)D Spatio-Temporal Scene Graphs for Video Question Answering
  A. Cherian, Chiori Hori, Tim K. Marks, Jonathan Le Roux (18 Feb 2022)

• How Do Vision Transformers Work? [ViT]
  Namuk Park, Songkuk Kim (14 Feb 2022)

• Patches Are All You Need? [ViT]
  Asher Trockman, J. Zico Kolter (24 Jan 2022)

• 3D Medical Point Transformer: Introducing Convolution to Attention Networks for Medical Point Cloud Analysis [ViT, MedIm]
  Jianhui Yu, Chaoyi Zhang, Heng Wang, Dingxin Zhang, Yang Song, Tiange Xiang, Dongnan Liu, Weidong (Tom) Cai (09 Dec 2021)

• Fast Point Transformer [3DPC, ViT]
  Chunghyun Park, Yoonwoo Jeong, Minsu Cho, Jaesik Park (09 Dec 2021)

• SWAT: Spatial Structure Within and Among Tokens
  Kumara Kahatapitiya, Michael S. Ryoo (26 Nov 2021)

• PointMixer: MLP-Mixer for Point Cloud Understanding [3DPC]
  Jaesung Choe, Chunghyun Park, François Rameau, Jaesik Park, In So Kweon (22 Nov 2021)

• Relational Self-Attention: What's Missing in Attention for Video Understanding [ViT]
  Manjin Kim, Heeseung Kwon, Chunyu Wang, Suha Kwak, Minsu Cho (02 Nov 2021)

• SOFT: Softmax-free Transformer with Linear Complexity
  Jiachen Lu, Jinghan Yao, Junge Zhang, Martin Danelljan, Hang Xu, Weiguo Gao, Chunjing Xu, Thomas B. Schon, Li Zhang (22 Oct 2021)

• Revisiting 3D ResNets for Video Recognition
  Xianzhi Du, Yeqing Li, Huayu Chen, Rui Qian, Jing Li, Irwan Bello (03 Sep 2021)