arXiv:2306.01610
Centered Self-Attention Layers
2 June 2023
Ameen Ali, Tomer Galanti, Lior Wolf
Papers citing "Centered Self-Attention Layers" (9 of 9 shown)
- The Hidden Attention of Mamba Models | Ameen Ali, Itamar Zimerman, Lior Wolf | Mamba | 39 / 58 / 0 | 03 Mar 2024
- Setting the Record Straight on Transformer Oversmoothing | G. Dovonon, M. Bronstein, Matt J. Kusner | 28 / 5 / 0 | 09 Jan 2024
- Graph Convolutions Enrich the Self-Attention in Transformers! | Jeongwhan Choi, Hyowon Wi, Jayoung Kim, Yehjin Shin, Kookjin Lee, Nathaniel Trask, Noseong Park | 32 / 4 / 0 | 07 Dec 2023
- Simplifying Transformer Blocks | Bobby He, Thomas Hofmann | 27 / 30 / 0 | 03 Nov 2023
- ResiDual: Transformer with Dual Residual Connections | Shufang Xie, Huishuai Zhang, Junliang Guo, Xu Tan, Jiang Bian, Hany Awadalla, Arul Menezes, Tao Qin, Rui Yan | 51 / 18 / 0 | 28 Apr 2023
- Weakly Supervised Semantic Segmentation by Pixel-to-Prototype Contrast | Ye Du, Zehua Fu, Qingjie Liu, Yunhong Wang | 93 / 129 / 0 | 14 Oct 2021
- Reducing Information Bottleneck for Weakly Supervised Semantic Segmentation | Jungbeom Lee, Jooyoung Choi, J. Mok, Sungroh Yoon | SSeg | 220 / 134 / 0 | 13 Oct 2021
- ImageNet-21K Pretraining for the Masses | T. Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor | SSeg, VLM, CLIP | 181 / 689 / 0 | 22 Apr 2021
- Multi-scale Attributed Node Embedding | Benedek Rozemberczki, Carl Allen, Rik Sarkar | GNN | 148 / 837 / 0 | 28 Sep 2019