Explicit Sparse Transformer: Concentrated Attention Through Explicit Selection
arXiv:1912.11637 · 25 December 2019
Guangxiang Zhao, Junyang Lin, Zhiyuan Zhang, Xuancheng Ren, Qi Su, Xu Sun
Papers citing "Explicit Sparse Transformer: Concentrated Attention Through Explicit Selection"
22 / 22 papers shown

1. LapSum -- One Method to Differentiate Them All: Ranking, Sorting and Top-k Selection
   Łukasz Struski, Michał B. Bednarczyk, Igor T. Podolak, Jacek Tabor · BDL · 08 Mar 2025

2. Temporal Logic Specification-Conditioned Decision Transformer for Offline Safe Reinforcement Learning
   Zijian Guo, Weichao Zhou, Wenchao Li · OffRL · 28 Jan 2025

3. MALT: Multi-scale Action Learning Transformer for Online Action Detection
   Zhipeng Yang, Ruoyu Wang, Yang Tan, Liping Xie · OffRL · 31 May 2024

4. A Transformer variant for multi-step forecasting of water level and hydrometeorological sensitivity analysis based on explainable artificial intelligence technology
   Mingyu Liu, Nana Bao, Xingting Yan, Chenyang Li, Kai Peng · 22 May 2024

5. Segmentation Guided Sparse Transformer for Under-Display Camera Image Restoration
   Jingyun Xue, Tao Wang, Jun Wang, Kaihao Zhang · ViT · 09 Mar 2024

6. LoMA: Lossless Compressed Memory Attention
   Yumeng Wang, Zhenyang Xiao · 16 Jan 2024

7. ViTALiTy: Unifying Low-rank and Sparse Approximation for Vision Transformer Acceleration with a Linear Taylor Attention
   Jyotikrishna Dass, Shang Wu, Huihong Shi, Chaojian Li, Zhifan Ye, Zhongfeng Wang, Yingyan Lin · 09 Nov 2022

8. Bird-Eye Transformers for Text Generation Models
   Lei Sha, Yuhang Song, Yordan Yordanov, Tommaso Salvatori, Thomas Lukasiewicz · 08 Oct 2022

9. Learning in Audio-visual Context: A Review, Analysis, and New Perspective
   Yake Wei, Di Hu, Yapeng Tian, Xuelong Li · 20 Aug 2022

10. Accelerating Attention through Gradient-Based Learned Runtime Pruning
    Zheng Li, Soroush Ghodrati, Amir Yazdanbakhsh, H. Esmaeilzadeh, Mingu Kang · 07 Apr 2022

11. Dynamic N:M Fine-grained Structured Sparse Attention Mechanism
    Zhaodong Chen, Yuying Quan, Zheng Qu, L. Liu, Yufei Ding, Yuan Xie · 28 Feb 2022

12. QuadTree Attention for Vision Transformers
    Shitao Tang, Jiahui Zhang, Siyu Zhu, Ping Tan · ViT · 08 Jan 2022

13. Explore Long-Range Context feature for Speaker Verification
    Zhuo Li · 14 Dec 2021

14. Skeletal Graph Self-Attention: Embedding a Skeleton Inductive Bias into Sign Language Production
    Ben Saunders, Necati Cihan Camgöz, Richard Bowden · SLR · 06 Dec 2021

15. MedGPT: Medical Concept Prediction from Clinical Narratives
    Z. Kraljevic, Anthony Shek, D. Bean, R. Bendayan, J. Teo, Richard J. B. Dobson · LM&MA, AI4TS, MedIm · 07 Jul 2021

16. KVT: k-NN Attention for Boosting Vision Transformers
    Pichao Wang, Xue Wang, F. Wang, Ming Lin, Shuning Chang, Hao Li, R. L. Jin · ViT · 28 May 2021

17. Multi-document Summarization via Deep Learning Techniques: A Survey
    Congbo Ma, W. Zhang, Mingyu Guo, Hu Wang, Quan Z. Sheng · 10 Nov 2020

18. Query-Key Normalization for Transformers
    Alex Henry, Prudhvi Raj Dachapally, S. Pawar, Yuxuan Chen · 08 Oct 2020

19. simNet: Stepwise Image-Topic Merging Network for Generating Detailed and Comprehensive Image Captions
    Fenglin Liu, Xuancheng Ren, Yuanxin Liu, Houfeng Wang, Xu Sun · 27 Aug 2018

20. Neural Baby Talk
    Jiasen Lu, Jianwei Yang, Dhruv Batra, Devi Parikh · VLM · 27 Mar 2018

21. Classical Structured Prediction Losses for Sequence to Sequence Learning
    Sergey Edunov, Myle Ott, Michael Auli, David Grangier, Marc'Aurelio Ranzato · AIMat · 14 Nov 2017

22. Effective Approaches to Attention-based Neural Machine Translation
    Thang Luong, Hieu H. Pham, Christopher D. Manning · 17 Aug 2015