Implicit Kernel Attention (arXiv:2006.06147)
11 June 2020
Kyungwoo Song, Yohan Jung, Dongjun Kim, Il-Chul Moon

Papers citing "Implicit Kernel Attention" (7 of 7 papers shown)
A physics-informed transformer neural operator for learning generalized solutions of initial boundary value problems
Sumanth Kumar Boya, Deepak Subramani (AI4CE)
12 Dec 2024
Uniform Memory Retrieval with Larger Capacity for Modern Hopfield Models
Dennis Wu, Jerry Yao-Chieh Hu, Teng-Yun Hsiao, Han Liu
04 Apr 2024
Momentum Transformer: Closing the Performance Gap Between Self-attention and Its Linearization
T. Nguyen, Richard G. Baraniuk, Robert M. Kirby, Stanley J. Osher, Bao Wang
01 Aug 2022
Is Attention Better Than Matrix Decomposition?
Zhengyang Geng, Meng-Hao Guo, Hongxu Chen, Xia Li, Ke Wei, Zhouchen Lin
09 Sep 2021
Choose a Transformer: Fourier or Galerkin
Shuhao Cao
31 May 2021
Classical Structured Prediction Losses for Sequence to Sequence Learning
Sergey Edunov, Myle Ott, Michael Auli, David Grangier, Marc'Aurelio Ranzato (AIMat)
14 Nov 2017
Convolutional Neural Networks for Sentence Classification
Yoon Kim (AILaw, VLM)
25 Aug 2014