arXiv: 2006.15595
Rethinking Positional Encoding in Language Pre-training
Guolin Ke, Di He, Tie-Yan Liu
28 June 2020
Papers citing "Rethinking Positional Encoding in Language Pre-training" (14 of 64 shown)
Do Transformers Really Perform Bad for Graph Representation?
Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu
GNN · 33 · 433 · 0 · 09 Jun 2021

A Survey of Transformers
Tianyang Lin, Yuxin Wang, Xiangyang Liu, Xipeng Qiu
ViT · 53 · 1,088 · 0 · 08 Jun 2021

Relative Positional Encoding for Transformers with Linear Complexity
Antoine Liutkus, Ondřej Cífka, Shih-Lun Wu, Umut Simsekli, Yi-Hsuan Yang, Gaël Richard
33 · 44 · 0 · 18 May 2021

How could Neural Networks understand Programs?
Dinglan Peng, Shuxin Zheng, Yatao Li, Guolin Ke, Di He, Tie-Yan Liu
NAI · 18 · 61 · 0 · 10 May 2021

MuseMorphose: Full-Song and Fine-Grained Piano Music Style Transfer with One Transformer VAE
Shih-Lun Wu, Yi-Hsuan Yang
ViT · 25 · 53 · 0 · 10 May 2021

RoFormer: Enhanced Transformer with Rotary Position Embedding
Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, Yunfeng Liu
38 · 2,190 · 0 · 20 Apr 2021

SAPE: Spatially-Adaptive Progressive Encoding for Neural Optimization
Amir Hertz, Or Perel, Raja Giryes, O. Sorkine-Hornung, Daniel Cohen-Or
28 · 67 · 0 · 19 Apr 2021

Investigating the Limitations of Transformers with Simple Arithmetic Tasks
Rodrigo Nogueira, Zhiying Jiang, Jimmy Lin
LRM · 24 · 123 · 0 · 25 Feb 2021

Revisiting Language Encoding in Learning Multilingual Representations
Shengjie Luo, Kaiyuan Gao, Shuxin Zheng, Guolin Ke, Di He, Liwei Wang, Tie-Yan Liu
34 · 2 · 0 · 16 Feb 2021

Compound Word Transformer: Learning to Compose Full-Song Music over Dynamic Directed Hypergraphs
Wen-Yi Hsiao, Jen-Yu Liu, Yin-Cheng Yeh, Yi-Hsuan Yang
113 · 180 · 0 · 07 Jan 2021

Shortformer: Better Language Modeling using Shorter Inputs
Ofir Press, Noah A. Smith, M. Lewis
230 · 89 · 0 · 31 Dec 2020

Positional Artefacts Propagate Through Masked Language Model Embeddings
Ziyang Luo, Artur Kulmizev, Xiaoxi Mao
29 · 41 · 0 · 09 Nov 2020

ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding
Dongling Xiao, Yukun Li, Han Zhang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
27 · 38 · 0 · 23 Oct 2020

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM · 299 · 6,984 · 0 · 20 Apr 2018