Multiplicative Position-aware Transformer Models for Language Understanding
Zhiheng Huang, Davis Liang, Peng Xu, Bing Xiang
arXiv:2109.12788, 27 September 2021
Papers citing "Multiplicative Position-aware Transformer Models for Language Understanding" (7 of 7 papers shown)

| Title | Authors | Tags | Likes | Citations | Comments | Date |
|---|---|---|---|---|---|---|
| Rethinking Positional Encoding in Language Pre-training | Guolin Ke, Di He, Tie-Yan Liu | | 40 | 292 | 0 | 28 Jun 2020 |
| MPNet: Masked and Permuted Pre-training for Language Understanding | Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu | | 94 | 1,105 | 0 | 20 Apr 2020 |
| ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut | SSL, AIMat | 272 | 6,420 | 0 | 26 Sep 2019 |
| Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context | Zihang Dai, Zhilin Yang, Yiming Yang, J. Carbonell, Quoc V. Le, Ruslan Salakhutdinov | VLM | 142 | 3,714 | 0 | 09 Jan 2019 |
| SQuAD: 100,000+ Questions for Machine Comprehension of Text | Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang | RALM | 166 | 8,067 | 0 | 16 Jun 2016 |
| A Decomposable Attention Model for Natural Language Inference | Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit | | 308 | 1,369 | 0 | 06 Jun 2016 |
| Attention-Based Models for Speech Recognition | J. Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, Yoshua Bengio | | 105 | 2,605 | 0 | 24 Jun 2015 |