arXiv:2002.02000
Aligning the Pretraining and Finetuning Objectives of Language Models
Nuo Wang Pierse, Jing Lu
5 February 2020
Community: AI4CE
Links: ArXiv · PDF · HTML
Papers citing "Aligning the Pretraining and Finetuning Objectives of Language Models" (7 of 7 papers shown)
Unsupervised Cross-lingual Representation Learning at Scale
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov
05 Nov 2019 · 108 · 6,454 · 0

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut
SSL · AIMat
26 Sep 2019 · 204 · 6,420 · 0

TinyBERT: Distilling BERT for Natural Language Understanding
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, F. Wang, Qun Liu
VLM
23 Sep 2019 · 30 · 1,838 · 0

Unified Language Model Pre-training for Natural Language Understanding and Generation
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, M. Zhou, H. Hon
ELM · AI4CE
08 May 2019 · 92 · 1,553 · 0

Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
Zihang Dai, Zhilin Yang, Yiming Yang, J. Carbonell, Quoc V. Le, Ruslan Salakhutdinov
VLM
09 Jan 2019 · 75 · 3,707 · 0

Marian: Fast Neural Machine Translation in C++
Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu T. Hoang, Kenneth Heafield, ..., Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, André F. T. Martins, Alexandra Birch
01 Apr 2018 · 52 · 711 · 0

SQuAD: 100,000+ Questions for Machine Comprehension of Text
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang
RALM
16 Jun 2016 · 77 · 8,067 · 0