WeChat Neural Machine Translation Systems for WMT21 (arXiv: 2108.02401)
5 August 2021
Xianfeng Zeng, Yanjun Liu, Ernan Li, Qiu Ran, Fandong Meng, Peng Li, Jinan Xu, Jie Zhou
Papers citing "WeChat Neural Machine Translation Systems for WMT21" (22 papers)
Scheduled Sampling Based on Decoding Steps for Neural Machine Translation
Yijin Liu, Fandong Meng, Jinan Xu, Jie Zhou. 30 Aug 2021.

Confidence-Aware Scheduled Sampling for Neural Machine Translation
Yijin Liu, Fandong Meng, Jinan Xu, Jie Zhou. 22 Jul 2021.

Selective Knowledge Distillation for Neural Machine Translation
Fusheng Wang, Jianhao Yan, Fandong Meng, Jie Zhou. 27 May 2021.

Multi-Unit Transformers for Neural Machine Translation
Jianhao Yan, Fandong Meng, Jie Zhou. 21 Oct 2020.

WeChat Neural Machine Translation Systems for WMT20
Fandong Meng, Jianhao Yan, Yijin Liu, Yuan Gao, Xia Zeng, ..., Peng Li, Ming Chen, Jie Zhou, Sifan Liu, Hao Zhou. 01 Oct 2020.

Very Deep Transformers for Neural Machine Translation
Xiaodong Liu, Kevin Duh, Liyuan Liu, Jianfeng Gao. 18 Aug 2020.

On Exposure Bias, Hallucination and Domain Shift in Neural Machine Translation
Chaojun Wang, Rico Sennrich. 07 May 2020.

On the Inference Calibration of Neural Machine Translation
Shuo Wang, Zhaopeng Tu, Shuming Shi, Yang Liu. 03 May 2020.

Talking-Heads Attention
Noam M. Shazeer, Zhenzhong Lan, Youlong Cheng, Nan Ding, L. Hou. 05 Mar 2020.

On Layer Normalization in the Transformer Architecture
Ruibin Xiong, Yunchang Yang, Di He, Kai Zheng, Shuxin Zheng, Chen Xing, Huishuai Zhang, Yanyan Lan, Liwei Wang, Tie-Yan Liu. 12 Feb 2020.

Scheduled Sampling for Transformers
Tsvetomila Mihaylova, André F. T. Martins. 18 Jun 2019.

Tagged Back-Translation
Isaac Caswell, Ciprian Chelba, David Grangier. 15 Jun 2019.

Parallel Scheduled Sampling
Daniel Duckworth, Arvind Neelakantan, Ben Goodrich, Lukasz Kaiser, Samy Bengio. 11 Jun 2019.

Bridging the Gap between Training and Inference for Neural Machine Translation
Wen Zhang, Yang Feng, Fandong Meng, Di You, Qun Liu. 06 Jun 2019.

DTMT: A Novel Deep Transition Architecture for Neural Machine Translation
Fandong Meng, Jinchao Zhang. 19 Dec 2018.

Accelerating Neural Transformer via an Average Attention Network
Biao Zhang, Deyi Xiong, Jinsong Su. 02 May 2018.

Texygen: A Benchmarking Platform for Text Generation Models
Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, Yong Yu. 06 Feb 2018.

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin. 12 Jun 2017.

Sequence-Level Knowledge Distillation
Yoon Kim, Alexander M. Rush. 25 Jun 2016.

Sequence Level Training with Recurrent Neural Networks
Marc'Aurelio Ranzato, S. Chopra, Michael Auli, Wojciech Zaremba. 20 Nov 2015.

Improving Neural Machine Translation Models with Monolingual Data
Rico Sennrich, Barry Haddow, Alexandra Birch. 20 Nov 2015.

Neural Machine Translation of Rare Words with Subword Units
Rico Sennrich, Barry Haddow, Alexandra Birch. 31 Aug 2015.