Progressive Multi-Granularity Training for Non-Autoregressive Translation

10 June 2021
Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, Zhaopeng Tu
AI4TS

Papers citing "Progressive Multi-Granularity Training for Non-Autoregressive Translation" (13 papers)

What Have We Achieved on Non-autoregressive Translation?
Yafu Li, Huajian Zhang, Jianhao Yan, Yongjing Yin, Yue Zhang
21 May 2024

Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning
Tianle Xia, Liang Ding, Guojia Wan, Yibing Zhan, Bo Du, Dacheng Tao
LRM
02 May 2024

Sentence-Level or Token-Level? A Comprehensive Study on Knowledge Distillation
Jingxuan Wei, Linzhuang Sun, Yichong Leng, Xu Tan, Bihui Yu, Ruifeng Guo
23 Apr 2024

Revisiting Non-Autoregressive Translation at Scale
Zhihao Wang, Longyue Wang, Jinsong Su, Junfeng Yao, Zhaopeng Tu
25 May 2023

Revisiting Token Dropping Strategy in Efficient BERT Pretraining
Qihuang Zhong, Liang Ding, Juhua Liu, Xuebo Liu, Min Zhang, Bo Du, Dacheng Tao
VLM
24 May 2023

On Efficient Training of Large-Scale Deep Learning Models: A Literature Review
Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, Dacheng Tao
VLM
07 Apr 2023

Multi-Granularity Optimization for Non-Autoregressive Translation
Yafu Li, Leyang Cui, Yongjing Yin, Yue Zhang
20 Oct 2022

E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation
Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao
30 May 2022

Where Does the Performance Improvement Come From? -- A Reproducibility Concern about Image-Text Retrieval
Jun Rao, Fei-Yue Wang, Liang Ding, Shuhan Qi, Yibing Zhan, Weifeng Liu, Dacheng Tao
OOD
08 Mar 2022

Improving Neural Machine Translation by Denoising Training
Liang Ding, Keqin Peng, Dacheng Tao
VLM, AI4CE
19 Jan 2022

Improving Neural Machine Translation by Bidirectional Training
Liang Ding, Di Wu, Dacheng Tao
16 Sep 2021

The USYD-JD Speech Translation System for IWSLT 2021
Liang Ding, Di Wu, Dacheng Tao
24 Jul 2021

Understanding and Improving Lexical Choice in Non-Autoregressive Translation
Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, Zhaopeng Tu
29 Dec 2020