On the Complementarity between Pre-Training and Random-Initialization for Resource-Rich Machine Translation
arXiv: 2209.03316, 7 September 2022
Changtong Zan, Liang Ding, Li Shen, Yu Cao, Weifeng Liu, Dacheng Tao
Papers citing "On the Complementarity between Pre-Training and Random-Initialization for Resource-Rich Machine Translation" (16 papers):

1. Self-Evolution Knowledge Distillation for LLM-based Machine Translation. Yuncheng Song, Liang Ding, Changtong Zan, Shujian Huang. 19 Dec 2024.
2. Why pre-training is beneficial for downstream classification tasks? Xin Jiang, Xu Cheng, Zechao Li. 11 Oct 2024.
3. The Impact of Syntactic and Semantic Proximity on Machine Translation with Back-Translation. Nicolas Guerin, Shane Steinert-Threlkeld, Emmanuel Chemla. 26 Mar 2024.
4. Retrieval-based Knowledge Transfer: An Effective Approach for Extreme Large Language Model Compression. Jiduan Liu, Jiahao Liu, Qifan Wang, Jingang Wang, Xunliang Cai, Dongyan Zhao, R. Wang, Rui Yan. 24 Oct 2023.
5. Diversifying the Mixture-of-Experts Representation for Language Models with Orthogonal Optimizer [MoE]. Boan Liu, Liang Ding, Li Shen, Keqin Peng, Yu Cao, Dazhao Cheng, Dacheng Tao. 15 Oct 2023.
6. Unlikelihood Tuning on Negative Samples Amazingly Improves Zero-Shot Translation [VLM]. Junjie Yang, Liang Ding, Li Shen, Matthieu Labeau, Yibing Zhan, Weifeng Liu, Dacheng Tao. 28 Sep 2023.
7. MerA: Merging Pretrained Adapters For Few-Shot Learning [MoMe]. Shwai He, Run-Ze Fan, Liang Ding, Li Shen, Dinesh Manocha, Dacheng Tao. 30 Aug 2023.
8. Self-Evolution Learning for Discriminative Language Model Pretraining. Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Dacheng Tao. 24 May 2023.
9. Prompt-Learning for Cross-Lingual Relation Extraction [LRM]. Chiaming Hsu, Changtong Zan, Liang Ding, Longyue Wang, Xiaoting Wang, Weifeng Liu, Fu Lin, Wenbin Hu. 20 Apr 2023.
10. OmniForce: On Human-Centered, Large Model Empowered and Cloud-Edge Collaborative AutoML System. Chao Xue, Wei Liu, Shunxing Xie, Zhenfang Wang, Jiaxing Li, ..., Shi-Yong Chen, Yibing Zhan, Jing Zhang, Chaoyue Wang, Dacheng Tao. 01 Mar 2023.
11. Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE [VLM]. Qihuang Zhong, Liang Ding, Keqin Peng, Juhua Liu, Bo Du, Li Shen, Yibing Zhan, Dacheng Tao. 18 Feb 2023.
12. Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models [AAML]. Qihuang Zhong, Liang Ding, Li Shen, Peng Mi, Juhua Liu, Bo Du, Dacheng Tao. 11 Oct 2022.
13. SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters [MoE]. Shwai He, Liang Ding, Daize Dong, Miao Zhang, Dacheng Tao. 09 Oct 2022.
14. Vega-MT: The JD Explore Academy Translation System for WMT22 [VLM]. Changtong Zan, Keqin Peng, Liang Ding, Baopu Qiu, Boan Liu, ..., Zhenghang Zhang, Chuang Liu, Weifeng Liu, Yibing Zhan, Dacheng Tao. 20 Sep 2022.
15. Bridging Cross-Lingual Gaps During Leveraging the Multilingual Sequence-to-Sequence Pretraining for Text Generation and Understanding [LRM]. Changtong Zan, Liang Ding, Li Shen, Yu Cao, Weifeng Liu, Dacheng Tao. 16 Apr 2022.
16. A Contrastive Cross-Channel Data Augmentation Framework for Aspect-based Sentiment Analysis. Bing Wang, Liang Ding, Qihuang Zhong, Ximing Li, Dacheng Tao. 16 Apr 2022.