On the Transferability of Pre-trained Language Models: A Study from Artificial Datasets [SyDa]
Cheng-Han Chiang, Hung-yi Lee
arXiv:2109.03537, 8 September 2021

Papers citing "On the Transferability of Pre-trained Language Models: A Study from Artificial Datasets" (10 papers shown)

Pre-training with Synthetic Data Helps Offline Reinforcement Learning [OffRL]
Zecheng Wang, Che Wang, Zixuan Dong, Keith Ross (01 Oct 2023)

Injecting structural hints: Using language models to study inductive biases in language learning
Isabel Papadimitriou, Dan Jurafsky (25 Apr 2023)

SPEC: Summary Preference Decomposition for Low-Resource Abstractive Summarization
Yi-Syuan Chen, Yun-Zhu Song, Hong-Han Shuai (24 Mar 2023)

Synthetic Pre-Training Tasks for Neural Machine Translation
Zexue He, Graeme W. Blackwood, Yikang Shen, Julian McAuley, Rogerio Feris (19 Dec 2022)

On the Effect of Pre-training for Transformer in Different Modality on Offline Reinforcement Learning [OffRL]
S. Takagi (17 Nov 2022)

Intermediate Fine-Tuning Using Imperfect Synthetic Speech for Improving Electrolaryngeal Speech Recognition
Lester Phillip Violeta, D. Ma, Wen-Chin Huang, T. Toda (02 Nov 2022)

Robustness of Demonstration-based Learning Under Limited Data Scenario
Hongxin Zhang, Yanzhe Zhang, Ruiyi Zhang, Diyi Yang (19 Oct 2022)

Insights into Pre-training via Simpler Synthetic Tasks [AIMat]
Yuhuai Wu, Felix Li, Percy Liang (21 Jun 2022)

Pretraining with Artificial Language: Studying Transferable Knowledge in Language Models
Ryokan Ri, Yoshimasa Tsuruoka (19 Mar 2022)

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding [ELM]
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman (20 Apr 2018)