METRO: Efficient Denoising Pretraining of Large Scale Autoencoding Language Models with Model Generated Signals

13 April 2022
Payal Bajaj, Chenyan Xiong, Guolin Ke, Xiaodong Liu, Di He, Saurabh Tiwary, Tie-Yan Liu, Paul N. Bennett, Xia Song, Jianfeng Gao

Papers citing "METRO: Efficient Denoising Pretraining of Large Scale Autoencoding Language Models with Model Generated Signals"

27 / 27 papers shown

Self-Rationalization in the Wild: A Large Scale Out-of-Distribution Evaluation on NLI-related tasks
Jing Yang, Max Glockner, Anderson de Rezende Rocha, Iryna Gurevych · LRM · 07 Feb 2025

MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models
Zichun Yu, Spandan Das, Chenyan Xiong · 10 Jun 2024

ReALM: Reference Resolution As Language Modeling
Joel Ruben Antony Moniz, Soundarya Krishnan, Melis Ozyildirim, Prathamesh Saraf, Halim Cagri Ates, Yuan-kang Zhang, Hong-ye Yu, Nidhi Rajshree · 29 Mar 2024

Vygotsky Distance: Measure for Benchmark Task Similarity
Maxim K. Surkov, Ivan P. Yamshchikov · 22 Feb 2024

SpacTor-T5: Pre-training T5 Models with Span Corruption and Replaced Token Detection
Ke Ye, Heinrich Jiang, Afshin Rostamizadeh, Ayan Chakrabarti, Giulia DeSalvo, Jean-François Kagy, Lazaros Karydas, Gui Citovsky, Sanjiv Kumar · 24 Jan 2024

Labels Need Prompts Too: Mask Matching for Natural Language Understanding Tasks
Bo Li, Wei Ye, Quan-ding Wang, Wen Zhao, Shikun Zhang · VLM · 14 Dec 2023

Lil-Bevo: Explorations of Strategies for Training Language Models in More Humanlike Ways
Venkata S Govindarajan, Juan Diego Rodriguez, Kaj Bostrom, Kyle Mahowald · 26 Oct 2023

Fast-ELECTRA for Efficient Pre-training
Chengyu Dong, Liyuan Liu, Hao Cheng, Jingbo Shang, Jianfeng Gao, Xiaodong Liu · 11 Oct 2023

Sparse Backpropagation for MoE Training
Liyuan Liu, Jianfeng Gao, Weizhu Chen · MoE · 01 Oct 2023

Foundation Metrics for Evaluating Effectiveness of Healthcare Conversations Powered by Generative AI
Mahyar Abbasian, Elahe Khatibi, Iman Azimi, David Oniani, Zahra Shakeri Hossein Abad, ..., Bryant Lin, Olivier Gevaert, Li-Jia Li, Ramesh C. Jain, Amir M. Rahmani · LM&MA, ELM, AI4MH · 21 Sep 2023

Sparks of Large Audio Models: A Survey and Outlook
S. Latif, Moazzam Shoukat, Fahad Shamshad, Muhammad Usama, Yi Ren, ..., Wenwu Wang, Xulong Zhang, Roberto Togneri, Erik Cambria, Björn W. Schuller · LM&MA, AuLLM · 24 Aug 2023

AraMUS: Pushing the Limits of Data and Model Scale for Arabic Natural Language Processing
Asaad Alghamdi, Xinyu Duan, Wei Jiang, Zhenhai Wang, Yimeng Wu, ..., Yifei Zheng, Mehdi Rezagholizadeh, Baoxing Huai, Peilun Cheng, Abbas Ghaddar · VLM · 11 Jun 2023

Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers
Linyuan Gong, Chenyan Xiong, Xiaodong Liu, Payal Bajaj, Yiqing Xie, Alvin Cheung, Jianfeng Gao, Xia Song · VLM, AI4CE · 21 May 2023

Trained on 100 million words and still in shape: BERT meets British National Corpus
David Samuel, Andrey Kutuzov, Lilja Øvrelid, Erik Velldal · 17 Mar 2023

Cramming: Training a Language Model on a Single GPU in One Day
Jonas Geiping, Tom Goldstein · MoE · 28 Dec 2022

Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE
Qihuang Zhong, Liang Ding, Yibing Zhan, Yu Qiao, Yonggang Wen, ..., Yixin Chen, Xinbo Gao, Chun Miao, Xiaoou Tang, Dacheng Tao · VLM, ELM · 04 Dec 2022

What is Wrong with Language Models that Can Not Tell a Story?
Ivan P. Yamshchikov, Alexey Tikhonov · 09 Nov 2022

Beyond English-Centric Bitexts for Better Multilingual Language Representation Learning
Barun Patra, Saksham Singhal, Shaohan Huang, Zewen Chi, Li Dong, Furu Wei, Vishrav Chaudhary, Xia Song · 26 Oct 2022

Z-Code++: A Pre-trained Language Model Optimized for Abstractive Summarization
Pengcheng He, Baolin Peng, Liyang Lu, Song Wang, Jie Mei, ..., Chenguang Zhu, Wayne Xiong, Michael Zeng, Jianfeng Gao, Xuedong Huang · 21 Aug 2022

Improving Short Text Classification With Augmented Data Using GPT-3
Salvador Balkus, Donghui Yan · 23 May 2022

Text and Code Embeddings by Contrastive Pre-Training
Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, ..., Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, Lilian Weng · SSL, AI4TS · 24 Jan 2022

Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval
Luyu Gao, Jamie Callan · RALM · 12 Aug 2021

COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul N. Bennett, Jiawei Han, Xia Song · 16 Feb 2021

Posterior Differential Regularization with f-divergence for Improving Model Robustness
Hao Cheng, Xiaodong Liu, L. Pereira, Yaoliang Yu, Jianfeng Gao · 23 Oct 2020

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei · 23 Jan 2020

Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro · MoE · 17 Sep 2019

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman · ELM · 20 Apr 2018