ResearchTrend.AI

arXiv:2002.12804 — Cited By
UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training

28 February 2020
Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Songhao Piao, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon

Papers citing "UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training"

Showing 20 of 220 citing papers.
An Enhanced Knowledge Injection Model for Commonsense Generation
Zhihao Fan, Yeyun Gong, Zhongyu Wei, Siyuan Wang, Ya-Chieh Huang, Jian Jiao, Xuanjing Huang, Nan Duan, Ruofei Zhang
01 Dec 2020

Neural Semi-supervised Learning for Text Classification Under Large-Scale Pretraining
Zijun Sun, Chun Fan, Xiaofei Sun, Yuxian Meng, Fei Wu, Jiwei Li
17 Nov 2020

MixKD: Towards Efficient Distillation of Large-scale Language Models
Kevin J Liang, Weituo Hao, Dinghan Shen, Yufan Zhou, Weizhu Chen, Changyou Chen, Lawrence Carin
01 Nov 2020

NeuroLogic Decoding: (Un)supervised Neural Text Generation with Predicate Logic Constraints
Ximing Lu, Peter West, Rowan Zellers, Ronan Le Bras, Chandra Bhagavatula, Yejin Choi
24 Oct 2020

Open-Domain Dialogue Generation Based on Pre-trained Language Models
Yan Zeng, J. Nie
24 Oct 2020

ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding
Dongling Xiao, Yukun Li, Han Zhang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
23 Oct 2020

Summarize, Outline, and Elaborate: Long-Text Generation via Hierarchical Supervision from Extractive Summaries
Xiaofei Sun, Zijun Sun, Yuxian Meng, Jiwei Li, Chun Fan
14 Oct 2020

Improving Self-supervised Pre-training via a Fully-Explored Masked Language Model
Ming Zheng, Dinghan Shen, Yelong Shen, Weizhu Chen, Lin Xiao
12 Oct 2020

Which *BERT? A Survey Organizing Contextualized Encoders
Patrick Xia, Shijie Wu, Benjamin Van Durme
02 Oct 2020

A Simple but Tough-to-Beat Data Augmentation Approach for Natural Language Understanding and Generation
Dinghan Shen, Ming Zheng, Yelong Shen, Yanru Qu, Weizhu Chen
29 Sep 2020

KG-BART: Knowledge Graph-Augmented BART for Generative Commonsense Reasoning
Ye Liu, Yao Wan, Lifang He, Hao Peng, Philip S. Yu
26 Sep 2020

RecoBERT: A Catalog Language Model for Text-Based Recommendations
Itzik Malkiel, Oren Barkan, Avi Caciularu, Noam Razin, Ori Katz, Noam Koenigstein
25 Sep 2020

X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers
Jaemin Cho, Jiasen Lu, Dustin Schwenk, Hannaneh Hajishirzi, Aniruddha Kembhavi
23 Sep 2020

Efficient Transformer-based Large Scale Language Representations using Hardware-friendly Block Structured Pruning
Bingbing Li, Zhenglun Kong, Tianyun Zhang, Ji Li, ZeLin Li, Hang Liu, Caiwen Ding
17 Sep 2020

Adversarial Training for Large Neural Language Models
Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, Jianfeng Gao
20 Apr 2020

Pre-trained Models for Natural Language Processing: A Survey
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
18 Mar 2020

MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers
Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, Ming Zhou
25 Feb 2020

Text Summarization with Pretrained Encoders
Yang Liu, Mirella Lapata
22 Aug 2019

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018

Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
26 Sep 2016
