ResearchTrend.AI
MPNet: Masked and Permuted Pre-training for Language Understanding
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu
20 April 2020 · arXiv:2004.09297
Papers citing "MPNet: Masked and Permuted Pre-training for Language Understanding"

14 of 164 citing papers shown.
Explainable Fact-checking through Question Answering
Jing Yang, D. Vega-Oliveros, Taís Seibt, Anderson de Rezende Rocha
11 Oct 2021 · HILM
Automatic Generation of Word Problems for Academic Education via Natural Language Processing (NLP)
Stanley Uros Keller
27 Sep 2021
Multiplicative Position-aware Transformer Models for Language Understanding
Zhiheng Huang, Davis Liang, Peng Xu, Bing Xiang
27 Sep 2021
Small-Bench NLP: Benchmark for small single GPU trained models in Natural Language Processing
K. Kanakarajan, Bhuvana Kundumani, Malaikannan Sankarasubbu
22 Sep 2021 · ALM, MoE
Exploring the Promises of Transformer-Based LMs for the Representation of Normative Claims in the Legal Domain
Reto Gubelmann, Peter Hongler, Siegfried Handschuh
25 Aug 2021 · AILaw
Improved Text Classification via Contrastive Adversarial Training
Lin Pan, Chung-Wei Hang, Avirup Sil, Saloni Potdar
21 Jul 2021 · AAML
SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption
Dara Bahri, Heinrich Jiang, Yi Tay, Donald Metzler
29 Jun 2021 · SSL
Pre-Trained Models: Past, Present and Future
Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu
14 Jun 2021 · AIFin, MQ, AI4MH
MusicBERT: Symbolic Music Understanding with Large-Scale Pre-Training
Mingliang Zeng, Xu Tan, Rui Wang, Zeqian Ju, Tao Qin, Tie-Yan Liu
10 Jun 2021
Diverse Image Inpainting with Bidirectional and Autoregressive Transformers
Yingchen Yu, Fangneng Zhan, Rongliang Wu, Jianxiong Pan, Kaiwen Cui, Shijian Lu, Feiying Ma, Xuansong Xie, Chunyan Miao
26 Apr 2021 · ViT
ERNIE-Gram: Pre-Training with Explicitly N-Gram Masked Language Modeling for Natural Language Understanding
Dongling Xiao, Yukun Li, Han Zhang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang
23 Oct 2020
Improve Transformer Models with Better Relative Position Embeddings
Zhiheng Huang, Davis Liang, Peng Xu, Bing Xiang
28 Sep 2020 · ViT
Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing
Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le
05 Jun 2020
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
20 Apr 2018 · ELM