
arXiv:2106.11483
A Comprehensive Comparison of Pre-training Language Models

22 June 2021
Tonglei Guo
Abstract

Recently, the development of pre-trained language models has brought natural language processing (NLP) tasks to a new state of the art. In this paper we explore the efficiency of various pre-trained language models. We pre-train a list of transformer-based models with the same amount of text and the same number of training steps. The experimental results show that the largest improvement over the original BERT comes from adding an RNN layer to capture more contextual information for short-text understanding, while structurally similar BERT variants show no remarkable improvement on short-text understanding.
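The "BERT plus RNN layer" variant mentioned in the abstract can be sketched roughly as follows: a transformer encoder whose token representations are then re-read by a recurrent layer. This is a minimal illustrative sketch, not the paper's actual model — the class name, layer sizes, and the choice of a bidirectional GRU are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class TransformerWithRNN(nn.Module):
    """Hypothetical sketch: a small transformer encoder (standing in
    for BERT) followed by a bidirectional GRU over its token outputs.
    All dimensions here are illustrative, not from the paper."""

    def __init__(self, vocab_size=1000, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # The added RNN layer: re-reads the sequence to inject extra
        # sequential context on top of the self-attention output.
        self.rnn = nn.GRU(
            d_model, d_model // 2, bidirectional=True, batch_first=True
        )

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))  # (batch, seq, d_model)
        h, _ = self.rnn(h)                       # (batch, seq, d_model)
        return h

model = TransformerWithRNN()
out = model(torch.randint(0, 1000, (2, 16)))
print(out.shape)  # (2, 16, 64)
```

The bidirectional GRU's two directions each produce `d_model // 2` features, so its concatenated output keeps the same width as the transformer, letting the RNN layer drop into an existing BERT-style pipeline unchanged.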
