arXiv:2301.00068v3 (latest)

On the Inconsistencies of Conditionals Learned by Masked Language Models

30 December 2022
Tom Young
Yunan Chen
Abstract

Learning to predict masked tokens in a sequence has been shown to be a powerful pretraining objective for large language models. After training, such masked language models (MLMs) can provide distributions over tokens conditioned on bidirectional context. In this paper, we show that, contrary to popular assumptions, such bidirectional conditionals often exhibit considerable inconsistencies: taken together, they cannot be derived from any single coherent joint distribution. We empirically quantify these inconsistencies in the simple setting of bigram comparison for two common styles of masked language model, T5-style and BERT-style. For example, we show that T5 models often contradict their own preferences between two similar bigrams. We find that such inconsistencies are ubiquitous across MLMs of diverse sizes and configurations, from RoBERTa-base to GLM-130B. As an initial attempt to address this issue at inference time, we propose Ensemble of Conditionals, a self-ensemble algorithm that jointly considers many of the inconsistent conditionals directly produced by the MLM and synthesizes from them a single distribution used as the model's final output. This ensembling improves open-source state-of-the-art results on LAMBADA.
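The incompatibility claim can be made concrete. If both families of bigram conditionals, p1(x1 | x2) and p2(x2 | x1), came from a single joint distribution, then for any words a, a' in the first slot and b, b' in the second slot the cross-ratio identity p1(a|b)·p2(b|a')·p1(a'|b')·p2(b'|a) = p1(a'|b)·p2(b|a)·p1(a|b')·p2(b'|a') would have to hold, so its log-ratio serves as a simple inconsistency score. The sketch below probes this with a BERT-style model. It is not the paper's code: the model (bert-base-uncased), the carrier sentence, the word quadruple, and the cross-ratio score itself are illustrative assumptions, not the authors' exact metric.

```python
# Minimal sketch (not the authors' code) of probing bigram-conditional
# inconsistency in a BERT-style MLM. Assumes: HuggingFace transformers,
# bert-base-uncased, and words that are single tokens in its vocabulary.
import math

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def cond_prob(tokens, mask_idx, target):
    """P(target at mask_idx | all other tokens), from one forward pass."""
    ids = torch.tensor([tok.convert_tokens_to_ids(tokens)])
    with torch.no_grad():
        logits = model(ids).logits[0, mask_idx]
    return torch.softmax(logits, dim=-1)[tok.convert_tokens_to_ids(target)].item()

def bigram_conditionals(w1, w2):
    """Return (p1(w1|w2), p2(w2|w1)) for a two-slot span inside the
    illustrative carrier sentence "the <slot1> <slot2> .". """
    seq1 = [tok.cls_token, "the", tok.mask_token, w2, ".", tok.sep_token]
    seq2 = [tok.cls_token, "the", w1, tok.mask_token, ".", tok.sep_token]
    return cond_prob(seq1, 2, w1), cond_prob(seq2, 3, w2)

def inconsistency(a, a2, b, b2):
    """Absolute log cross-ratio; zero whenever the eight conditionals are
    compatible with a single joint distribution over the two slots."""
    p1_ab,   p2_ba   = bigram_conditionals(a,  b)
    p1_a2b,  p2_ba2  = bigram_conditionals(a2, b)
    p1_ab2,  p2_b2a  = bigram_conditionals(a,  b2)
    p1_a2b2, p2_b2a2 = bigram_conditionals(a2, b2)
    lhs = math.log(p1_ab) + math.log(p2_ba2) + math.log(p1_a2b2) + math.log(p2_b2a)
    rhs = math.log(p1_a2b) + math.log(p2_ba) + math.log(p1_ab2) + math.log(p2_b2a2)
    return abs(lhs - rhs)

# Illustrative quadruple: a perfectly consistent model would score 0.0.
print(inconsistency("big", "large", "dog", "house"))
```

Scores substantially above zero across many quadruples would indicate the kind of inconsistency the paper quantifies; the paper's own measurements additionally cover T5-style models and much larger MLMs such as GLM-130B.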
