Confidence Regularized Masked Language Modeling using Text Length

8 April 2025
Seunghyun Ji
Soowon Lee
Abstract

Masked language modeling is a widely used method for learning language representations, where the model predicts a randomly masked word in each input. However, this approach typically considers only a single correct answer during training, ignoring the variety of plausible alternatives that humans might choose. This issue becomes more pronounced when the input text is short, as the possible word distribution tends to have higher entropy, potentially causing the model to become overconfident in its predictions. To mitigate this, we propose a novel confidence regularizer that adaptively adjusts the regularization strength based on the input length. Experiments on the GLUE and SQuAD benchmarks show that our method improves both accuracy and expected calibration error.
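The abstract does not give the regularizer's exact form, so the sketch below is only one plausible reading of the idea: a negative-entropy confidence penalty (in the spirit of Pereyra et al.'s confidence penalty) added to the masked-LM cross-entropy, with its coefficient scaled inversely with input length so that shorter inputs are regularized more strongly. The function name, the inverse-length schedule, and the base_beta hyperparameter are all assumptions for illustration, not the authors' method.

import torch
import torch.nn.functional as F

def length_adaptive_mlm_loss(logits, labels, input_lengths,
                             base_beta=0.1, ignore_index=-100):
    """Masked LM loss with a confidence penalty that grows as inputs shrink.

    Illustrative sketch only: the inverse-length scaling (base_beta / length)
    and the negative-entropy penalty are assumptions, not the paper's
    published formulation.

    logits: (batch, seq_len, vocab_size) predictions
    labels: (batch, seq_len), ignore_index at unmasked positions
    input_lengths: (batch,) count of real (non-padding) tokens per input
    """
    vocab_size = logits.size(-1)

    # Standard per-token MLM cross-entropy; ignored positions contribute 0.
    ce = F.cross_entropy(
        logits.view(-1, vocab_size), labels.view(-1),
        ignore_index=ignore_index, reduction="none",
    ).view(labels.shape)

    # Negative entropy of the predicted distribution: sum_p p * log p.
    # Adding it to the loss (loss = ce - beta * H) rewards higher entropy,
    # i.e. discourages overconfident predictions.
    log_probs = F.log_softmax(logits, dim=-1)
    neg_entropy = (log_probs.exp() * log_probs).sum(dim=-1)  # (batch, seq_len)

    # Shorter inputs get a larger coefficient (assumed schedule).
    beta = base_beta / input_lengths.clamp(min=1).float()    # (batch,)
    penalty = beta.unsqueeze(1) * neg_entropy                # broadcast over seq

    mask = labels != ignore_index
    return (ce + penalty)[mask].mean()

Under this reading, a batch of short sentences (small input_lengths) receives a larger beta and hence a stronger push toward flatter masked-word distributions, which matches the abstract's observation that short inputs admit higher-entropy word distributions.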

@article{ji2025_2504.06037,
  title={Confidence Regularized Masked Language Modeling using Text Length},
  author={Seunghyun Ji and Soowon Lee},
  journal={arXiv preprint arXiv:2504.06037},
  year={2025}
}