Improving Low-Resource Morphological Inflection via Self-Supervised Objectives

5 June 2025
Adam Wiemerslage
Katharina von der Wense
Abstract

Self-supervised objectives have driven major advances in NLP by leveraging large-scale unlabeled data, but such resources are scarce for many of the world's languages. Surprisingly, they remain largely unexplored for character-level tasks, where smaller amounts of data have the potential to be beneficial. We investigate the effectiveness of self-supervised auxiliary tasks for morphological inflection -- a character-level task highly relevant for language documentation -- in extremely low-resource settings, training encoder-decoder transformers for 19 languages and 13 auxiliary objectives. Autoencoding yields the best performance when unlabeled data is very limited, while character masked language modeling (CMLM) becomes more effective as data availability increases. Although objectives with stronger inductive biases influence model predictions in intuitive ways, they rarely outperform standard CMLM. However, sampling masks based on known morpheme boundaries consistently improves performance, highlighting a promising direction for low-resource morphological modeling.
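To make the two masking regimes the abstract compares concrete, here is a minimal Python sketch of a CMLM-style corruption step with an optional morpheme-boundary-aware variant. Everything in it is an illustrative assumption, not the authors' implementation: the function name cmlm_corrupt, the 15% masking rate, the <mask> token, and the span-sampling rule are all hypothetical.

```python
import random

MASK = "<mask>"  # hypothetical mask token, not taken from the paper

def cmlm_corrupt(chars, mask_prob=0.15, boundaries=None, rng=None):
    """Return a (corrupted, target) pair for character-level masked LM.

    chars:      list of characters for one word, e.g. list("walking")
    boundaries: optional sorted indices where morphemes start
                (e.g. [0, 4] for "walk|ing"); if given, whole morpheme
                spans are masked together, loosely mimicking the
                boundary-aware mask sampling described in the abstract
    """
    rng = rng or random.Random()
    corrupted = list(chars)
    if boundaries is None:
        # Standard CMLM: mask each character independently.
        for i in range(len(chars)):
            if rng.random() < mask_prob:
                corrupted[i] = MASK
    else:
        # Boundary-aware variant: sample whole morpheme spans to mask,
        # so the model must reconstruct linguistically meaningful units.
        starts = list(boundaries) + [len(chars)]
        for lo, hi in zip(starts, starts[1:]):
            # Span-level rate scaled by span length so the expected
            # fraction of masked characters stays near mask_prob
            # (an assumed heuristic, not the paper's rule).
            if rng.random() < min(1.0, mask_prob * (hi - lo)):
                for i in range(lo, hi):
                    corrupted[i] = MASK
    return corrupted, list(chars)

# Example: corrupt "walking" given the morpheme boundary walk|ing.
src, tgt = cmlm_corrupt(list("walking"), boundaries=[0, 4],
                        rng=random.Random(1))
print("input :", src)   # with this seed the stem span is masked:
                        # ['<mask>', '<mask>', '<mask>', '<mask>', 'i', 'n', 'g']
print("target:", tgt)
```

In an encoder-decoder setup like the one the paper describes, the corrupted sequence would plausibly feed the encoder while the original characters serve as the decoder target for the auxiliary objective; with boundaries=None and mask_prob effectively 0, the same pair reduces to the autoencoding objective.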

@article{wiemerslage2025_2506.05227,
  title={Improving Low-Resource Morphological Inflection via Self-Supervised Objectives},
  author={Adam Wiemerslage and Katharina von der Wense},
  journal={arXiv preprint arXiv:2506.05227},
  year={2025}
}