
The OCR Quest for Generalization: Learning to recognize low-resource alphabets with model editing

7 June 2025
Adrià Molina Rodríguez
Oriol Ramos Terrades
Josep Lladós
arXiv (abs) · PDF · HTML
Main: 21 pages · 8 figures · 4 tables · Bibliography: 7 pages
Abstract

Achieving robustness across diverse domains is crucial for the practical utility of recognition systems. While ample data availability is usually assumed, low-resource languages, such as ancient manuscripts and non-Western scripts, tend to be left out of massive pretraining and foundational techniques due to underrepresentation. In this work, we aim to build models that generalize to new data distributions, such as unseen alphabets, faster than centralized fine-tuning strategies. To do so, we take advantage of recent advances in model editing to improve the incorporation of unseen scripts (low-resource learning). In contrast to state-of-the-art meta-learning, we showcase the effectiveness of domain merging on sparse distributions of data, remaining agnostic to their relation to the overall distribution and requiring no prototyping. Even when using the exact same training data, our experiments show significant performance gains in transfer learning to new alphabets and in out-of-domain evaluation under challenging domain shifts, including historical ciphered texts and non-Latin scripts. This research contributes a novel approach to building models that can easily adopt underrepresented alphabets and thereby extend document recognition to a wider set of contexts and cultures.
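As an illustration of the general idea behind domain merging in weight space (not the authors' exact procedure, whose architecture, editing rule, and hyperparameters are not given on this page), the sketch below blends a base OCR model with an expert fine-tuned on a new alphabet by scaling and adding the expert's weight deltas. The function name, the `alpha` coefficient, and the per-tensor merging rule are assumptions for the example.

```python
# Minimal sketch of weight-space model merging for incorporating a new,
# low-resource alphabet into a pretrained recognizer. Illustration only:
# the paper's actual model-editing procedure may differ.
import copy
import torch


def merge_alphabet_expert(base_model: torch.nn.Module,
                          expert_model: torch.nn.Module,
                          alpha: float = 0.5) -> torch.nn.Module:
    """Return a model whose weights are base + alpha * (expert - base)."""
    merged = copy.deepcopy(base_model)
    base_state = base_model.state_dict()
    expert_state = expert_model.state_dict()

    merged_state = {}
    for name, base_param in base_state.items():
        expert_param = expert_state[name]
        if torch.is_floating_point(base_param):
            # "Task vector" for this tensor: the expert's deviation from the base.
            delta = expert_param - base_param
            merged_state[name] = base_param + alpha * delta
        else:
            # Leave integer buffers (e.g. counters) untouched.
            merged_state[name] = base_param

    merged.load_state_dict(merged_state)
    return merged
```

In this kind of scheme, `alpha` trades off retention of the base distribution against adaptation to the new alphabet; both models must share the same architecture and parameter names.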

@article{rodríguez2025_2506.06761,
  title={The OCR Quest for Generalization: Learning to recognize low-resource alphabets with model editing},
  author={Adrià Molina Rodríguez and Oriol Ramos Terrades and Josep Lladós},
  journal={arXiv preprint arXiv:2506.06761},
  year={2025}
}