
The Devil is in Fine-tuning and Long-tailed Problems: A New Benchmark for Scene Text Detection

Abstract

Scene text detection has seen the emergence of high-performing methods that excel on academic benchmarks. However, these detectors often fail to replicate such success in real-world scenarios. Through extensive experiments, we uncover two key factors contributing to this discrepancy. First, a \textit{Fine-tuning Gap}, where models leverage the \textit{Dataset-Specific Optimization} (DSO) paradigm for one domain at the cost of reduced effectiveness in others, leads to inflated performance on academic benchmarks. Second, the suboptimal performance in practical settings is primarily attributed to the long-tailed distribution of texts, where detectors struggle with rare and complex categories such as artistic or overlapped text. Given that the DSO paradigm can undermine the generalization ability of models, we advocate a \textit{Joint-Dataset Learning} (JDL) protocol to alleviate the Fine-tuning Gap. Additionally, an error analysis identifies three major categories and 13 subcategories of challenges in long-tailed scene text, upon which we propose a Long-Tailed Benchmark (LTB). LTB enables a comprehensive evaluation of a detector's ability to handle a diverse range of long-tailed challenges. We further introduce MAEDet, a self-supervised learning-based method, as a strong baseline for LTB. The code is available at this https URL.

@article{cao2025_2505.15649,
  title={The Devil is in Fine-tuning and Long-tailed Problems: A New Benchmark for Scene Text Detection},
  author={Tianjiao Cao and Jiahao Lyu and Weichao Zeng and Weimin Mu and Yu Zhou},
  journal={arXiv preprint arXiv:2505.15649},
  year={2025}
}