Parameter-Efficient Fine-Tuning of DINOv2 for Large-Scale Font Classification

Daniel Chen
Zaria Zinn
Marcus Lowe
Main: 8 pages · 7 figures · 3 tables · Bibliography: 1 page · Appendix: 7 pages
Abstract

We present a font classification system capable of identifying 394 font families from rendered text images. Our approach fine-tunes a DINOv2 Vision Transformer using Low-Rank Adaptation (LoRA), achieving approximately 86% top-1 accuracy while training fewer than 1% of the model's 87.2M parameters. We introduce a synthetic dataset generation pipeline that renders Google Fonts at scale with diverse augmentations including randomized colors, alignment, line wrapping, and Gaussian noise, producing training images that generalize to real-world typographic samples. The model incorporates built-in preprocessing to ensure consistency between training and inference, and is deployed as a HuggingFace Inference Endpoint. We release the model, dataset, and full training pipeline as open-source resources.
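The abstract's claim that fewer than 1% of the model's 87.2M parameters are trained follows directly from how LoRA sizes its adapters. As a minimal sketch: for a ViT-B backbone (12 transformer layers, hidden size 768), adapting two projections per layer with rank-r factors adds only 2·hidden·r parameters per projection. The rank (16) and the choice of adapted projections below are illustrative assumptions, not values stated in the abstract.

```python
# Hedged sketch: estimate the LoRA trainable-parameter fraction for a
# ViT-B-sized backbone (12 layers, hidden size 768, ~87.2M total params).
# rank=16 and two adapted projections per layer (e.g. query/value) are
# assumptions for illustration, not figures taken from the paper.

def lora_trainable_params(num_layers=12, hidden=768, rank=16, targets_per_layer=2):
    # Each adapted projection gains two low-rank factors:
    # A (hidden x rank) and B (rank x hidden).
    per_projection = 2 * hidden * rank
    return num_layers * targets_per_layer * per_projection

TOTAL_PARAMS = 87_200_000
trainable = lora_trainable_params()
fraction = trainable / TOTAL_PARAMS
print(trainable, f"{100 * fraction:.2f}%")  # 589824, "0.68%"
```

Under these assumed settings the adapters hold about 590K parameters, roughly 0.68% of the backbone, consistent with the "fewer than 1%" figure; a classification head for 394 classes would add a few hundred thousand more, still keeping the trainable share small.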
