
From LLM-anation to LLM-orchestrator: Coordinating Small Models for Data Labeling

Main: 8 pages, 4 figures, 9 tables; bibliography: 3 pages; appendix: 5 pages
Abstract

Although the annotation paradigm based on Large Language Models (LLMs) has made significant breakthroughs in recent years, its practical deployment still faces two core bottlenecks: first, calling commercial APIs for large-scale annotation is very expensive; second, in scenarios that require fine-grained semantic understanding, such as sentiment classification and toxicity classification, the annotation accuracy of LLMs can even fall below that of Small Language Models (SLMs) specialized for the task. To address these problems, we propose a new paradigm of multi-model cooperative annotation and, based on it, design a fully automatic annotation framework, AutoAnnotator. Specifically, AutoAnnotator consists of two layers. The upper meta-controller layer uses the generation and reasoning capabilities of LLMs to select SLMs for annotation, automatically generate annotation code, and verify difficult samples; the lower task-specialist layer consists of multiple SLMs that annotate via multi-model voting. In addition, we use the difficult samples identified by the meta-controller layer's secondary review as a reinforcement learning set and fine-tune the SLMs in stages through a continual learning strategy, thereby improving their generalization. Extensive experiments show that AutoAnnotator outperforms existing open-source and API LLMs under zero-shot, one-shot, CoT, and majority-voting settings. Notably, AutoAnnotator reduces annotation cost by 74.15% compared with annotating directly with GPT-3.5-turbo, while improving accuracy by 6.21%. Project page: this https URL.
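The two-layer division of labor described above can be illustrated with a minimal sketch. The function names and the agreement threshold below are assumptions for illustration, not the paper's actual implementation: several SLMs vote on each sample, and samples whose vote falls short of full agreement are escalated to the LLM meta-controller for secondary review (and would also be collected as the difficult-sample set for later fine-tuning).

```python
from collections import Counter

def annotate(sample, slm_labelers, llm_review, agreement=1.0):
    """Label a sample by SLM majority vote; escalate low-agreement
    (difficult) samples to the LLM meta-controller for review.

    Returns (label, is_difficult)."""
    votes = [labeler(sample) for labeler in slm_labelers]
    label, count = Counter(votes).most_common(1)[0]
    if count / len(votes) < agreement:
        # Disagreement among SLMs: secondary review by the LLM.
        return llm_review(sample), True
    return label, False

# Toy stand-ins for SLM classifiers and the LLM reviewer.
slms = [lambda s: "positive", lambda s: "positive", lambda s: "negative"]
llm = lambda s: "positive"

label, difficult = annotate("great movie!", slms, llm)
# 2/3 agreement < 1.0, so this sample is escalated to the LLM.
```

The `difficult` flag marks samples that would be added to the reinforcement learning set for the staged continual-learning fine-tuning of the SLMs.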

@article{lu2025_2506.16393,
  title={From LLM-anation to LLM-orchestrator: Coordinating Small Models for Data Labeling},
  author={Yao Lu and Zhaiyuan Ji and Jiawei Du and Yu Shanqing and Qi Xuan and Tianyi Zhou},
  journal={arXiv preprint arXiv:2506.16393},
  year={2025}
}