
No Free Lunch in Active Learning: LLM Embedding Quality Dictates Query Strategy Success

Main: 2 Pages
10 Figures
5 Tables
Appendix: 14 Pages
Abstract

The advent of large language models (LLMs) capable of producing general-purpose representations lets us revisit the practicality of deep active learning (AL): by leveraging frozen LLM embeddings, we can mitigate the computational cost of iteratively fine-tuning large backbones. This study establishes a benchmark and systematically investigates the influence of LLM embedding quality on query strategies in deep AL. We employ five top-performing models from the Massive Text Embedding Benchmark (MTEB) leaderboard and two baselines across ten diverse text classification tasks. Our findings reveal key insights: First, initializing the labeled pool with diversity-based sampling synergizes with high-quality embeddings, boosting performance in early AL iterations. Second, the choice of the optimal query strategy is sensitive to embedding quality. While the computationally inexpensive Margin sampling can achieve performance spikes on specific datasets, we find that strategies like Badge exhibit greater robustness across tasks. Importantly, their effectiveness is often enhanced when paired with higher-quality embeddings. Our results emphasize the need for context-specific evaluation of AL strategies, as performance heavily depends on embedding quality and the target task.
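To make the two techniques highlighted in the abstract concrete, the sketch below shows (i) a diversity-based initialization of the labeled pool via k-means over frozen embeddings and (ii) Margin sampling, which queries the unlabeled points with the smallest gap between the top two predicted class probabilities. This is a minimal illustration, not the paper's implementation: the arrays `embeddings`, `labels`, and `labeled_idx`, the logistic-regression probe, and all parameter values are assumptions chosen for the example.

```python
# Minimal sketch of diversity-based pool initialization and Margin sampling
# over frozen LLM embeddings. All names and hyperparameters are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression


def diversity_init(embeddings, pool_size=50, seed=0):
    """Seed the labeled pool with diverse points: cluster the frozen
    embeddings and take the sample closest to each cluster centroid."""
    km = KMeans(n_clusters=pool_size, n_init=10, random_state=seed).fit(embeddings)
    dists = km.transform(embeddings)  # (n_samples, pool_size) distances to centroids
    # Closest sample per centroid; duplicates are possible but rare in practice.
    return np.unique([np.argmin(dists[:, c]) for c in range(pool_size)])


def margin_query(embeddings, labels, labeled_idx, batch_size=32):
    """Return indices of unlabeled samples with the smallest margin
    between the top-1 and top-2 predicted class probabilities."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(embeddings[labeled_idx], labels[labeled_idx])

    unlabeled_idx = np.setdiff1d(np.arange(len(embeddings)), labeled_idx)
    probs = clf.predict_proba(embeddings[unlabeled_idx])

    # Small margin = high uncertainty = informative query candidate.
    sorted_probs = np.sort(probs, axis=1)
    margins = sorted_probs[:, -1] - sorted_probs[:, -2]
    return unlabeled_idx[np.argsort(margins)[:batch_size]]
```

In an AL loop, one would call `diversity_init` once on the pre-computed embeddings, then alternate `margin_query`, label acquisition, and classifier retraining; only the lightweight probe is refit, while the LLM backbone stays frozen throughout.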

@article{rauch2025_2506.01992,
  title={No Free Lunch in Active Learning: LLM Embedding Quality Dictates Query Strategy Success},
  author={Lukas Rauch and Moritz Wirth and Denis Huseljic and Marek Herde and Bernhard Sick and Matthias Aßenmacher},
  journal={arXiv preprint arXiv:2506.01992},
  year={2025}
}