
Semantic Outlier Removal with Embedding Models and LLMs

Main: 6 pages · 3 figures · 4 tables · Bibliography: 1 page · Appendix: 3 pages
Abstract

Modern text processing pipelines demand robust methods to remove extraneous content while preserving a document's core message. Traditional approaches such as HTML boilerplate extraction or keyword filters often fail in multilingual settings and struggle with context-sensitive nuances, whereas Large Language Models (LLMs) offer improved quality at high computational cost. We introduce SORE (Semantic Outlier Removal), a cost-effective, transparent method that leverages multilingual sentence embeddings and approximate nearest-neighbor search to identify and excise unwanted text segments. By first identifying core content via metadata embedding and then flagging segments that either closely match predefined outlier groups or deviate significantly from the core, SORE achieves near-LLM extraction precision at a fraction of the cost. Experiments on HTML datasets demonstrate that SORE outperforms structural methods and yields high precision in diverse scenarios. Our system is currently deployed in production, processing millions of documents daily across multiple languages while maintaining both efficiency and accuracy. To facilitate reproducibility and further research, we release our implementation and evaluation datasets.
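The abstract's two-stage filter can be sketched roughly as follows. This is a minimal illustration based only on the high-level description above, not the authors' released implementation: the function names, the cosine-similarity scoring, and both thresholds are assumptions for illustration, and real sentence embeddings (e.g. from a multilingual encoder) would replace the toy vectors shown in the usage example.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_segments(seg_vecs, core_vec, outlier_vecs,
                    outlier_thresh=0.8, core_thresh=0.3):
    """Return indices of segments to keep.

    A segment is dropped if it either (a) closely matches any
    predefined outlier-group embedding, or (b) deviates too far
    from the core-content embedding. Both thresholds are
    illustrative, not values from the paper.
    """
    kept = []
    for i, v in enumerate(seg_vecs):
        # (a) close to a known outlier group (e.g. boilerplate, nav text)
        if any(cosine(v, o) >= outlier_thresh for o in outlier_vecs):
            continue
        # (b) semantically distant from the document's core content
        if cosine(v, core_vec) < core_thresh:
            continue
        kept.append(i)
    return kept

# Toy 2-D "embeddings": core points along x, one outlier group along y.
core = np.array([1.0, 0.0])
outliers = [np.array([0.0, 1.0])]
segments = [np.array([1.0, 0.1]),   # on-topic -> kept
            np.array([0.0, 1.0]),   # matches outlier group -> dropped
            np.array([0.9, 0.05])]  # on-topic -> kept
print(filter_segments(segments, core, outliers))  # → [0, 2]
```

In a production setting, the pairwise outlier comparison would be replaced by the approximate nearest-neighbor search the abstract mentions, so each segment is checked against outlier groups in sublinear time.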

@article{akbiyik2025_2506.16644,
  title={Semantic Outlier Removal with Embedding Models and LLMs},
  author={Eren Akbiyik and João Almeida and Rik Melis and Ritu Sriram and Viviana Petrescu and Vilhjálmur Vilhjálmsson},
  journal={arXiv preprint arXiv:2506.16644},
  year={2025}
}