
PhantomHunter: Detecting Unseen Privately-Tuned LLM-Generated Text via Family-Aware Learning

Yuhui Shi
Yehan Yang
Qiang Sheng
Hao Mi
Beizhe Hu
Chaoxi Xu
Juan Cao
Main: 12 pages · Bibliography: 5 pages · 3 figures · 7 tables
Abstract

With the popularity of large language models (LLMs), undesirable societal problems such as misinformation production and academic misconduct have become more severe, making LLM-generated text detection more important than ever. Although existing methods have made remarkable progress, a new challenge posed by text from privately tuned LLMs remains underexplored. Users can easily obtain a private LLM by fine-tuning an open-source one with a private corpus, causing a significant performance drop in existing detectors in practice. To address this issue, we propose PhantomHunter, an LLM-generated text detector specialized for detecting text from unseen, privately-tuned LLMs. Its family-aware learning framework captures family-level traits shared across base models and their derivatives, instead of memorizing individual model characteristics. Experiments on data from the LLaMA, Gemma, and Mistral families show its superiority over 7 baselines and 3 industrial services, with F1 scores above 96%.
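The abstract does not spell out the architecture, but the core idea of family-aware learning can be illustrated with a minimal sketch: instead of one binary classifier that memorizes individual models, the detector predicts which base-model family the text comes from and routes the human-vs-machine decision through per-family experts. Everything below (class name, feature dimensions, the soft-gating scheme) is an illustrative assumption, not the paper's actual implementation.

# Hypothetical sketch of family-aware detection, NOT the paper's code.
# Assumes per-text features (e.g., derived from base-LLM token probabilities)
# are already extracted into a fixed-size vector.
import torch
import torch.nn as nn

class FamilyAwareDetector(nn.Module):
    def __init__(self, feat_dim: int = 768, n_families: int = 3, hidden: int = 256):
        super().__init__()
        # Shared encoder over the input features.
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        # Head 1: which base-model family produced the text
        # (e.g., LLaMA / Gemma / Mistral).
        self.family_head = nn.Linear(hidden, n_families)
        # Head 2: one human-vs-machine expert per family; fine-tuned
        # derivatives are handled by their family's expert.
        self.experts = nn.ModuleList([nn.Linear(hidden, 2) for _ in range(n_families)])

    def forward(self, feats: torch.Tensor):
        h = self.encoder(feats)                      # (batch, hidden)
        family_logits = self.family_head(h)          # (batch, n_families)
        gate = family_logits.softmax(dim=-1)         # soft family assignment
        expert_logits = torch.stack(
            [e(h) for e in self.experts], dim=1)     # (batch, n_families, 2)
        # Family-weighted mixture of binary (human vs. AI) predictions.
        logits = (gate.unsqueeze(-1) * expert_logits).sum(dim=1)
        return logits, family_logits

detector = FamilyAwareDetector()
x = torch.randn(4, 768)                              # dummy feature batch
logits, fam = detector(x)
print(logits.shape, fam.shape)                       # (4, 2) and (4, 3)

Training such a model would combine a cross-entropy loss on both heads, so the family signal regularizes the detector toward family-level traits that transfer to unseen, privately fine-tuned derivatives.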

@article{shi2025_2506.15683,
  title={PhantomHunter: Detecting Unseen Privately-Tuned LLM-Generated Text via Family-Aware Learning},
  author={Yuhui Shi and Yehan Yang and Qiang Sheng and Hao Mi and Beizhe Hu and Chaoxi Xu and Juan Cao},
  journal={arXiv preprint arXiv:2506.15683},
  year={2025}
}