ResearchTrend.AI
You've Changed: Detecting Modification of Black-Box Large Language Models

14 April 2025
Alden Dima
James R. Foulds
Shimei Pan
Philip G. Feldman
Abstract

Large Language Models (LLMs) are often provided as a service via an API, making it challenging for developers to detect changes in their behavior. We present an approach to monitor LLMs for changes by comparing the distributions of linguistic and psycholinguistic features of generated text. Our method uses a statistical test to determine whether the distributions of features from two samples of text are equivalent, allowing developers to identify when an LLM has changed. We demonstrate the effectiveness of our approach using five OpenAI completion models and Meta's Llama 3 70B chat model. Our results show that simple text features coupled with a statistical test can distinguish between language models. We also explore the use of our approach to detect prompt injection attacks. Our work enables frequent LLM change monitoring and avoids computationally expensive benchmark evaluations.
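The idea above — comparing feature distributions from two text samples with a statistical test — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses mean word length as a stand-in for the paper's linguistic and psycholinguistic features, and a hand-rolled two-sample Kolmogorov–Smirnov statistic as the distributional comparison; the sample texts are invented.

```python
def mean_word_length(text):
    """Toy linguistic feature: average word length of a response."""
    words = text.split()
    return sum(len(w) for w in words) / len(words)

def ks_statistic(a, b):
    """Two-sample KS statistic: the largest gap between the two
    empirical CDFs, evaluated at every observed value."""
    def ecdf(sample, x):
        return sum(1 for v in sample if v <= x) / len(sample)
    values = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in values)

# Hypothetical responses collected before and after a suspected model change.
sample_old = ["the cat sat on the mat", "a dog ran fast"]
sample_new = ["extraordinarily verbose circumlocution predominates here",
              "sesquipedalian constructions characterize responses"]

f_old = [mean_word_length(t) for t in sample_old]
f_new = [mean_word_length(t) for t in sample_new]

# A large statistic (near 1.0) indicates the feature distributions
# diverge, suggesting the black-box model behind the API has changed.
print(ks_statistic(f_old, f_new))  # → 1.0
```

In practice one would gather many responses per model, compute several features per response, and convert the test statistic into a p-value (e.g. via a permutation test) before flagging a change.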

@article{dima2025_2504.12335,
  title={You've Changed: Detecting Modification of Black-Box Large Language Models},
  author={Alden Dima and James Foulds and Shimei Pan and Philip Feldman},
  journal={arXiv preprint arXiv:2504.12335},
  year={2025}
}