Riddle Me This! Stealthy Membership Inference for Retrieval-Augmented Generation

1 February 2025
Ali Naseh
Yuefeng Peng
Anshuman Suri
Harsh Chaudhari
Alina Oprea
Amir Houmansadr
Communities: SILM · AAML · RALM
Abstract

Retrieval-Augmented Generation (RAG) enables Large Language Models (LLMs) to generate grounded responses by leveraging external knowledge databases without altering model parameters. Although the absence of weight tuning prevents leakage via model parameters, it introduces the risk of inference adversaries exploiting retrieved documents in the model's context. Existing methods for membership inference and data extraction often rely on jailbreaking or carefully crafted unnatural queries, which can be easily detected or thwarted with query rewriting techniques common in RAG systems. In this work, we present Interrogation Attack (IA), a membership inference technique targeting documents in the RAG datastore. By crafting natural-text queries that are answerable only with the target document's presence, our approach demonstrates successful inference with just 30 queries while remaining stealthy; straightforward detectors identify adversarial prompts from existing methods up to ~76x more frequently than those generated by our attack. We observe a 2x improvement in TPR@1%FPR over prior inference attacks across diverse RAG configurations, all while costing less than $0.02 per document inference.
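The sketch below illustrates the workflow the abstract describes: pose natural questions that only the target document can answer, query the RAG system, and threshold the agreement rate to infer membership. It is a minimal illustration based solely on the abstract, not the paper's implementation; `query_rag` (black-box access to the target RAG system) and `answers_match` (the attacker's answer-comparison heuristic) are hypothetical helpers the attacker would supply.

```python
# Hypothetical sketch of the Interrogation Attack (IA) workflow from the abstract.
# `query_rag` and `answers_match` are attacker-supplied stand-ins, not part of the paper.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Probe:
    question: str          # natural-language question derived from the target document
    expected_answer: str   # answer supported only if the target document is retrieved


def membership_score(
    probes: List[Probe],
    query_rag: Callable[[str], str],
    answers_match: Callable[[str, str], bool],
) -> float:
    """Fraction of probes the RAG system answers consistently with the target document.

    A high score suggests the document is in the retrieval datastore; a low score
    suggests it is absent. The abstract reports that ~30 such queries suffice.
    """
    hits = 0
    for probe in probes:
        response = query_rag(probe.question)  # plain natural-text query, no jailbreak
        if answers_match(response, probe.expected_answer):
            hits += 1
    return hits / len(probes)


def infer_membership(score: float, threshold: float = 0.5) -> bool:
    """Decide membership by thresholding the agreement score (threshold is illustrative)."""
    return score >= threshold
```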

View on arXiv
@article{naseh2025_2502.00306,
  title={Riddle Me This! Stealthy Membership Inference for Retrieval-Augmented Generation},
  author={Ali Naseh and Yuefeng Peng and Anshuman Suri and Harsh Chaudhari and Alina Oprea and Amir Houmansadr},
  journal={arXiv preprint arXiv:2502.00306},
  year={2025}
}