Membership Inference Attacks on Sequence Models

5 June 2025
Lorenzo Rossi
Michael Aerni
Jie Zhang
Florian Tramèr
arXiv (abs) · PDF · HTML
Abstract

Sequence models, such as Large Language Models (LLMs) and autoregressive image generators, have a tendency to memorize and inadvertently leak sensitive information. While this tendency has critical legal implications, existing tools are insufficient to audit the resulting risks. We hypothesize that those tools' shortcomings are due to mismatched assumptions. Thus, we argue that effectively measuring privacy leakage in sequence models requires leveraging the correlations inherent in sequential generation. To illustrate this, we adapt a state-of-the-art membership inference attack to explicitly model within-sequence correlations, thereby demonstrating how a strong existing attack can be naturally extended to suit the structure of sequence models. Through a case study, we show that our adaptations consistently improve the effectiveness of memorization audits without introducing additional computational costs. Our work hence serves as an important stepping stone toward reliable memorization audits for large sequence models.
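
The sketch below is a minimal, assumed illustration of the idea the abstract describes, not the paper's actual attack: it contrasts a naive membership score that averages a sequence's loss into a single number with a score that calibrates each token against a reference model and aggregates the per-token signal across the sequence. The function names, the reference-model calibration, and the synthetic data are illustrative assumptions.

# Illustrative sketch (assumed, not the paper's method): sequence-level vs.
# per-token-aggregated membership scores for one candidate sequence.

import numpy as np


def sequence_level_score(target_nll: np.ndarray) -> float:
    # Naive baseline: a single averaged negative log-likelihood per sequence.
    return -float(np.mean(target_nll))


def token_level_score(target_nll: np.ndarray, reference_nll: np.ndarray) -> float:
    # Calibrate each token against a reference model, then sum the per-token
    # evidence so that consistent within-sequence signal accumulates instead
    # of being washed out by a single average.
    per_token = reference_nll - target_nll  # positive where the target model fits the token better
    return float(np.sum(per_token))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic per-token negative log-likelihoods for one candidate sequence.
    reference_nll = rng.gamma(shape=2.0, scale=1.0, size=128)
    # A memorized (member) sequence tends to receive systematically lower
    # loss under the target model than under the reference model.
    target_nll = reference_nll - rng.uniform(0.0, 0.5, size=128)
    print("sequence-level score:", round(sequence_level_score(target_nll), 3))
    print("token-level calibrated score:", round(token_level_score(target_nll, reference_nll), 3))

In practice, both scores would be thresholded over many candidate sequences to decide membership; the per-token variant simply exposes the within-sequence correlation structure that a single averaged loss discards.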

@article{rossi2025_2506.05126,
  title={Membership Inference Attacks on Sequence Models},
  author={Lorenzo Rossi and Michael Aerni and Jie Zhang and Florian Tramèr},
  journal={arXiv preprint arXiv:2506.05126},
  year={2025}
}
Main: 5 pages · Appendix: 5 pages · Bibliography: 3 pages · 16 figures · 3 tables