ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference
Klas Leino, Matt Fredrikson
arXiv:1906.11798, 27 June 2019 [MIACV]

Papers citing "Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference"

50 / 66 papers shown
DMRL: Data- and Model-aware Reward Learning for Data Extraction
  Zhiqiang Wang, Ruoxi Cheng (07 May 2025)

TeleSparse: Practical Privacy-Preserving Verification of Deep Neural Networks
  Mohammad Maheri, Hamed Haddadi, Alex Davidson (27 Apr 2025)

Sharpness-Aware Parameter Selection for Machine Unlearning
  Saber Malekmohammadi, Hong kyu Lee, Li Xiong (08 Apr 2025) [MU]

Towards Label-Only Membership Inference Attack against Pre-trained Large Language Models
  Yu He, Boheng Li, L. Liu, Zhongjie Ba, Wei Dong, Yiming Li, Zhan Qin, Kui Ren, Cen Chen (26 Feb 2025) [MIALM]

The Canary's Echo: Auditing Privacy Risks of LLM-Generated Synthetic Text
  Matthieu Meeus, Lukas Wutschitz, Santiago Zanella Béguelin, Shruti Tople, Reza Shokri (24 Feb 2025)

The AI Security Zugzwang
  Lampis Alevizos (09 Feb 2025)

GRID: Protecting Training Graph from Link Stealing Attacks on GNN Models
  Jiadong Lou, Xu Yuan, Rui Zhang, Xingliang Yuan, Neil Gong, N. Tzeng (19 Jan 2025) [AAML]

Safeguarding System Prompts for LLMs
  Zhifeng Jiang, Zhihua Jin, Guoliang He (10 Jan 2025) [AAML, SILM]

The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD
  Thomas Steinke, Milad Nasr, Arun Ganesh, Borja Balle, Christopher A. Choquette-Choo, Matthew Jagielski, Jamie Hayes, Abhradeep Thakurta, Adam Smith, Andreas Terzis (08 Oct 2024)

Label Smoothing Improves Machine Unlearning
  Zonglin Di, Zhaowei Zhu, Jinghan Jia, Jiancheng Liu, Zafar Takhirov, Bo Jiang, Yuanshun Yao, Sijia Liu, Yang Liu (11 Jun 2024)

OSLO: One-Shot Label-Only Membership Inference Attacks
  Yuefeng Peng, Jaechul Roh, Subhransu Maji, Amir Houmansadr (27 May 2024)

Noise Masking Attacks and Defenses for Pretrained Speech Models
  Matthew Jagielski, Om Thakkar, Lun Wang (02 Apr 2024) [AAML]

State-of-the-Art Approaches to Enhancing Privacy Preservation of Machine Learning Datasets: A Survey
  Chaoyu Zhang, Shaoyu Li (25 Feb 2024) [AILaw]

Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
  Wenqi Wei, Ling Liu (02 Feb 2024)

Fundamental Limits of Membership Inference Attacks on Machine Learning Models
  Eric Aubinais, Elisabeth Gassiat, Pablo Piantanida (20 Oct 2023) [MIACV]

A Survey of What to Share in Federated Learning: Perspectives on Model Utility, Privacy Leakage, and Communication Efficiency
  Jiawei Shao, Zijian Li, Wenqiang Sun, Tailin Zhou, Yuchang Sun, Lumin Liu, Zehong Lin, Yuyi Mao, Jun Zhang (20 Jul 2023) [FedML]

Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models
  Adel M. Elmahdy, A. Salem (23 Jun 2023) [SILM]

How Spurious Features Are Memorized: Precise Analysis for Random and NTK Features
  Simone Bombari, Marco Mondelli (20 May 2023) [AAML]

Audit to Forget: A Unified Method to Revoke Patients' Private Data in Intelligent Healthcare
  Juexiao Zhou, Haoyang Li, Xingyu Liao, Bin Zhang, Wenjia He, Zhongxiao Li, Longxi Zhou, Xin Gao (20 Feb 2023) [MU]

"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
  Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy (29 Dec 2022) [AAML]

SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning
  A. Salem, Giovanni Cherubin, David Evans, Boris Köpf, Andrew Paverd, Anshuman Suri, Shruti Tople, Santiago Zanella Béguelin (21 Dec 2022)

Fine-Tuning Is All You Need to Mitigate Backdoor Attacks
  Zeyang Sha, Xinlei He, Pascal Berrang, Mathias Humbert, Yang Zhang (18 Dec 2022) [AAML]

Membership Inference Attacks Against Semantic Segmentation Models
  Tomás Chobola, Dmitrii Usynin, Georgios Kaissis (02 Dec 2022) [MIACV]

A Survey on Differential Privacy with Machine Learning and Future Outlook
  Samah Baraheem, Z. Yao (19 Nov 2022) [SyDa]

Verifiable and Provably Secure Machine Unlearning
  Thorsten Eisenhofer, Doreen Riepel, Varun Chandrasekaran, Esha Ghosh, O. Ohrimenko, Nicolas Papernot (17 Oct 2022) [AAML, MU]

Decompiling x86 Deep Neural Network Executables
  Zhibo Liu, Yuanyuan Yuan, Shuai Wang, Xiaofei Xie, Lei Ma (03 Oct 2022) [AAML]

Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models
  Sohaib Ahmad, Benjamin Fuller, Kaleel Mahmood (22 Sep 2022) [AAML]

Membership Inference Attacks and Generalization: A Causal Perspective
  Teodora Baluta, Shiqi Shen, S. Hitarth, Shruti Tople, Prateek Saxena (18 Sep 2022) [OOD, MIACV]

Membership Inference Attacks by Exploiting Loss Trajectory
  Yiyong Liu, Zhengyu Zhao, Michael Backes, Yang Zhang (31 Aug 2022)

Data Isotopes for Data Provenance in DNNs
  Emily Wenger, Xiuyu Li, Ben Y. Zhao, Vitaly Shmatikov (29 Aug 2022)

Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models
  Xinlei He, Zheng Li, Weilin Xu, Cory Cornelius, Yang Zhang (22 Aug 2022) [MIACV]

MOVE: Effective and Harmless Ownership Verification via Embedded External Features
  Yiming Li, Linghui Zhu, Xiaojun Jia, Yang Bai, Yong Jiang, Shutao Xia, Xiaochun Cao, Kui Ren (04 Aug 2022) [AAML]

The Privacy Onion Effect: Memorization is Relative
  Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr (21 Jun 2022) [PILM, MIACV]

Edge Security: Challenges and Issues
  Xin Jin, Charalampos Katsis, Fan Sang, Jiahao Sun, A. Kundu, Ramana Rao Kompella (14 Jun 2022)

NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks
  Nuo Xu, Binghui Wang, Ran Ran, Wujie Wen, Parv Venkitasubramaniam (11 Jun 2022) [AAML]

Membership Inference Attack Using Self Influence Functions
  Gilad Cohen, Raja Giryes (26 May 2022) [TDI]

Evaluating Membership Inference Through Adversarial Robustness
  Zhaoxi Zhang, L. Zhang, Xufei Zheng, Bilal Hussain Abbasi, Shengshan Hu (14 May 2022) [AAML]

How to Combine Membership-Inference Attacks on Multiple Updated Models
  Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan R. Ullman, Roxana Geambasu (12 May 2022)

You Are What You Write: Preserving Privacy in the Era of Large Language Models
  Richard Plant, V. Giuffrida, Dimitra Gkatzia (20 Apr 2022) [PILM]

Do Language Models Plagiarize?
  Jooyoung Lee, Thai Le, Jinghui Chen, Dongwon Lee (15 Mar 2022)

Understanding Rare Spurious Correlations in Neural Networks
  Yao-Yuan Yang, Chi-Ning Chou, Kamalika Chaudhuri (10 Feb 2022) [AAML]

Membership Inference Attacks and Defenses in Neural Network Pruning
  Xiaoyong Yuan, Lan Zhang (07 Feb 2022) [AAML]

Membership Inference Attacks From First Principles
  Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr (07 Dec 2021) [MIACV, MIALM]

Defending against Model Stealing via Verifying Embedded External Features
  Yiming Li, Linghui Zhu, Xiaojun Jia, Yong Jiang, Shutao Xia, Xiaochun Cao (07 Dec 2021) [AAML]

SHAPr: An Efficient and Versatile Membership Privacy Risk Metric for Machine Learning
  Vasisht Duddu, S. Szyller, Nadarajah Asokan (04 Dec 2021)

Enhanced Membership Inference Attacks against Machine Learning Models
  Jiayuan Ye, Aadyaa Maddi, S. K. Murakonda, Vincent Bindschaedler, Reza Shokri (18 Nov 2021) [MIALM, MIACV]

Property Inference Attacks Against GANs
  Junhao Zhou, Yufei Chen, Chao Shen, Yang Zhang (15 Nov 2021) [AAML, MIACV]

SoK: Machine Learning Governance
  Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot (20 Sep 2021)

Membership Inference Attacks Against Recommender Systems
  Minxing Zhang, Z. Ren, Zihan Wang, Pengjie Ren, Zhumin Chen, Pengfei Hu, Yang Zhang (16 Sep 2021) [MIACV, AAML]

Membership Inference Attack and Defense for Wireless Signal Classifiers with Deep Learning
  Yi Shi, Y. Sagduyu (22 Jul 2021)