Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
arXiv:2204.00032 (v2, latest) · 31 March 2022
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini
Community: MIACV
Papers citing "Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets" (50 of 56 papers shown)
Each entry: title · authors · community tags (where present) · date · citation count.
Covert Attacks on Machine Learning Training in Passively Secure MPC · Matthew Jagielski, Daniel Escudero, Rahul Rachuri, Peter Scholl · 21 May 2025 · 0 citations
Differentially Private Kernel Density Estimation · Erzhi Liu, Jerry Yao-Chieh Hu, Alex Reneau, Zhao Song, Han Liu · 03 Sep 2024 · 3 citations
Pre-trained Encoder Inference: Revealing Upstream Encoders In Downstream Machine Learning Services · Shaopeng Fu, Xuexue Sun, Ke Qing, Tianhang Zheng, Di Wang · AAML, MIACV, SILM · 05 Aug 2024 · 0 citations
Differentially Private Synthetic Data via Foundation Model APIs 1: Images · Zinan Lin, Sivakanth Gopi, Janardhan Kulkarni, Harsha Nori, Sergey Yekhanin · 24 May 2023 · 44 citations
Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks · Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri · MIALM · 08 Mar 2022 · 166 citations
Quantifying Memorization Across Neural Language Models · Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang · PILM · 15 Feb 2022 · 630 citations
Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification · Yuxin Wen, Jonas Geiping, Liam H. Fowl, Micah Goldblum, Tom Goldstein · FedML · 01 Feb 2022 · 97 citations
Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models · Liam H. Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein · FedML · 29 Jan 2022 · 56 citations
Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models · Shagufta Mehnaz, S. V. Dibbo, Ehsanul Kabir, Ninghui Li, E. Bertino · MIACV · 23 Jan 2022 · 63 citations
Membership Inference Attacks From First Principles · Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr · MIACV, MIALM · 07 Dec 2021 · 709 citations
When the Curious Abandon Honesty: Federated Learning Is Not Private · Franziska Boenisch, Adam Dziedzic, R. Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot · FedML, AAML · 06 Dec 2021 · 186 citations
Enhanced Membership Inference Attacks against Machine Learning Models · Jiayuan Ye, Aadyaa Maddi, S. K. Murakonda, Vincent Bindschaedler, Reza Shokri · MIALM, MIACV · 18 Nov 2021 · 255 citations
On the Importance of Difficulty Calibration in Membership Inference Attacks · Lauren Watson, Chuan Guo, Graham Cormode, Alex Sablayrolles · 15 Nov 2021 · 134 citations
Adversarial Examples Make Strong Poisons · Liam H. Fowl, Micah Goldblum, Ping Yeh-Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein · SILM · 21 Jun 2021 · 136 citations
Antipodes of Label Differential Privacy: PATE and ALIBI · Mani Malek, Ilya Mironov, Karthik Prasad, I. Shilov, Florian Tramèr · 07 Jun 2021 · 66 citations
Learning Transferable Visual Models From Natural Language Supervision · Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya A. Ramesh, Gabriel Goh, ..., Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever · CLIP, VLM · 26 Feb 2021 · 29,926 citations
Property Inference From Poisoning · Melissa Chase, Esha Ghosh, Saeed Mahloujifar · MIACV · 26 Jan 2021 · 83 citations
Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning · Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini · MIACV, FedML · 11 Jan 2021 · 226 citations
Unadversarial Examples: Designing Objects for Robust Vision · Hadi Salman, Andrew Ilyas, Logan Engstrom, Sai H. Vemprala, Aleksander Madry, Ashish Kapoor · WIGM · 22 Dec 2020 · 59 citations
Extracting Training Data from Large Language Models · Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, Basel Alomair, Ulfar Erlingsson, Alina Oprea, Colin Raffel · MLAU, SILM · 14 Dec 2020 · 1,956 citations
Training Production Language Models without Memorizing User Data · Swaroop Indra Ramaswamy, Om Thakkar, Rajiv Mathews, Galen Andrew, H. B. McMahan, Françoise Beaufays · FedML · 21 Sep 2020 · 92 citations
Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching · Jonas Geiping, Liam H. Fowl, Wenjie Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein · AAML · 04 Sep 2020 · 221 citations
You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion · R. Schuster, Congzheng Song, Eran Tromer, Vitaly Shmatikov · SILM, AAML · 05 Jul 2020 · 160 citations
Auditing Differentially Private Machine Learning: How Private is Private SGD? · Matthew Jagielski, Jonathan R. Ullman, Alina Oprea · FedML · 13 Jun 2020 · 250 citations
Revisiting Membership Inference Under Realistic Assumptions · Bargav Jayaraman, Lingxiao Wang, Katherine Knipmeyer, Quanquan Gu, David Evans · 21 May 2020 · 151 citations
Blind Backdoors in Deep Learning Models · Eugene Bagdasaryan, Vitaly Shmatikov · AAML, FedML, SILM · 08 May 2020 · 305 citations
Analyzing Information Leakage of Updates to Natural Language Models · Santiago Zanella Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, O. Ohrimenko, Boris Köpf, Marc Brockschmidt · ELM, MIACV, FedML, PILM, KELM · 17 Dec 2019 · 127 citations
Advances and Open Problems in Federated Learning · Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao · FedML, AI4CE · 10 Dec 2019 · 6,294 citations
Label-Consistent Backdoor Attacks · Alexander Turner, Dimitris Tsipras, Aleksander Madry · AAML · 05 Dec 2019 · 389 citations
Deep k-NN Defense against Clean-label Data Poisoning Attacks · Neehar Peri, Neal Gupta, Wenjie Huang, Liam H. Fowl, Chen Zhu, Soheil Feizi, Tom Goldstein, John P. Dickerson · AAML · 29 Sep 2019 · 6 citations
MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples · Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong · 23 Sep 2019 · 395 citations
White-box vs Black-box: Bayes Optimal Strategies for Membership Inference · Alexandre Sablayrolles, Matthijs Douze, Yann Ollivier, Cordelia Schmid, Hervé Jégou · MIACV · 29 Aug 2019 · 369 citations
Does Learning Require Memorization? A Short Tale about a Long Tail · Vitaly Feldman · TDI · 12 Jun 2019 · 502 citations
Transferable Clean-Label Poisoning Attacks on Deep Neural Nets · Chen Zhu, Wenjie Huang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein · 15 May 2019 · 285 citations
Analyzing Federated Learning through an Adversarial Lens · A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo · FedML · 29 Nov 2018 · 1,060 citations
Spectral Signatures in Backdoor Attacks · Brandon Tran, Jerry Li, Aleksander Madry · AAML · 01 Nov 2018 · 797 citations
Machine Learning with Membership Privacy using Adversarial Regularization · Milad Nasr, Reza Shokri, Amir Houmansadr · FedML, MIACV · 16 Jul 2018 · 475 citations
How To Backdoor Federated Learning · Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, D. Estrin, Vitaly Shmatikov · SILM, FedML · 02 Jul 2018 · 1,931 citations
Exploiting Unintended Feature Leakage in Collaborative Learning · Luca Melis, Congzheng Song, Emiliano De Cristofaro, Vitaly Shmatikov · FedML · 10 May 2018 · 1,482 citations
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks · Ali Shafahi, Wenjie Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein · AAML · 03 Apr 2018 · 1,097 citations
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning · Matthew Jagielski, Alina Oprea, Battista Biggio, Chang-rui Liu, Cristina Nita-Rotaru, Yue Liu · AAML · 01 Apr 2018 · 764 citations
Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks · Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras · AAML · 19 Mar 2018 · 287 citations
Sever: A Robust Meta-Algorithm for Stochastic Optimization · Ilias Diakonikolas, Gautam Kamath, D. Kane, Jerry Li, Jacob Steinhardt, Alistair Stewart · 07 Mar 2018 · 289 citations
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks · Nicholas Carlini, Chang-rui Liu, Ulfar Erlingsson, Jernej Kos, Basel Alomair · 22 Feb 2018 · 1,150 citations
Neural Trojans · Yuntao Liu, Yang Xie, Ankur Srivastava · AAML · 03 Oct 2017 · 357 citations
Machine Learning Models that Remember Too Much · Congzheng Song, Thomas Ristenpart, Vitaly Shmatikov · VLM · 22 Sep 2017 · 519 citations
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization · Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli · AAML · 29 Aug 2017 · 633 citations
Towards Deep Learning Models Resistant to Adversarial Attacks · Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · SILM, OOD · 19 Jun 2017 · 12,151 citations
Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning · Briland Hitaj, G. Ateniese, Fernando Perez-Cruz · FedML · 24 Feb 2017 · 1,413 citations
Learning from Untrusted Data · Moses Charikar, Jacob Steinhardt, Gregory Valiant · FedML, OOD · 07 Nov 2016 · 300 citations