Covert Attacks on Machine Learning Training in Passively Secure MPC
arXiv:2505.17092 · 21 May 2025
Matthew Jagielski, Daniel Escudero, Rahul Rachuri, Peter Scholl

Papers citing "Covert Attacks on Machine Learning Training in Passively Secure MPC" (35 papers)
  • Bounding data reconstruction attacks with the hypothesis testing interpretation of differential privacy. Georgios Kaissis, Jamie Hayes, Alexander Ziller, Daniel Rueckert. AAML. 08 Jul 2023.
  • A Note On Interpreting Canary Exposure. Matthew Jagielski. 31 May 2023.
  • To Be Forgotten or To Be Fair: Unveiling Fairness Implications of Machine Unlearning Methods. Dawen Zhang, Shidong Pan, Thong Hoang, Zhenchang Xing, Mark Staples, Xiwei Xu, Lina Yao, Qinghua Lu, Liming Zhu. MU. 07 Feb 2023.
  • Editing Models with Task Arithmetic. Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, Ali Farhadi. KELM, MoMe, MU. 08 Dec 2022.
  • Amplifying Membership Exposure via Data Poisoning. Yufei Chen, Chao Shen, Yun Shen, Cong Wang, Yang Zhang. AAML. 01 Nov 2022.
  • The Privacy Onion Effect: Memorization is Relative. Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr. PILM, MIACV. 21 Jun 2022.
  • Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini. MIACV. 31 Mar 2022.
  • Reconstructing Training Data with Informed Adversaries. Borja Balle, Giovanni Cherubin, Jamie Hayes. MIACV, AAML. 13 Jan 2022.
  • Membership Inference Attacks From First Principles. Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr. MIACV, MIALM. 07 Dec 2021.
  • When the Curious Abandon Honesty: Federated Learning Is Not Private. Franziska Boenisch, Adam Dziedzic, R. Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot. FedML, AAML. 06 Dec 2021.
  • Enhanced Membership Inference Attacks against Machine Learning Models. Jiayuan Ye, Aadyaa Maddi, S. K. Murakonda, Vincent Bindschaedler, Reza Shokri. MIALM, MIACV. 18 Nov 2021.
  • CrypTen: Secure Multi-Party Computation Meets Machine Learning. Brian Knott, Shobha Venkataraman, Awni Y. Hannun, Shubho Sengupta, Mark Ibrahim, Laurens van der Maaten. 02 Sep 2021.
  • Secure Quantized Training for Deep Learning. Marcel Keller, Ke Sun. MQ. 01 Jul 2021.
  • Handcrafted Backdoors in Deep Neural Networks. Sanghyun Hong, Nicholas Carlini, Alexey Kurakin. 08 Jun 2021.
  • CryptGPU: Fast Privacy-Preserving Machine Learning on the GPU. Sijun Tan, Brian Knott, Yuan Tian, David J. Wu. BDL, FedML. 22 Apr 2021.
  • Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini. MIACV, FedML. 11 Jan 2021.
  • Extracting Training Data from Large Language Models. Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel. MLAU, SILM. 14 Dec 2020.
  • Subpopulation Data Poisoning Attacks. Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea. AAML, SILM. 24 Jun 2020.
  • On Adversarial Bias and the Robustness of Fair Machine Learning. Hong Chang, Ta Duy Nguyen, S. K. Murakonda, Ehsan Kazemi, Reza Shokri. FaML, OOD, FedML. 15 Jun 2020.
  • Auditing Differentially Private Machine Learning: How Private is Private SGD? Matthew Jagielski, Jonathan R. Ullman, Alina Oprea. FedML. 13 Jun 2020.
  • Poisoning Attacks on Algorithmic Fairness. David Solans, Battista Biggio, Carlos Castillo. AAML. 15 Apr 2020.
  • FALCON: Honest-Majority Maliciously Secure Framework for Private Deep Learning. Sameer Wagh, Shruti Tople, Fabrice Benhamouda, E. Kushilevitz, Prateek Mittal, T. Rabin. FedML. 05 Apr 2020.
  • Reverse-Engineering Deep ReLU Networks. David Rolnick, Konrad Paul Kording. 02 Oct 2019.
  • CrypTFlow: Secure TensorFlow Inference. Nishant Kumar, Mayank Rathee, Nishanth Chandran, Divya Gupta, Aseem Rastogi, Rahul Sharma. 16 Sep 2019.
  • High Accuracy and High Fidelity Extraction of Neural Networks. Matthew Jagielski, Nicholas Carlini, David Berthelot, Alexey Kurakin, Nicolas Papernot. MLAU, MIACV. 03 Sep 2019.
  • Analyzing Federated Learning through an Adversarial Lens. A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo. FedML. 29 Nov 2018.
  • Private Machine Learning in TensorFlow using Secure Computation. Morten Dahl, Jason V. Mancuso, Yann Dupis, Ben Decoste, Morgan Giraud, Ian Livingstone, Justin Patriquin, Gavin Uhma. FedML. 18 Oct 2018.
  • How To Backdoor Federated Learning. Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, D. Estrin, Vitaly Shmatikov. SILM, FedML. 02 Jul 2018.
  • Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks. Ali Shafahi, Wenjie Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein. AAML. 03 Apr 2018.
  • Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. Matthew Jagielski, Alina Oprea, Battista Biggio, Chang-rui Liu, Cristina Nita-Rotaru, Yue Liu. AAML. 01 Apr 2018.
  • Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks. Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras. AAML. 19 Mar 2018.
  • Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, D. Song. AAML, SILM. 15 Dec 2017.
  • BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain. Tianyu Gu, Brendan Dolan-Gavitt, S. Garg. SILM. 22 Aug 2017.
  • Membership Inference Attacks against Machine Learning Models. Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov. SLR, MIALM, MIACV. 18 Oct 2016.
  • Poisoning Attacks against Support Vector Machines. Battista Biggio, B. Nelson, Pavel Laskov. AAML. 27 Jun 2012.