
SoK: Unintended Interactions among Machine Learning Defenses and Risks
arXiv:2312.04542 · 7 December 2023
Vasisht Duddu, S. Szyller, Nadarajah Asokan · AAML

Papers citing "SoK: Unintended Interactions among Machine Learning Defenses and Risks"

50 / 93 papers shown

Purifier: Defending Data Inference Attacks via Transforming Confidence Scores
Ziqi Yang, Li-Juan Wang, D. Yang, Jie Wan, Ziming Zhao, E. Chang, Fan Zhang, Kui Ren · AAML · 01 Dec 2022

Fairness Increases Adversarial Vulnerability
Cuong Tran, Keyu Zhu, Ferdinando Fioretto, Pascal Van Hentenryck · 21 Nov 2022

Differentially Private Optimizers Can Learn Adversarially Robust Models
Yuan Zhang, Zhiqi Bu · 16 Nov 2022

Membership Inference Attacks and Generalization: A Causal Perspective
Teodora Baluta, Shiqi Shen, S. Hitarth, Shruti Tople, Prateek Saxena · OOD, MIACV · 18 Sep 2022

Distribution inference risks: Identifying and mitigating sources of leakage
Valentin Hartmann, Léo Meynent, Maxime Peyrard, Dimitrios Dimitriadis, Shruti Tople, Robert West · MIACV · 18 Sep 2022

Survey on Fairness Notions and Related Tensions
Guilherme Alves, Fabien Bernier, Miguel Couceiro, K. Makhlouf, C. Palamidessi, Sami Zhioua · FaML · 16 Sep 2022

Model Inversion Attacks against Graph Neural Networks
Zaixin Zhang, Qi Liu, Zhenya Huang, Hao Wang, Cheekong Lee, Enhong Chen · AAML · 16 Sep 2022

Are Attribute Inference Attacks Just Imputation?
Bargav Jayaraman, David Evans · TDI, MIACV · 02 Sep 2022

Inferring Sensitive Attributes from Model Explanations
Vasisht Duddu, A. Boutet · MIACV, SILM · 21 Aug 2022

Careful What You Wish For: on the Extraction of Adversarially Trained Models
Kacem Khaled, Gabriela Nicolescu, F. Magalhães · MIACV, AAML · 21 Jul 2022

Protecting Global Properties of Datasets with Distribution Privacy Mechanisms
Michelle Chen, O. Ohrimenko · FedML · 18 Jul 2022

The Privacy Onion Effect: Memorization is Relative
Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr · PILM, MIACV · 21 Jun 2022

On the Role of Generalization in Transferability of Adversarial Examples
Yilin Wang, Farzan Farnia · AAML · 18 Jun 2022

DualCF: Efficient Model Extraction Attack from Counterfactual Explanations
Yongjie Wang, Hangwei Qian, Chunyan Miao · AAML · 13 May 2022

The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi · 06 May 2022

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini · MIACV · 31 Mar 2022

Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations
Zirui Peng, Shaofeng Li, Guoxing Chen, Cheng Zhang, Haojin Zhu, Minhui Xue · AAML, FedML · 17 Feb 2022

Quantifying Memorization Across Neural Language Models
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang · PILM · 15 Feb 2022

Defending against Reconstruction Attacks with Rényi Differential Privacy
Pierre Stock, I. Shilov, Ilya Mironov, Alexandre Sablayrolles · AAML, SILM, MIACV · 15 Feb 2022

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification
Yuxin Wen, Jonas Geiping, Liam H. Fowl, Micah Goldblum, Tom Goldstein · FedML · 01 Feb 2022

Understanding Why Generalized Reweighting Does Not Improve Over ERM
Runtian Zhai, Chen Dan, Zico Kolter, Pradeep Ravikumar · OOD · 28 Jan 2022

Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models
Shagufta Mehnaz, S. V. Dibbo, Ehsanul Kabir, Ninghui Li, E. Bertino · MIACV · 23 Jan 2022

Counterfactual Memorization in Neural Language Models
Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, Nicholas Carlini · 24 Dec 2021

Towards Understanding the Impact of Model Size on Differentially Private Classification
Yinchen Shen, Zhiguo Wang, Ruoyu Sun, Xiaojing Shen · 27 Nov 2021

A Fairness Analysis on Private Aggregation of Teacher Ensembles
Cuong Tran, M. H. Dinh, Kyle Beiter, Ferdinando Fioretto · 17 Sep 2021

A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk · 06 Sep 2021

LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis
Fan Wu, Yunhui Long, Ce Zhang, Yue Liu · AAML · 14 Aug 2021

Deduplicating Training Data Makes Language Models Better
Katherine Lee, Daphne Ippolito, A. Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini · SyDa · 14 Jul 2021

Accuracy, Interpretability, and Differential Privacy via Explainable Boosting
Harsha Nori, R. Caruana, Zhiqi Bu, J. Shen, Janardhan Kulkarni · 17 Jun 2021

CRFL: Certifiably Robust Federated Learning against Backdoor Attacks
Chulin Xie, Minghao Chen, Pin-Yu Chen, Yue Liu · FedML · 15 Jun 2021

Differentially Private Empirical Risk Minimization under the Fairness Lens
Cuong Tran, My H. Dinh, Ferdinando Fioretto · 04 Jun 2021

Counterfactual Explanations Can Be Manipulated
Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju, Sameer Singh · 04 Jun 2021

Fooling Partial Dependence via Data Poisoning
Hubert Baniecki, Wojciech Kretowicz, P. Biecek · AAML · 26 May 2021

Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs
Mohammad Malekzadeh, Anastasia Borovykh, Deniz Gündüz · MIACV · 25 May 2021

Exploiting Explanations for Model Inversion Attacks
Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim · MIACV · 26 Apr 2021

Membership Inference Attacks on Machine Learning: A Survey
Hongsheng Hu, Z. Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang · MIACV · 14 Mar 2021

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
Yugeng Liu, Rui Wen, Xinlei He, A. Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang · AAML · 04 Feb 2021

Property Inference From Poisoning
Melissa Chase, Esha Ghosh, Saeed Mahloujifar · MIACV · 26 Jan 2021

Exacerbating Algorithmic Bias through Fairness Attacks
Ninareh Mehrabi, Muhammad Naveed, Fred Morstatter, Aram Galstyan · AAML · 16 Dec 2020

Robustness Threats of Differential Privacy
Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets · AAML · 14 Dec 2020

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, …, Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel · MLAU, SILM · 14 Dec 2020

When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?
Gavin Brown, Mark Bun, Vitaly Feldman, Adam D. Smith, Kunal Talwar · 11 Dec 2020

FairOD: Fairness-aware Outlier Detection
Shubhranshu Shekhar, Neil Shah, Leman Akoglu · 05 Dec 2020

Empirical observation of negligible fairness-accuracy trade-offs in machine learning for public policy
Kit T. Rodolfa, Hemank Lamba, Rayid Ghani · 05 Dec 2020

On the Privacy Risks of Algorithmic Fairness
Hong Chang, Reza Shokri · FaML · 07 Nov 2020

Robustness May Be at Odds with Fairness: An Empirical Study on Class-wise Accuracy
Philipp Benz, Chaoning Zhang, Adil Karjauv, In So Kweon · AAML · 26 Oct 2020

RobustBench: a standardized adversarial robustness benchmark
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein · VLM · 19 Oct 2020

To be Robust or to be Fair: Towards Fairness in Adversarial Training
Han Xu, Xiaorui Liu, Yaxin Li, Anil K. Jain, Jiliang Tang · 13 Oct 2020

GECKO: Reconciling Privacy, Accuracy and Efficiency in Embedded Deep Learning
Vasisht Duddu, A. Boutet, Virat Shejwalkar · GNN · 02 Oct 2020

Model extraction from counterfactual explanations
Ulrich Aïvodji, Alexandre Bolot, Sébastien Gambs · MIACV, MLAU · 03 Sep 2020