SoK: Unintended Interactions among Machine Learning Defenses and Risks

7 December 2023
Vasisht Duddu, S. Szyller, Nadarajah Asokan
AAML · arXiv:2312.04542

Papers citing "SoK: Unintended Interactions among Machine Learning Defenses and Risks"

43 of 93 citing papers shown

What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
Vitaly Feldman, Chiyuan Zhang
TDI · 140 · 464 · 0 · 09 Aug 2020

Label-Only Membership Inference Attacks
Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot
MIACV, MIALM · 90 · 511 · 0 · 28 Jul 2020

Subpopulation Data Poisoning Attacks
Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea
AAML, SILM · 55 · 119 · 0 · 24 Jun 2020

Domain Generalization using Causal Matching
Divyat Mahajan, Shruti Tople, Amit Sharma
OOD · 87 · 335 · 0 · 12 Jun 2020

An Overview of Privacy in Machine Learning
Emiliano De Cristofaro
SILM · 59 · 84 · 0 · 18 May 2020

Inverting Gradients -- How easy is it to break privacy in federated learning?
Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller
FedML · 100 · 1,228 · 0 · 31 Mar 2020

On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
Sanghyun Hong, Varun Chandrasekaran, Yigitcan Kaya, Tudor Dumitras, Nicolas Papernot
AAML · 82 · 136 · 0 · 26 Feb 2020

Rethinking Bias-Variance Trade-off for Generalization of Neural Networks
Zitong Yang, Yaodong Yu, Chong You, Jacob Steinhardt, Yi-An Ma
67 · 185 · 0 · 26 Feb 2020

Understanding and Mitigating the Tradeoff Between Robustness and Accuracy
Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, Percy Liang
AAML · 84 · 228 · 0 · 25 Feb 2020

Estimating Training Data Influence by Tracing Gradient Descent
G. Pruthi, Frederick Liu, Mukund Sundararajan, Satyen Kale
TDI · 81 · 408 · 0 · 19 Feb 2020

Radioactive data: tracing through training
Alexandre Sablayrolles, Matthijs Douze, Cordelia Schmid, Hervé Jégou
74 · 75 · 0 · 03 Feb 2020

Privacy for All: Demystify Vulnerability Disparity of Differential Privacy against Membership Inference Attack
Bo Zhang, Ruotong Yu, Haipei Sun, Yanying Li, Jun Xu, Wendy Hui Wang
AAML · 45 · 13 · 0 · 24 Jan 2020

Characterizing the Decision Boundary of Deep Neural Networks
Hamid Karimi, Hanyu Wang, Jiliang Tang
46 · 67 · 0 · 24 Dec 2019

Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
Nils Lukas, Yuxuan Zhang, Florian Kerschbaum
MLAU, FedML, AAML · 64 · 145 · 0 · 02 Dec 2019

The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks
Yuheng Zhang, R. Jia, Hengzhi Pei, Wenxiao Wang, Yue Liu, D. Song
AAML · 111 · 419 · 0 · 17 Nov 2019

Thieves on Sesame Street! Model Extraction of BERT-based APIs
Kalpesh Krishna, Gaurav Singh Tomar, Ankur P. Parikh, Nicolas Papernot, Mohit Iyyer
MIACV, MLAU · 107 · 201 · 0 · 27 Oct 2019

Explanations can be manipulated and geometry is to blame
Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, M. Ackermann, K. Müller, Pan Kessel
AAML, FAtt · 81 · 331 · 0 · 19 Jun 2019

Differential Privacy Has Disparate Impact on Model Accuracy
Eugene Bagdasaryan, Vitaly Shmatikov
142 · 481 · 0 · 28 May 2019

Adversarial Examples Are Not Bugs, They Are Features
Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
SILM · 89 · 1,839 · 0 · 06 May 2019

HopSkipJumpAttack: A Query-Efficient Decision-Based Attack
Jianbo Chen, Michael I. Jordan, Martin J. Wainwright
AAML · 63 · 667 · 0 · 03 Apr 2019

Fooling Neural Network Interpretations via Adversarial Model Manipulation
Juyeon Heo, Sunghwan Joo, Taesup Moon
AAML, FAtt · 93 · 203 · 0 · 06 Feb 2019

Reconciling modern machine learning practice and the bias-variance trade-off
M. Belkin, Daniel J. Hsu, Siyuan Ma, Soumik Mandal
227 · 1,647 · 0 · 28 Dec 2018

Robustness May Be at Odds with Accuracy
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry
AAML · 102 · 1,778 · 0 · 30 May 2018

Exploiting Unintended Feature Leakage in Collaborative Learning
Luca Melis, Congzheng Song, Emiliano De Cristofaro, Vitaly Shmatikov
FedML · 145 · 1,474 · 0 · 10 May 2018

A Reductions Approach to Fair Classification
Alekh Agarwal, A. Beygelzimer, Miroslav Dudík, John Langford, Hanna M. Wallach
FaML · 224 · 1,100 · 0 · 06 Mar 2018

Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring
Yossi Adi, Carsten Baum, Moustapha Cissé, Benny Pinkas, Joseph Keshet
61 · 677 · 0 · 13 Feb 2018

Certified Robustness to Adversarial Examples with Differential Privacy
Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Suman Jana
SILM, AAML · 94 · 934 · 0 · 09 Feb 2018

A Survey Of Methods For Explaining Black Box Models
Riccardo Guidotti, A. Monreale, Salvatore Ruggieri, Franco Turini, D. Pedreschi, F. Giannotti
XAI · 124 · 3,957 · 0 · 06 Feb 2018

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
FAtt · 214 · 1,842 · 0 · 30 Nov 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD · 304 · 12,063 · 0 · 19 Jun 2017

A Closer Look at Memorization in Deep Networks
Devansh Arpit, Stanislaw Jastrzebski, Nicolas Ballas, David M. Krueger, Emmanuel Bengio, ..., Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Simon Lacoste-Julien
TDI · 122 · 1,818 · 0 · 16 Jun 2017

Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar, Peyton Greenside, A. Kundaje
FAtt · 198 · 3,871 · 0 · 10 Apr 2017

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
TDI · 203 · 2,885 · 0 · 14 Mar 2017

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD, FAtt · 188 · 5,986 · 0 · 04 Mar 2017

On the (Statistical) Detection of Adversarial Examples
Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, Patrick McDaniel
AAML · 76 · 713 · 0 · 21 Feb 2017

Understanding deep learning requires rethinking generalization
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
HAI · 339 · 4,626 · 0 · 10 Nov 2016

Membership Inference Attacks against Machine Learning Models
Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov
SLR, MIALM, MIACV · 249 · 4,122 · 0 · 18 Oct 2016

Equality of Opportunity in Supervised Learning
Moritz Hardt, Eric Price, Nathan Srebro
FaML · 222 · 4,307 · 0 · 07 Oct 2016

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
FAtt · 297 · 20,003 · 0 · 07 Oct 2016

Stealing Machine Learning Models via Prediction APIs
Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart
SILM, MLAU · 107 · 1,805 · 0 · 09 Sep 2016

Deep Learning with Differential Privacy
Martín Abadi, Andy Chu, Ian Goodfellow, H. B. McMahan, Ilya Mironov, Kunal Talwar, Li Zhang
FedML, SyDa · 203 · 6,121 · 0 · 01 Jul 2016

Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy
AAML, GAN · 277 · 19,049 · 0 · 20 Dec 2014

Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers
G. Ateniese, G. Felici, L. Mancini, A. Spognardi, Antonio Villani, Domenico Vitali
75 · 460 · 0 · 19 Jun 2013