Towards the Science of Security and Privacy in Machine Learning

11 November 2016
Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, Michael P. Wellman
AAML

Papers citing "Towards the Science of Security and Privacy in Machine Learning"

50 / 88 papers shown
VeriSplit: Secure and Practical Offloading of Machine Learning Inferences across IoT Devices
Han Zhang, Zifan Wang, Mihir Dhamankar, Matt Fredrikson, Yuvraj Agarwal
02 Jun 2024

Data Reconstruction Attacks and Defenses: A Systematic Evaluation
Sheng Liu, Zihan Wang, Yuxiao Chen, Qi Lei
AAML, MIACV
13 Feb 2024

A Survey of Graph Unlearning
Anwar Said, Tyler Derr, Mudassir Shabbir, W. Abbas, X. Koutsoukos
MU
23 Aug 2023

Runtime Stealthy Perception Attacks against DNN-based Adaptive Cruise Control Systems
Xugui Zhou, Anqi Chen, Maxfield Kouzel, Haotian Ren, Morgan McCarty, Cristina Nita-Rotaru, H. Alemzadeh
AAML
18 Jul 2023

Data Minimization at Inference Time
Cuong Tran, Ferdinando Fioretto
27 May 2023

Patchwork Learning: A Paradigm Towards Integrative Analysis across Diverse Biomedical Data Sources
Suraj Rajendran, Weishen Pan, M. Sabuncu, Yong Chen, Jiayu Zhou, Fei Wang
10 May 2023

Federated Learning of Shareable Bases for Personalization-Friendly Image Classification
Hong-You Chen, Shitian Zhao, Ruotong Wang, Xuhui Jia, Qi, Boqing Gong, Wei-Lun Chao, Li Zhang
FedML
16 Apr 2023

Personalized Privacy Auditing and Optimization at Test Time
Cuong Tran, Ferdinando Fioretto
31 Jan 2023

Backdoor Attacks Against Dataset Distillation
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang
DD
03 Jan 2023

Federated Learning for Healthcare Domain - Pipeline, Applications and Challenges
Madhura Joshi, Ankit Pal, Malaikannan Sankarasubbu
OOD, AI4CE, FedML
15 Nov 2022

Adversarial Robustness for Tabular Data through Cost and Utility Awareness
Klim Kireev, B. Kulynych, Carmela Troncoso
AAML
27 Aug 2022

Membership-Doctor: Comprehensive Assessment of Membership Inference Against Machine Learning Models
Xinlei He, Zheng Li, Weilin Xu, Cory Cornelius, Yang Zhang
MIACV
22 Aug 2022

Lifelong DP: Consistently Bounded Differential Privacy in Lifelong Machine Learning
Phung Lai, Han Hu, Nhathai Phan, Ruoming Jin, My T. Thai, An M. Chen
26 Jul 2022

Careful What You Wish For: on the Extraction of Adversarially Trained Models
Kacem Khaled, Gabriela Nicolescu, F. Magalhães
MIACV, AAML
21 Jul 2022

DIMBA: Discretely Masked Black-Box Attack in Single Object Tracking
Xiangyu Yin, Wenjie Ruan, J. Fieldsend
AAML
17 Jul 2022

Never trust, always verify : a roadmap for Trustworthy AI?
L. Tidjon, Foutse Khomh
23 Jun 2022

Architectural Backdoors in Neural Networks
Mikel Bober-Irizar, Ilia Shumailov, Yiren Zhao, Robert D. Mullins, Nicolas Papernot
AAML
15 Jun 2022

Do You Think You Can Hold Me? The Real Challenge of Problem-Space Evasion Attacks
Harel Berger, A. Dvir, Chen Hajaj, Rony Ronen
AAML
09 May 2022

Federated Learning with Noisy User Feedback
Rahul Sharma, Anil Ramakrishna, Ansel MacLaughlin, Anna Rumshisky, Jimit Majmudar, Clement Chung, Salman Avestimehr, Rahul Gupta
FedML
06 May 2022

Robustness Testing of Data and Knowledge Driven Anomaly Detection in Cyber-Physical Systems
Xugui Zhou, Maxfield Kouzel, H. Alemzadeh
OOD, AAML
20 Apr 2022

Efficient Attribute Unlearning: Towards Selective Removal of Input Attributes from Feature Representations
Tao Guo, Song Guo, Jiewei Zhang, Wenchao Xu, Junxiao Wang
MU
27 Feb 2022

A Tutorial on Adversarial Learning Attacks and Countermeasures
Cato Pauling, Michael Gimson, Muhammed Qaid, Ahmad Kida, Basel Halak
AAML
21 Feb 2022

Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen, Muhammad Ali Babar
AAML
12 Jan 2022

Adversarial Attacks Against Deep Generative Models on Data: A Survey
Hui Sun, Tianqing Zhu, Zhiqiu Zhang, Dawei Jin, Wanlei Zhou
AAML
01 Dec 2021

Bandwidth Utilization Side-Channel on ML Inference Accelerators
Sarbartha Banerjee, Shijia Wei, Prakash Ramrakhyani, Mohit Tiwari
14 Oct 2021

Label Noise in Adversarial Training: A Novel Perspective to Study Robust Overfitting
Chengyu Dong, Liyuan Liu, Jingbo Shang
NoLa, AAML
07 Oct 2021

Membership Inference Attacks Against Recommender Systems
Minxing Zhang, Z. Ren, Zihan Wang, Pengjie Ren, Zhumin Chen, Pengfei Hu, Yang Zhang
MIACV, AAML
16 Sep 2021

Federated Learning Versus Classical Machine Learning: A Convergence Comparison
Muhammad Asad, Ahmed Moustafa, Takayuki Ito
FedML
22 Jul 2021

Bad Characters: Imperceptible NLP Attacks
Nicholas Boucher, Ilia Shumailov, Ross J. Anderson, Nicolas Papernot
AAML, SILM
18 Jun 2021

Membership Inference Attacks on Machine Learning: A Survey
Hongsheng Hu, Z. Salcic, Lichao Sun, Gillian Dobbie, Philip S. Yu, Xuyun Zhang
MIACV
14 Mar 2021

Proof-of-Learning: Definitions and Practice
Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot
AAML
09 Mar 2021

Guided Interpolation for Adversarial Training
Chen Chen, Jingfeng Zhang, Xilie Xu, Tianlei Hu, Gang Niu, Gang Chen, Masashi Sugiyama
AAML
15 Feb 2021

Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS
Felix O. Olowononi, D. Rawat, Chunmei Liu
14 Feb 2021

Quantifying and Mitigating Privacy Risks of Contrastive Learning
Xinlei He, Yang Zhang
08 Feb 2021

Understanding the Interaction of Adversarial Training with Noisy Labels
Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli, Masashi Sugiyama
AAML
06 Feb 2021

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
Yugeng Liu, Rui Wen, Xinlei He, A. Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang
AAML
04 Feb 2021

Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
Muhammad Shafique, Mahum Naseer, T. Theocharides, C. Kyrkou, O. Mutlu, Lois Orosa, Jungwook Choi
OOD
04 Jan 2021

A Survey of Machine Learning Techniques in Adversarial Image Forensics
Ehsan Nowroozi, Ali Dehghantanha, R. Parizi, K. Choo
AAML
19 Oct 2020

Geometry-aware Instance-reweighted Adversarial Training
Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli
AAML
05 Oct 2020

Membership Leakage in Label-Only Exposures
Zheng Li, Yang Zhang
30 Jul 2020

SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems
H. Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, Patrick Traynor
AAML
13 Jul 2020

Reducing Risk of Model Inversion Using Privacy-Guided Training
Abigail Goldsteen, Gilad Ezov, Ariel Farkash
29 Jun 2020

Neural Anisotropy Directions
Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard
17 Jun 2020

Sponge Examples: Energy-Latency Attacks on Neural Networks
Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert D. Mullins, Ross J. Anderson
SILM
05 Jun 2020

When Machine Unlearning Jeopardizes Privacy
Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang
MIACV
05 May 2020

An Overview of Federated Deep Learning Privacy Attacks and Defensive Strategies
David Enthoven, Zaid Al-Ars
FedML
01 Apr 2020

Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong
OOD, AAML
02 Mar 2020

The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization
Yifei Min, Lin Chen, Amin Karbasi
AAML
25 Feb 2020

Adversarial Machine Learning -- Industry Perspectives
Ramnath Kumar, Magnus Nyström, J. Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann, Sharon Xia
AAML, SILM
04 Feb 2020

MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius
Runtian Zhai, Chen Dan, Di He, Huan Zhang, Boqing Gong, Pradeep Ravikumar, Cho-Jui Hsieh, Liwei Wang
OOD, AAML
08 Jan 2020