Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses

18 December 2020
Micah Goldblum
Dimitris Tsipras
Chulin Xie
Xinyun Chen
Avi Schwarzschild
Dawn Song
Aleksander Madry
Bo Li
Tom Goldstein
    SILM

Papers citing "Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses"

48 / 148 papers shown
Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario
Pedro Miguel Sánchez Sánchez
Alberto Huertas Celdrán
Enrique Tomás Martínez Beltrán
Daniel Demeter
Gérome Bovet
Gregorio Martínez Pérez
Burkhard Stiller
AAML
FedML
27
6
0
20 Oct 2022
Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning
Yuxin Wen
Jonas Geiping
Liam H. Fowl
Hossein Souri
Ramalingam Chellappa
Micah Goldblum
Tom Goldstein
AAML
SILM
FedML
30
9
0
17 Oct 2022
Free Fine-tuning: A Plug-and-Play Watermarking Scheme for Deep Neural Networks
Run Wang
Jixing Ren
Boheng Li
Tianyi She
Wenhui Zhang
Liming Fang
Jing Chen
Chao Shen
Lina Wang
WIGM
32
16
0
14 Oct 2022
COLLIDER: A Robust Training Framework for Backdoor Data
H. M. Dolatabadi
S. Erfani
C. Leckie
AAML
17
7
0
13 Oct 2022
How to Sift Out a Clean Data Subset in the Presence of Data Poisoning?
Yi Zeng
Minzhou Pan
Himanshu Jahagirdar
Ming Jin
Lingjuan Lyu
R. Jia
AAML
39
21
0
12 Oct 2022
Label Flipping Data Poisoning Attack Against Wearable Human Activity Recognition System
A. Shahid
Ahmed Imteaj
Peter Y. Wu
Diane A. Igoche
Tauhidul Alam
AAML
11
9
0
17 Aug 2022
Lethal Dose Conjecture on Data Poisoning
Wenxiao Wang
Alexander Levine
S. Feizi
FedML
13
15
0
05 Aug 2022
Holistic Robust Data-Driven Decisions
Amine Bennouna
Bart P. G. Van Parys
Ryan Lucas
OOD
36
21
0
19 Jul 2022
Security and Safety Aspects of AI in Industry Applications
H. D. Doran
14
0
0
16 Jul 2022
Neurotoxin: Durable Backdoors in Federated Learning
Zhengming Zhang
Ashwinee Panda
Linyue Song
Yaoqing Yang
Michael W. Mahoney
Joseph E. Gonzalez
Kannan Ramchandran
Prateek Mittal
FedML
29
130
0
12 Jun 2022
Autoregressive Perturbations for Data Poisoning
Pedro Sandoval-Segura
Vasu Singla
Jonas Geiping
Micah Goldblum
Tom Goldstein
David Jacobs
AAML
25
40
0
08 Jun 2022
Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free
Tianlong Chen
Zhenyu (Allen) Zhang
Yihua Zhang
Shiyu Chang
Sijia Liu
Zhangyang Wang
AAML
46
25
0
24 May 2022
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
Antonio Emanuele Cinà
Kathrin Grosse
Ambra Demontis
Sebastiano Vascon
Werner Zellinger
Bernhard A. Moser
Alina Oprea
Battista Biggio
Marcello Pelillo
Fabio Roli
AAML
22
116
0
04 May 2022
Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu
Gautam Kamath
Yaoliang Yu
AAML
43
24
0
19 Apr 2022
Poisons that are learned faster are more effective
Pedro Sandoval-Segura
Vasu Singla
Liam H. Fowl
Jonas Geiping
Micah Goldblum
David Jacobs
Tom Goldstein
6
17
0
19 Apr 2022
Towards A Critical Evaluation of Robustness for Deep Learning Backdoor Countermeasures
Huming Qiu
Hua Ma
Zhi-Li Zhang
A. Abuadbba
Wei Kang
Anmin Fu
Yansong Gao
ELM
AAML
23
15
0
13 Apr 2022
Energy-Latency Attacks via Sponge Poisoning
Antonio Emanuele Cinà
Ambra Demontis
Battista Biggio
Fabio Roli
Marcello Pelillo
SILM
47
29
0
14 Mar 2022
Low-Rank Softmax Can Have Unargmaxable Classes in Theory but Rarely in Practice
Andreas Grivas
Nikolay Bogoychev
Adam Lopez
13
9
0
12 Mar 2022
On the Effectiveness of Adversarial Training against Backdoor Attacks
Yinghua Gao
Dongxian Wu
Jingfeng Zhang
Guanhao Gan
Shutao Xia
Gang Niu
Masashi Sugiyama
AAML
32
22
0
22 Feb 2022
Identifying Backdoor Attacks in Federated Learning via Anomaly Detection
Yuxi Mi
Yiheng Sun
Jihong Guan
Shuigeng Zhou
AAML
FedML
11
1
0
09 Feb 2022
Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation
Wenxiao Wang
Alexander Levine
S. Feizi
AAML
20
60
0
05 Feb 2022
Can Adversarial Training Be Manipulated By Non-Robust Features?
Lue Tao
Lei Feng
Hongxin Wei
Jinfeng Yi
Sheng-Jun Huang
Songcan Chen
AAML
81
16
0
31 Jan 2022
Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen
Muhammad Ali Babar
AAML
37
21
0
12 Jan 2022
On the Security & Privacy in Federated Learning
Gorka Abad
S. Picek
Víctor Julio Ramírez-Durán
A. Urbieta
44
11
0
10 Dec 2021
Availability Attacks Create Shortcuts
Da Yu
Huishuai Zhang
Wei Chen
Jian Yin
Tie-Yan Liu
AAML
31
57
0
01 Nov 2021
BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models
Kangjie Chen
Yuxian Meng
Xiaofei Sun
Shangwei Guo
Tianwei Zhang
Jiwei Li
Chun Fan
SILM
23
106
0
06 Oct 2021
Sample Efficient Detection and Classification of Adversarial Attacks via Self-Supervised Embeddings
Mazda Moayeri
S. Feizi
AAML
21
19
0
30 Aug 2021
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning
Virat Shejwalkar
Amir Houmansadr
Peter Kairouz
Daniel Ramage
AAML
34
213
0
23 Aug 2021
Evaluating Large Language Models Trained on Code
Mark Chen
Jerry Tworek
Heewoo Jun
Qiming Yuan
Henrique Pondé
...
Bob McGrew
Dario Amodei
Sam McCandlish
Ilya Sutskever
Wojciech Zaremba
ELM
ALM
78
5,055
0
07 Jul 2021
Adversarial Examples Make Strong Poisons
Liam H. Fowl
Micah Goldblum
Ping Yeh-Chiang
Jonas Geiping
Wojtek Czaja
Tom Goldstein
SILM
26
132
0
21 Jun 2021
Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri
Liam H. Fowl
Ramalingam Chellappa
Micah Goldblum
Tom Goldstein
SILM
31
123
0
16 Jun 2021
Turning Federated Learning Systems Into Covert Channels
Gabriele Costa
Fabio Pinelli
S. Soderi
Gabriele Tolomei
FedML
37
10
0
21 Apr 2021
SGBA: A Stealthy Scapegoat Backdoor Attack against Deep Neural Networks
Yingzhe He
Zhili Shen
Chang Xia
Jingyu Hua
Wei Tong
Sheng Zhong
AAML
8
6
0
02 Apr 2021
DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations
Eitan Borgnia
Jonas Geiping
Valeriia Cherepanova
Liam H. Fowl
Arjun Gupta
Amin Ghiasi
Furong Huang
Micah Goldblum
Tom Goldstein
32
46
0
02 Mar 2021
What Doesn't Kill You Makes You Robust(er): How to Adversarially Train against Data Poisoning
Jonas Geiping
Liam H. Fowl
Gowthami Somepalli
Micah Goldblum
Michael Moeller
Tom Goldstein
TDI
AAML
SILM
27
40
0
26 Feb 2021
Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release
Liam H. Fowl
Ping Yeh-Chiang
Micah Goldblum
Jonas Geiping
Arpit Bansal
W. Czaja
Tom Goldstein
16
43
0
16 Feb 2021
Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
Lue Tao
Lei Feng
Jinfeng Yi
Sheng-Jun Huang
Songcan Chen
AAML
34
71
0
09 Feb 2021
Unlearnable Examples: Making Personal Data Unexploitable
Hanxun Huang
Xingjun Ma
S. Erfani
James Bailey
Yisen Wang
MIACV
156
190
0
13 Jan 2021
With False Friends Like These, Who Can Notice Mistakes?
Lue Tao
Lei Feng
Jinfeng Yi
Songcan Chen
AAML
13
5
0
29 Dec 2020
Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff
Eitan Borgnia
Valeriia Cherepanova
Liam H. Fowl
Amin Ghiasi
Jonas Geiping
Micah Goldblum
Tom Goldstein
Arjun Gupta
AAML
6
127
0
18 Nov 2020
Concealed Data Poisoning Attacks on NLP Models
Eric Wallace
Tony Zhao
Shi Feng
Sameer Singh
SILM
19
18
0
23 Oct 2020
VenoMave: Targeted Poisoning Against Speech Recognition
H. Aghakhani
Lea Schonherr
Thorsten Eisenhofer
D. Kolossa
Thorsten Holz
Christopher Kruegel
Giovanni Vigna
AAML
8
17
0
21 Oct 2020
Backdoor Learning: A Survey
Yiming Li
Yong Jiang
Zhifeng Li
Shutao Xia
AAML
45
586
0
17 Jul 2020
Backdoors in Neural Models of Source Code
Goutham Ramakrishnan
Aws Albarghouthi
AAML
SILM
28
56
0
11 Jun 2020
Threats to Federated Learning: A Survey
Lingjuan Lyu
Han Yu
Qiang Yang
FedML
202
434
0
04 Mar 2020
Model-Reuse Attacks on Deep Learning Systems
Yujie Ji
Xinyang Zhang
S. Ji
Xiapu Luo
Ting Wang
SILM
AAML
134
186
0
02 Dec 2018
SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou
Florian Tramèr
Giancarlo Pellegrino
AAML
168
287
0
02 Dec 2018
Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji
Supriyo Chakraborty
Prateek Mittal
S. Calo
FedML
191
1,032
0
29 Nov 2018